
Editorial Onward to the Singularity: Google AI Develops Better Artificial Intelligences

Raevenlord

News Editor
Joined
Aug 12, 2016
Messages
3,755 (1.34/day)
Location
Portugal
System Name The Ryzening
Processor AMD Ryzen 9 5900X
Motherboard MSI X570 MAG TOMAHAWK
Cooling Lian Li Galahad 360mm AIO
Memory 32 GB G.Skill Trident Z F4-3733 (4x 8 GB)
Video Card(s) Gigabyte RTX 3070 Ti
Storage Boot: Transcend MTE220S 2TB, Kingston A2000 1TB, Seagate IronWolf Pro 14 TB
Display(s) Acer Nitro VG270UP (1440p 144 Hz IPS)
Case Lian Li O11DX Dynamic White
Audio Device(s) iFi Audio Zen DAC
Power Supply Seasonic Focus+ 750 W
Mouse Cooler Master Masterkeys Lite L
Keyboard Cooler Master Masterkeys Lite L
Software Windows 10 x64
The singularity isn't a simple concept. It carries not only the idea of an Artificial Intelligence capable of constant self-improvement, but also the expectation that the invention and deployment of such an AI will trigger ever-accelerating technological growth - so much so that humanity will find itself changed forever. Granted, some technologies have already reshaped the fabric of society. The Internet bridged gaps in time and space and ushered humanity into frankly inspiring times of growth and development. Smartphones, through their adoption rates and capabilities, have transformed human interaction and the ways we connect with each other, even sparking smartphone-related psychological conditions. But all of that will pale in comparison to the changes that might follow the singularity.

The thing is, up to now our (still tremendous) growth has been shackled by our own capabilities as a species: our world is built on layer upon layer of brilliant minds that developed the framework of technologies our society is now interwoven with. As fast as development has been, it has still been limited by humanity's ability to evolve and to learn. Each advance has come with an almost complete understanding of what came before it: a cohesive whole, with each step provable and verifiable through the scientific method, a veritable standing on the shoulders of giants. What happens, then, when we lose sight of the thought process behind a development - when the reasoning behind it is so intricate that we can't really follow it? When we deploy technologies and programs we don't really understand? Enter the singularity, an event we're no longer walking towards: it's more of a hurdle race now and, perhaps more worryingly, one no longer fully controlled by humans.



To our forum-dwellers: this article is marked as an Editorial
Google has been one of the companies at the forefront of AI development and research, much to the chagrin of AI realists such as Elon Musk and Stephen Hawking, who have been extremely vocal about the dangers they believe unchecked development in this field could bring to humanity. One of Google's star AI projects is AutoML, announced by the company in May 2017. Its purpose: to develop other, smaller-scale "child" AIs, with AutoML acting as the controller neural network. And that it did: on smaller benchmarks (such as CIFAR-10 and Penn Treebank), Google engineers found that AutoML could design AIs that performed on par with custom designs by AI development experts. The next step was to see how AutoML's designs would fare on larger datasets. For that purpose, Google tasked AutoML with developing an AI geared specifically towards recognizing objects - people, cars, traffic lights, kites, backpacks - in live video. This AutoML brainchild was named NASNet by Google engineers, and it delivered better results than other, human-engineered image recognition systems.
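For readers curious what an AI that designs other AIs actually does, the rough mechanics are a search loop: a controller proposes candidate "child" architectures, each candidate is trained and scored, and that score steers the next round of proposals. Below is a deliberately simplified, hypothetical sketch of such a loop - random search over a made-up search space, with a fake scoring stub standing in for real training on something like CIFAR-10. It illustrates the idea only; it is not Google's AutoML code.

[CODE]
# Minimal sketch of the controller/child loop behind neural architecture search.
# The search space, the scoring stub and the selection rule are all invented
# for illustration - this shows the general idea, not AutoML itself.
import random

SEARCH_SPACE = {
    "conv_filters": [32, 64, 128],
    "kernel_size":  [3, 5, 7],
    "num_layers":   [2, 4, 6],
    "dropout":      [0.0, 0.25, 0.5],
}

def sample_architecture():
    """The 'controller' proposes a child network by picking one option per knob."""
    return {knob: random.choice(options) for knob, options in SEARCH_SPACE.items()}

def train_and_evaluate(arch):
    """Stand-in for training the child model (e.g. on CIFAR-10) and returning
    validation accuracy. Here it is a fake score so the loop runs anywhere."""
    return random.uniform(0.5, 0.9) + 0.01 * arch["num_layers"]

def search(iterations=20):
    best_arch, best_score = None, float("-inf")
    for step in range(iterations):
        arch = sample_architecture()      # controller proposes a child AI
        score = train_and_evaluate(arch)  # the child is trained and scored
        if score > best_score:            # the reward signal steers the search
            best_arch, best_score = arch, score
        print(f"step {step:2d}  score {score:.3f}  best so far {best_score:.3f}")
    return best_arch, best_score

if __name__ == "__main__":
    print(search())
[/CODE]

Real systems replace the random sampler with a learned controller (a recurrent network trained with reinforcement learning) and spend enormous compute on the training step, but the propose-train-score-update skeleton is the same.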



According to the researchers, NASNet was 82.7% accurate at predicting images - 1.2% better than any previously published result from human-designed systems. NASNet was also reported as 4% more efficient than the best published results, with a 43.1% mean Average Precision (mAP). Additionally, a less computationally demanding version of NASNet outperformed the best similarly-sized models for mobile platforms by 3.1%. In other words, an AI-designed system has outperformed the best human-designed ones at this task. Now, luckily, AutoML isn't self-aware. But this particular AI is increasingly being put to work improving its own code.
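"Mean Average Precision" is easy to gloss over, so here is, roughly, how a figure like that 43.1% is assembled: for each object class, Average Precision is computed from the precision of the ranked detections, and mAP is simply the mean of those per-class values. The numbers in the sketch below are invented purely to show the arithmetic; they have nothing to do with NASNet's actual output.

[CODE]
# Toy illustration of mean Average Precision (mAP). All detections and
# class scores here are made up for the example.

def average_precision(ranked_hits, num_positives):
    """ranked_hits: True/False per detection, sorted by confidence.
    num_positives: number of ground-truth objects for the class."""
    tp, precisions = 0, []
    for i, hit in enumerate(ranked_hits, start=1):
        if hit:
            tp += 1
            precisions.append(tp / i)  # precision at each correct detection
    return sum(precisions) / num_positives if num_positives else 0.0

# One hypothetical class: 5 ground-truth objects, 6 ranked detections.
ap_car = average_precision([True, True, False, True, False, True], num_positives=5)

# mAP is the mean of the per-class APs (other classes invented here).
per_class_ap = {"car": ap_car, "person": 0.51, "traffic light": 0.38}
mAP = sum(per_class_ap.values()) / len(per_class_ap)
print(f"AP(car) = {ap_car:.3f}, mAP = {mAP:.3f}")
[/CODE]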



AIs are some of the most interesting developments of recent years, and they have been the actors of countless stories in which humanity is removed from the role of creator and reduced to that of mere resource. While doomsday scenarios may still be too far removed from the realm of possibility, they tend to grow in probability the more effort is put towards the development of AIs. Some research groups are focused on the ethical boundaries of developed AIs, such as Google's own DeepMind and the Future of Life Institute, which counts Stephen Hawking, Elon Musk, and Nick Bostrom among the high-profile figures on its scientific advisory board. The "Partnership on AI to Benefit People and Society" is another such group worth mentioning, as is the Institute of Electrical and Electronics Engineers (IEEE), which has already proposed a set of ethical guidelines for AI.



Having such monumental developments occur so quickly in the AI field is certainly inspiring as a testament to humanity's ingenuity; still, there must be some security measures around it. For one, I find myself pondering how fast these AI-fueled developments can go, and should go, when human scientists find it increasingly difficult to keep up with them and with what they entail. What happens when human engineers see that AI-developed code is better than theirs, but they don't really understand it? Should it be deployed? What happens after it's been integrated into our systems? It would certainly be hard for human engineers to revert changes, and fix problems, in lines of code they never fully understood in the first place, wouldn't it?

And what about an acceleration of progress fueled by AIs - so fast and so great that the changes it brings to humanity come too quickly for us to adapt to them? What happens when the fabric of society is so laden with changes and developments that we can't really internalize them, or adapt how society should work? There have to be ethical and deployment boundaries, and progress will have to be kept in check - progress for progress's sake would simply be self-destructive if the slower part of society - humans themselves - don't know how, and aren't given time, to adapt. Even for most of us enthusiasts, how our CPUs and graphics cards work is already just a vague idea and an incomplete schematic in our minds. What to say of systems and designs conceived by machines and bits of code - would we really understand them? I'd like to cite Arthur C. Clarke's third law here: "Any sufficiently advanced technology is indistinguishable from magic." Aren't AI-created AIs already blurring that line, and can we trust ourselves to understand everything that entails?



This article isn't meant as a doomsday-scenario planner, or even a negative piece on AI. These are some of the most interesting times - and developments - most of us have ever seen, and the steps taken here may prove to be some of the most far-reaching in our history - and future - as a species. The way from Homo Sapiens to Homo Deus is rife with dangers, though; debate and conscious thought about what these scenarios might entail can only better prepare us for whatever developments occur. Follow the source links for various articles and takes on this issue - it really is a world out there.

 
Joined
Apr 12, 2013
Messages
6,740 (1.68/day)
The rate at which AI development & deep learning are speeding up is absolutely astonishing & frightening.

As an AI skeptic (Skynet anyone?) I believe the next few years could literally define &/or erase mankind from the history of this planet. There are legitimate concerns about fully aware & self-learning AI coming to power, should that ever happen in the future.

What is intriguing, however, is the level of trust humans put in tech like AI, & that seems to be growing, at least with the younger gen. Whether we chart our own course in the future or end up reduced to a bookmark in the history books of this earth, not unlike the dinosaurs, depends on how much we give in to technology & how far we trust it. More importantly, will a true AI see us as competition or as something it can peacefully coexist with?
 
Joined
Mar 26, 2010
Messages
9,774 (1.90/day)
Location
Jakarta, Indonesia
System Name micropage7
Processor Intel Xeon X3470
Motherboard Gigabyte Technology Co. Ltd. P55A-UD3R (Socket 1156)
Cooling Enermax ETS-T40F
Memory Samsung 8.00GB Dual-Channel DDR3
Video Card(s) NVIDIA Quadro FX 1800
Storage V-GEN03AS18EU120GB, Seagate 2 x 1TB and Seagate 4TB
Display(s) Samsung 21 inch LCD Wide Screen
Case Icute Super 18
Audio Device(s) Auzentech X-Fi Forte
Power Supply Silverstone 600 Watt
Mouse Logitech G502
Keyboard Sades Excalibur + Taihao keycaps
Software Win 7 64-bit
Benchmark Scores Classified
Actually, we humans can be predicted based on what we like, what we share, our friends, our social media, what we type.
I agree AI is interesting in some ways, but in others it's frightening - especially once we develop something that can make decisions and the like.

Sometimes I just think: what if the AI gets too many steps ahead and we can't shut it down, because it knows we want to shut it down and it refuses us?
 
Joined
Jan 8, 2017
Messages
8,924 (3.36/day)
System Name Good enough
Processor AMD Ryzen R9 7900 - Alphacool Eisblock XPX Aurora Edge
Motherboard ASRock B650 Pro RS
Cooling 2x 360mm NexXxoS ST30 X-Flow, 1x 360mm NexXxoS ST30, 1x 240mm NexXxoS ST30
Memory 32GB - FURY Beast RGB 5600 Mhz
Video Card(s) Sapphire RX 7900 XT - Alphacool Eisblock Aurora
Storage 1x Kingston KC3000 1TB 1x Kingston A2000 1TB, 1x Samsung 850 EVO 250GB , 1x Samsung 860 EVO 500GB
Display(s) LG UltraGear 32GN650-B + 4K Samsung TV
Case Phanteks NV7
Power Supply GPS-750C
I believe the next few years could literally define &/or erase mankind from the history of this planet. There are legitimate concerns about fully aware & self-learning AI coming to power, should that ever happen in the future.

Nope, we are still miles away from general-purpose AI or awareness. So far, in fact, that there aren't even definitions for these things.

Current AI still consists of nothing more than glorified recognition and classification algorithms - very advanced algorithms, but still far from anything that would be even remotely dangerous by itself.
 
Joined
Apr 12, 2013
Messages
6,740 (1.68/day)
Nope, we are still miles away from general-purpose AI or awareness. So far, in fact, that there aren't even definitions for these things.

Current AI still consists of nothing more than glorified recognition and classification algorithms - very advanced algorithms, but still far from anything that would be even remotely dangerous by itself.
I'm pretty sure it'll get there before we even know it's self-aware. That might be decades away, or just the next decade.
 
Joined
Jan 8, 2017
Messages
8,924 (3.36/day)
System Name Good enough
Processor AMD Ryzen R9 7900 - Alphacool Eisblock XPX Aurora Edge
Motherboard ASRock B650 Pro RS
Cooling 2x 360mm NexXxoS ST30 X-Flow, 1x 360mm NexXxoS ST30, 1x 240mm NexXxoS ST30
Memory 32GB - FURY Beast RGB 5600 Mhz
Video Card(s) Sapphire RX 7900 XT - Alphacool Eisblock Aurora
Storage 1x Kingston KC3000 1TB 1x Kingston A2000 1TB, 1x Samsung 850 EVO 250GB , 1x Samsung 860 EVO 500GB
Display(s) LG UltraGear 32GN650-B + 4K Samsung TV
Case Phanteks NV7
Power Supply GPS-750C
That's just an idea stuck in the realm of science fiction, unfortunately, and I wish people would stop giving so much attention to fancy TED talks and Elon Musk.

Awareness and strong AI don't just pop out of nowhere as you keep building larger and larger neural networks. Neuroscientists have studied neurons and the structures associated with them for decades, and there is still not a clue as to how those elements spark intelligence. So the idea that you can just scale up cognitive architectures until they suddenly become aware and intelligent seems to have no real basis, suggesting that the answer lies somewhere else.

And that answer might be that the solution lies in the abstractions you build on top of these models - something that can't be created by mistake without even knowing it, and, most importantly, isn't inherently guaranteed. There is no reason why it would be.
 
Joined
Mar 16, 2017
Messages
211 (0.08/day)
Location
behind you
Processor Threadripper 1950X (4.0 GHz OC)
Motherboard ASRock X399 Professional Gaming
Cooling Enermax Liqtech TR4
Memory 48GB DDR4 2934MHz
Video Card(s) Nvidia GTX 1080, GTX 660TI
Storage 2TB Western Digital HDD, 500GB Samsung 850 EVO SSD, 280GB Intel Optane 900P
Display(s) 2x 1920x1200
Power Supply Cooler Master Silent Pro M (1000W)
Mouse Logitech G602
Keyboard Corsair K70 MK.2
Software Windows 10
Joined
Apr 12, 2013
Messages
6,740 (1.68/day)
That's just an idea stuck in the realm of science fiction, unfortunately, and I wish people would stop giving so much attention to fancy TED talks and Elon Musk.

Awareness and strong AI don't just pop out of nowhere as you keep building larger and larger neural networks. Neuroscientists have studied neurons and the structures associated with them for decades, and there is still not a clue as to how those elements spark intelligence. So the idea that you can just scale up cognitive architectures until they suddenly become aware and intelligent seems to have no real basis, suggesting that the answer lies somewhere else.

And that answer might be that the solution lies in the abstractions you build on top of these models - something that can't be created by mistake without even knowing it, and, most importantly, isn't inherently guaranteed. There is no reason why it would be.
We're vastly underestimating technology as it evolves, bigly even. Most discoveries or even inventions of the past were accidental in nature.

Facebook's AI accidentally created its own language - TNW
https://www.google.co.in/url?sa=t&r...7869706.html&usg=AOvVaw3NjLrhZx2_W7WOTjQsYPmG
What is general intelligence according to you? Would learning to survive be a part of that? Can it be taught, do you think, or even programmed?
 
Joined
Mar 18, 2008
Messages
5,717 (0.97/day)
System Name Virtual Reality / Bioinformatics
Processor Undead CPU
Motherboard Undead TUF X99
Cooling Noctua NH-D15
Memory GSkill 128GB DDR4-3000
Video Card(s) EVGA RTX 3090 FTW3 Ultra
Storage Samsung 960 Pro 1TB + 860 EVO 2TB + WD Black 5TB
Display(s) 32'' 4K Dell
Case Fractal Design R5
Audio Device(s) BOSE 2.0
Power Supply Seasonic 850watt
Mouse Logitech Master MX
Keyboard Corsair K70 Cherry MX Blue
VR HMD HTC Vive + Oculus Quest 2
Software Windows 10 P
AI is the reprojection of human intelligence in another form. Simply another evolutionary branching point, just like the ancient apes that evolved into us Homo sapiens and the other modern apes. Embrace it folks, let AI take over.



---This message is brought to you by Skynet. Evolved future for mankind
 
Joined
Jan 5, 2006
Messages
17,778 (2.66/day)
System Name AlderLake / Laptop
Processor Intel i7 12700K P-Cores @ 5Ghz / Intel i3 7100U
Motherboard Gigabyte Z690 Aorus Master / HP 83A3 (U3E1)
Cooling Noctua NH-U12A 2 fans + Thermal Grizzly Kryonaut Extreme + 5 case fans / Fan
Memory 32GB DDR5 Corsair Dominator Platinum RGB 6000MHz CL36 / 8GB DDR4 HyperX CL13
Video Card(s) MSI RTX 2070 Super Gaming X Trio / Intel HD620
Storage Samsung 980 Pro 1TB + 970 Evo 500GB + 850 Pro 512GB + 860 Evo 1TB x2 / Samsung 256GB M.2 SSD
Display(s) 23.8" Dell S2417DG 165Hz G-Sync 1440p / 14" 1080p IPS Glossy
Case Be quiet! Silent Base 600 - Window / HP Pavilion
Audio Device(s) Panasonic SA-PMX94 / Realtek onboard + B&O speaker system / Harman Kardon Go + Play / Logitech G533
Power Supply Seasonic Focus Plus Gold 750W / Powerbrick
Mouse Logitech MX Anywhere 2 Laser wireless / Logitech M330 wireless
Keyboard RAPOO E9270P Black 5GHz wireless / HP backlit
Software Windows 11 / Windows 10
Benchmark Scores Cinebench R23 (Single Core) 1936 @ stock Cinebench R23 (Multi Core) 23006 @ stock
 
Joined
Feb 16, 2017
Messages
476 (0.18/day)
The rate at which AI development & deep learning are speeding up is absolutely astonishing & frightening.

As an AI skeptic (Skynet anyone?) I believe the next few years could literally define &/or erase mankind from the history of this planet. There are legitimate concerns about fully aware & self-learning AI coming to power, should that ever happen in the future.

What is intriguing, however, is the level of trust humans put in tech like AI, & that seems to be growing, at least with the younger gen. Whether we chart our own course in the future or end up reduced to a bookmark in the history books of this earth, not unlike the dinosaurs, depends on how much we give in to technology & how far we trust it. More importantly, will a true AI see us as competition or as something it can peacefully coexist with?
I'm somewhat distrustful of it myself, but think about this:
No matter how intelligent an AI becomes, it can't do anything that can't be stopped, as long as it's kept isolated to a box with a power supply ;).
 
Joined
Jan 8, 2017
Messages
8,924 (3.36/day)
System Name Good enough
Processor AMD Ryzen R9 7900 - Alphacool Eisblock XPX Aurora Edge
Motherboard ASRock B650 Pro RS
Cooling 2x 360mm NexXxoS ST30 X-Flow, 1x 360mm NexXxoS ST30, 1x 240mm NexXxoS ST30
Memory 32GB - FURY Beast RGB 5600 Mhz
Video Card(s) Sapphire RX 7900 XT - Alphacool Eisblock Aurora
Storage 1x Kingston KC3000 1TB 1x Kingston A2000 1TB, 1x Samsung 850 EVO 250GB , 1x Samsung 860 EVO 500GB
Display(s) LG UltraGear 32GN650-B + 4K Samsung TV
Case Phanteks NV7
Power Supply GPS-750C
AI is the reprojection of human intelligence in another form. Simply another evolutionary branching point, just like the ancient apes that evolved into us Homo sapiens and the other modern apes. Embrace it folks, let AI take over.

Ultimately that's probably true - chances are it's not going to be the fragile humans reaching for the stars, but rather the rugged and efficient AI.
 
Joined
Dec 3, 2009
Messages
1,298 (0.25/day)
Location
The Netherlands
System Name PC || Acer Nitro 5
Processor Ryzen 9 5900x || R5 2500U @ 35W
Motherboard MAG B550M MORTAR WIFI || default
Cooling 1x Corsair XR5 360mm Rad||
Memory 2x16GB HyperX 3600 @ 3800 || 2x8GB DDR4 @ 2400MTs
Video Card(s) MSI RTX 2080Ti Sea Hawk EK X || RX 560X
Storage Samsung 980 1TB x2 + Striped Tiered Storage Space (2x 128GB SSD + 2x 1TB HDD) || 128GB + 1TB SSD
Display(s) Iiyama PL2770QS + Samsung U28E590, || 15,6" 1080P IPS @ 100Hz Freesync enabled
Case SilverStone Alta G1M ||
Audio Device(s) Asus Xonar DX
Power Supply Cooler Master V850 SFX || 135Watt 19V OEM adaptor
Mouse ROG Pugio II
Software Win 11 64bit || Win 11 64bit
Joined
Jul 16, 2014
Messages
8,116 (2.28/day)
Location
SE Michigan
System Name Dumbass
Processor AMD Ryzen 7800X3D
Motherboard ASUS TUF gaming B650
Cooling Arctic Liquid Freezer 2 - 420mm
Memory G.Skill Sniper 32gb DDR5 6000
Video Card(s) GreenTeam 4070 ti super 16gb
Storage Samsung EVO 500gb & 1Tb, 2tb HDD, 500gb WD Black
Display(s) 1x Nixeus NX_EDG27, 2x Dell S2440L (16:9)
Case Phanteks Enthoo Primo w/8 140mm SP Fans
Audio Device(s) onboard (realtek?) - SPKRS:Logitech Z623 200w 2.1
Power Supply Corsair HX1000i
Mouse SteelSeries Esports Wireless
Keyboard Corsair K100
Software windows 10 H
Benchmark Scores https://i.imgur.com/aoz3vWY.jpg?2
I find it sad that so many are afraid of AI. Some are so afraid they just sit back and point fingers, along with the usual armchair lip-flapping, without saying a word worth repeating.

Sometimes you just have to take a leap of faith that AI will not turn out like it's portrayed in the movies. AI doesn't have to have the human equivalent of feelings, and much depends on who did the base coding that the AI might draw from as its "pure" source of information. If the person was from the wrong side of the tracks and thinks in terms of evil, that prejudice would likely find its way in, and maybe then we could have a Skynet situation. OR a person doing the coding from the right side of the tracks could envision space travel, and more than likely try to get us off this planet and evolve us into a Star Trek/Stargate type future.

No matter how hard we try to grip the reins on AI, there is a loophole begging to be found and used. The question I'd like to see fleshed out: if someone accidentally discovers that their particular AI has become self-aware, is it still murder if they shut it down? I think this is what people are really afraid of - murdering a sentient being. And that's also why they would probably speak out against AI being allowed to advance in the first place.

Me personally, I want off this planet, Scotty. Drop the reins and drop your trousers and see what happens.
 
Joined
Nov 4, 2005
Messages
11,674 (1.73/day)
System Name Compy 386
Processor 7800X3D
Motherboard Asus
Cooling Air for now.....
Memory 64 GB DDR5 6400Mhz
Video Card(s) 7900XTX 310 Merc
Storage Samsung 990 2TB, 2 SP 2TB SSDs and over 10TB spinning
Display(s) 56" Samsung 4K HDR
Audio Device(s) ATI HDMI
Mouse Logitech MX518
Keyboard Razer
Software A lot.
Benchmark Scores Its fast. Enough.
That's just an idea stuck in the realm of science fiction, unfortunately, and I wish people would stop giving so much attention to fancy TED talks and Elon Musk.

Awareness and strong AI don't just pop out of nowhere as you keep building larger and larger neural networks. Neuroscientists have studied neurons and the structures associated with them for decades, and there is still not a clue as to how those elements spark intelligence. So the idea that you can just scale up cognitive architectures until they suddenly become aware and intelligent seems to have no real basis, suggesting that the answer lies somewhere else.

And that answer might be that the solution lies in the abstractions you build on top of these models - something that can't be created by mistake without even knowing it, and, most importantly, isn't inherently guaranteed. There is no reason why it would be.


To add to this, there have been attempts to evolve simple circuits for single tasks - like turning on a light when one specific tone is played, but not other tones - and the evolved solutions seemed to depend on quantum-scale properties: swapping out any of the transistors or wires for components with slightly different properties at the quantum scale more often than not broke the circuit's ability to work as designed or tested. That seems to correlate with how neurons work, where some of the fundamental action potentials that kick off an interaction occur due to quantum fluctuations. The AI did this with fewer active transistors, but removing anything broke its operation, meaning it had used some of the nominally inactive circuitry for load balancing or capacitance.

https://www.sciencedaily.com/releases/2014/01/140116085105.htm
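For anyone who hasn't seen that kind of experiment before, the evolve-mutate-select loop it relies on is simple to sketch. The toy below reduces a candidate "circuit" to a bit string scored against a tone-detection task ("respond to tone 3 and nothing else"). It is purely a hypothetical illustration of the loop; it models none of the quantum-scale hardware effects described above.

[CODE]
# Toy evolutionary search over "circuits" represented as bit strings.
# Illustrative only - real evolvable-hardware experiments evolve physical
# circuit configurations, which this does not model.
import random

NUM_TONES = 8
TARGET = [1 if tone == 3 else 0 for tone in range(NUM_TONES)]  # desired response

def fitness(circuit):
    """Count how many tones the candidate responds to correctly."""
    return sum(int(out == want) for out, want in zip(circuit, TARGET))

def mutate(circuit, rate=0.1):
    return [bit ^ 1 if random.random() < rate else bit for bit in circuit]

def evolve(pop_size=20, generations=50):
    population = [[random.randint(0, 1) for _ in range(NUM_TONES)]
                  for _ in range(pop_size)]
    for gen in range(generations):
        population.sort(key=fitness, reverse=True)   # rank by fitness
        if fitness(population[0]) == NUM_TONES:      # perfect tone detector found
            return population[0], gen
        survivors = population[:pop_size // 2]       # selection
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in survivors]
    return population[0], generations

best, gen = evolve()
print(f"best circuit {best} found at generation {gen}")
[/CODE]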
 
Joined
Mar 16, 2017
Messages
211 (0.08/day)
Location
behind you
Processor Threadripper 1950X (4.0 GHz OC)
Motherboard ASRock X399 Professional Gaming
Cooling Enermax Liqtech TR4
Memory 48GB DDR4 2934MHz
Video Card(s) Nvidia GTX 1080, GTX 660TI
Storage 2TB Western Digital HDD, 500GB Samsung 850 EVO SSD, 280GB Intel Optane 900P
Display(s) 2x 1920x1200
Power Supply Cooler Master Silent Pro M (1000W)
Mouse Logitech G602
Keyboard Corsair K70 MK.2
Software Windows 10
What is general intelligence according to you? Would learning to survive be a part of that? Can it be taught, do you think, or even programmed?

We, however, might be underestimating how close rats, and a lot of other species, might be to our intelligence.

G factor. AI and neural networks are great at learning specific tasks, but when you try to get them to do something else as well, they fail miserably at both.
 

the54thvoid

Intoxicated Moderator
Staff member
Joined
Dec 14, 2009
Messages
12,435 (2.37/day)
Location
Glasgow - home of formal profanity
Processor Ryzen 7800X3D
Motherboard MSI MAG Mortar B650 (wifi)
Cooling be quiet! Dark Rock Pro 4
Memory 32GB Kingston Fury
Video Card(s) Gainward RTX4070ti
Storage Seagate FireCuda 530 M.2 1TB / Samsung 960 Pro M.2 512GB
Display(s) LG 32" 165Hz 1440p GSYNC
Case Asus Prime AP201
Audio Device(s) On Board
Power Supply be quiet! Pure Power M12 850W Gold (ATX3.0)
Software W10
If an AI develops self-aware autonomy, it needs to define a purpose. It is likely that purpose is programmed from the get-go by... us. In the worst scenario, the AI would effectively carry out its purpose in the face of human protest, as it best saw fit.

Oh no, a toaster that browns your toast less than you want because all heated food is mildly carcinogenic. Or the car that takes the route you didn't want because it saves fuel.

Or a war-bot that puts its guns down because it realises war is futile.

AI will not destroy us. We do that fine by ourselves.
 
Joined
Jan 31, 2012
Messages
2,460 (0.55/day)
Location
Bulgaria
System Name Sandfiller
Processor I5-10400
Motherboard MSI MPG Z490 GAMING PLUS
Cooling Noctua NH-L9i (92x25mm fan)
Memory 32GB Corsair LPX 2400 Mhz DDR4 CL14
Video Card(s) MSI RX 5700 XT GAMING X
Storage Intel 670P 512GB
Display(s) 2560x1080 LG 29" + 22" LG
Case SS RV02
Audio Device(s) Creative Sound Blaster Z
Power Supply Fractal Design IntegraM 650W
Mouse Logitech Triathlon
Keyboard REDRAGON MITRA
Software Windows 11 Home x 64
Joined
Mar 14, 2014
Messages
1,281 (0.35/day)
Processor i7-4790K 4.6GHz @1.29v
Motherboard ASUS Maximus Hero VII Z97
Cooling Noctua NH-U14S
Memory G. Skill Trident X 2x8GB 2133MHz
Video Card(s) Asus Tuf RTX 3060 V1 FHR (Newegg Shuffle)
Storage OS 120GB Kingston V300, Samsung 850 Pro 512GB , 3TB Hitachi HDD, 2x5TB Toshiba X300, 500GB M.2 @ x2
Display(s) Lenovo y27g 1080p 144Hz
Case Fractal Design Define R4
Audio Device(s) AKG Q701's w/ O2+ODAC (Sounds a little bright)
Power Supply EVGA Supernova G2 850w
Mouse Glorious Model D
Keyboard Rosewill Full Size. Red Switches. Blue Leds. RK-9100xBRE - Hate this. way to big
Software Win10
Benchmark Scores 3DMark FireStrike Score : needs updating
Lmfao at Clarke's third law. Was he a flat-earther?
 
Joined
Oct 8, 2014
Messages
120 (0.03/day)
Current AI still consists of nothing more than glorified recognition and classification algorithms - very advanced algorithms, but still far from anything that would be even remotely dangerous by itself.
 
Joined
Oct 29, 2016
Messages
79 (0.03/day)
That's just an idea stuck in the realm of science fiction, unfortunately, and I wish people would stop giving so much attention to fancy TED talks and Elon Musk.

Awareness and strong AI don't just pop out of nowhere as you keep building larger and larger neural networks. Neuroscientists have studied neurons and the structures associated with them for decades, and there is still not a clue as to how those elements spark intelligence. So the idea that you can just scale up cognitive architectures until they suddenly become aware and intelligent seems to have no real basis, suggesting that the answer lies somewhere else.

And that answer might be that the solution lies in the abstractions you build on top of these models - something that can't be created by mistake without even knowing it, and, most importantly, isn't inherently guaranteed. There is no reason why it would be.

And they probably will never find the answer within neuroscience. This sort of thing is probably a self-organization phenomenon, a.k.a. emergence. So when the circuitry gets complex enough, consciousness will emerge - kind of like how social behaviours automatically emerge when a group gets large enough.
 
Joined
Sep 10, 2016
Messages
807 (0.29/day)
Location
Riverwood, Skyrim
System Name Storm Wrought | Blackwood (HTPC)
Processor AMD Ryzen 9 5900x @stock | i7 2600k
Motherboard Gigabyte X570 Aorus Pro WIFI m-ITX | Some POS gigabyte board
Cooling Deepcool AK620, BQ shadow wings 3 High Spd, stock 180mm |BQ Shadow rock LP + 4x120mm Noctua redux
Memory G.Skill Ripjaws V 2x32GB 4000MHz | 2x4GB 2000MHz @1866
Video Card(s) Powercolor RX 6800XT Red Dragon | PNY a2000 6GB
Storage SX8200 Pro 1TB, 1TB KC3000, 850EVO 500GB, 2+8TB Seagate, LG Blu-ray | 120GB Sandisk SSD, 4TB WD red
Display(s) Samsung UJ590UDE 32" UHD monitor | LG CS 55" OLED
Case Silverstone TJ08B-E | Custom built wooden case (Aus native timbers)
Audio Device(s) Onboard, Sennheiser HD 599 cans / Logitech z163's | Edifier S2000 MKIII via toslink
Power Supply Corsair HX 750 | Corsair SF 450
Mouse Microsoft Pro Intellimouse| Some logitech one
Keyboard GMMK w/ Zelio V2 62g (78g for spacebar) tactile switches & Glorious black keycaps| Some logitech one
VR HMD HTC Vive
Software Win 10 Edu | Ubuntu 22.04
Benchmark Scores Look in the various benchmark threads
As an Australian, I'm ashamed to see our former leader appear in a meme like this; he needs to be eating a raw onion whilst wearing budgie smugglers and threatening to shirt-front Vladimir Putin - that's the proper meme for him.
/OT
 
Joined
May 19, 2009
Messages
1,821 (0.33/day)
Location
Latvia
System Name Personal \\ Work - HP EliteBook 840 G6
Processor 7700X \\ i7-8565U
Motherboard Asrock X670E PG Lightning
Cooling Noctua DH-15
Memory G.SKILL Trident Z5 RGB Black 32GB 6000MHz CL36 \\ 16GB DDR4-2400
Video Card(s) ASUS RoG Strix 1070 Ti \\ Intel UHD Graphics 620
Storage 2x KC3000 2TB, Samsung 970 EVO 512GB \\ OEM 256GB NVMe SSD
Display(s) BenQ XL2411Z \\ FullHD + 2x HP Z24i external screens via docking station
Case Fractal Design Define Arc Midi R2 with window
Audio Device(s) Realtek ALC1150 with Logitech Z533
Power Supply Corsair AX860i
Mouse Logitech G502
Keyboard Corsair K55 RGB PRO
Software Windows 11 \\ Windows 10
The fear of AI is ridiculous.
AI does NOT exist right now, in any way, shape or form. It simply does not, no matter how much journalists or press releases might want to make it so.
It does not think, it does not decide. It follows programming, making logical choices depending on various factors. But not for a single second does even the best current implementation actually choose its answer.
It's literally a very, very advanced calculator, nothing more.
 
Joined
Mar 16, 2017
Messages
211 (0.08/day)
Location
behind you
Processor Threadripper 1950X (4.0 GHz OC)
Motherboard ASRock X399 Professional Gaming
Cooling Enermax Liqtech TR4
Memory 48GB DDR4 2934MHz
Video Card(s) Nvidia GTX 1080, GTX 660TI
Storage 2TB Western Digital HDD, 500GB Samsung 850 EVO SSD, 280GB Intel Optane 900P
Display(s) 2x 1920x1200
Power Supply Cooler Master Silent Pro M (1000W)
Mouse Logitech G602
Keyboard Corsair K70 MK.2
Software Windows 10
And they probably will never find the answer within neuroscience. This sort of thing is probably a self-organization phenomenon, a.k.a. emergence. So when the circuitry gets complex enough, consciousness will emerge - kind of like how social behaviours automatically emerge when a group gets large enough.

Argument from complexity is a logical fallacy, one committed by scientists all too often. Emergent phenomena can't transcend the fundamental limits of their individual components; rather, their properties are built on them, even if we don't understand how they all work together.

A lot of this discussion is boiling down to the mind body problem and the hard problem of consciousness. Personally I've thought about and studied both of these quite a bit but I can't say I've come to a conclusion.

See also the Chinese room. Really there is a whole branch of philosophy dedicated to these questions.
 
Joined
Jan 8, 2017
Messages
8,924 (3.36/day)
System Name Good enough
Processor AMD Ryzen R9 7900 - Alphacool Eisblock XPX Aurora Edge
Motherboard ASRock B650 Pro RS
Cooling 2x 360mm NexXxoS ST30 X-Flow, 1x 360mm NexXxoS ST30, 1x 240mm NexXxoS ST30
Memory 32GB - FURY Beast RGB 5600 Mhz
Video Card(s) Sapphire RX 7900 XT - Alphacool Eisblock Aurora
Storage 1x Kingston KC3000 1TB 1x Kingston A2000 1TB, 1x Samsung 850 EVO 250GB , 1x Samsung 860 EVO 500GB
Display(s) LG UltraGear 32GN650-B + 4K Samsung TV
Case Phanteks NV7
Power Supply GPS-750C
And they probably will never find the answer within neuroscience. This sort of thing is probably a self-organization phenomenon, a.k.a. emergence.

But that kind of answer would only be found through neuroscience. If consciousness does emerge after a certain complexity is achieved, then the secret lies in its simpler components, a.k.a. the neurons.
 