
AI Topic: AlphaZero, ChatGPT, Bard, Stable Diffusion and more!

Joined
Apr 24, 2020
Messages
2,563 (1.75/day)
It's hard for me to draw a line for where this "current generation" of AI starts, but I'll draw it at AlphaGo, the Go AI that defeated world Go champion Lee Sedol in 2016 (and its successor, AlphaZero). Ever since that fateful day, various companies have been applying "AlphaGo-style" AI to all sorts of subjects. That is: deep convolutional neural networks, trained on GPUs or ASICs over millions or even billions of training iterations, applied to incredibly large datasets.

The hype appears to continue to grow today, with Microsoft investing heavily in OpenAI and integrating ChatGPT's technology into Bing. Soon after that announcement, Google / Alphabet announced the Bard large language model / chatbot to compete against ChatGPT. Going in a slightly different direction, Stable Diffusion and DALL-E have been released for artists to turn "text into image": a language model + artist-bot that automatically draws what you tell it to draw.

ChatGPT in particular has been featured recently here at TechPowerUp: https://www.techpowerup.com/304567/...eneration-ai-model-more-powerful-than-chatgpt

There's been a lot of discussion online about ChatGPT: from "DAN" (a jailbreak prompt that seems to unlock an "evil" version of ChatGPT), to glitches (such as the "Avatar Bug": https://www.reddit.com/r/bing/comments/110tb9n ).

I also would like to include older AIs, such as Deep Blue (the chess champion AI), Watson (IBM's Jeopardy AI), and others where relevant. Even "SmarterChild", which served as the chatbot of my childhood, shows similarities with ChatGPT. People have been trying to make good chatbots for a long time, and keeping history in mind can help us remember where we've come from.
 

Space Lynx

Astronaut
Joined
Oct 17, 2014
Messages
16,000 (4.60/day)
Location
Kepler-186f

AI ain't shit. Also, it can't understand the nuance of philosophy at all, sounds like a big fucking dummy when I ask it philosophical questions, lmao, useless mirror of coders and databases to draw from, that's all it is.

welcome to the big fucking leagues boys ~ Tychus
 
Joined
Apr 24, 2020
Messages
2,563 (1.75/day)

AI ain't shit. Also, it can't understand the nuance of philosophy at all, sounds like a big fucking dummy when I ask it philosophical questions, lmao, useless mirror of coders and databases to draw from, that's all it is.

welcome to the big fucking leagues boys ~ Tychus

Defeating KataGo is extremely impressive, because KataGo is several hundred Elo stronger than LeelaZero (which was roughly the strength of AlphaZero).
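
For a sense of scale, the standard Elo formula converts a rating gap into an expected score. A quick sketch (generic Elo math, nothing KataGo-specific):

```python
# Expected score of the stronger player under the standard Elo model.
def expected_score(elo_diff: float) -> float:
    return 1.0 / (1.0 + 10.0 ** (-elo_diff / 400.0))

for diff in (100, 200, 300, 400):
    print(f"+{diff} Elo -> expected score {expected_score(diff):.2f}")
# A +300 Elo edge means winning roughly 85% of the time.
```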

The key to that specific situation is that this new generation of AI has very specific failure modes that can be discovered through adversarial policies. Adversarial policies are probably a bigger deal in competitive games (ex: someone who understands the adversarial policy against KataGo can defeat that specific KataGo policy). It also means that those AIs' "odd moves" are now coming into question.

Is it an "odd move" because its beyond human comprehension? Or is it simply a bad move because its a failure of the AI (and the AI just "makes up for it" in later, more average, situations?). Demonstrating this failure case (and playing it over-and-over again) changes our understanding of the situation for sure.

---------

New post: 2/23/2023:


This ChatGPT failure is amusing to me, though probably bad for opencagedata.com, as a myriad of users suddenly believe that opencagedata.com can do something that... it doesn't support at all.

Apparently ChatGPT is currently "hallucinating" and telling a bunch of people a lie, causing users to sign up for opencagedata and then suddenly quit.
 
Joined
Jun 22, 2006
Messages
1,049 (0.16/day)
System Name Beaver's Build
Processor AMD Ryzen 9 5950X
Motherboard Asus ROG Crosshair VIII Hero (WI-FI) - X570
Cooling Corsair H115i RGB PLATINUM 97 CFM Liquid
Memory G.Skill Trident Z Neo 32 GB (2 x 16 GB) DDR4-3600 Memory - 16-19-19-39
Video Card(s) NVIDIA GeForce RTX 4090 Founders Edition
Storage Inland 1TB NVMe M.2 (Phison E12) / Samsung 950 Pro M.2 NVMe 512G / WD Black 6TB - 256M cache
Display(s) Alienware AW3225QF 32" 4K 240 Hz OLED
Case Fractal Design Design Define R6 USB-C
Audio Device(s) Focusrite 2i4 USB Audio Interface
Power Supply SuperFlower LEADEX TITANIUM 1600W
Mouse Razer DeathAdder V2
Keyboard Razer Cynosa V2 (Membrane)
Software Microsoft Windows 10 Pro x64
Benchmark Scores 3dmark = https://www.3dmark.com/spy/32087054 Cinebench R15 = 4038 Cinebench R20 = 9210
“The advent of 8-bit floating point offers tremendous performance and efficiency benefits for AI compute,” said Simon Knowles, CTO and co-founder of Graphcore. “It is also an opportunity for the industry to settle on a single, open standard, rather than ushering in a confusing mix of competing formats.”
Indeed, everyone is optimistic there will be a standard — eventually.


https://semiengineering.com/will-floating-point-8-solve-ai-ml-overhead/
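
For context on what the two FP8 formats in that article (E4M3 and E5M2) trade off, here's a rough back-of-envelope sketch. It uses IEEE-style assumptions; the standardized E4M3 actually reclaims the top exponent code, so its true max is 448, not what this prints:

```python
# Rough dynamic range of an FP8 format with e exponent bits and m mantissa
# bits, assuming an IEEE-style bias and reserving the top exponent code for
# Inf/NaN. Treat as an approximation: real E4M3 reclaims that code (max 448).
def fp8_range(e: int, m: int):
    bias = 2 ** (e - 1) - 1
    max_normal = (2 - 2 ** -m) * 2 ** (2 ** e - 2 - bias)
    min_normal = 2.0 ** (1 - bias)
    return min_normal, max_normal

for name, e, m in [("E4M3", 4, 3), ("E5M2", 5, 2)]:
    lo, hi = fp8_range(e, m)
    print(f"{name}: ~{lo:.2e} .. {hi:.1f}")
# E5M2 spans a far wider range; E4M3 keeps an extra mantissa bit of
# precision. That trade-off is why training setups often mix both.
```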
 
Joined
Apr 24, 2020
Messages
2,563 (1.75/day)
I'm in on the Bing AI (aka: ChatGPT).

I decided to have as "natural" a discussion as I could with the AI. I already know the answers, since I've done research on this subject, so I'm pretty aware of mistakes / errors as they come up. Maybe for a better test, I should use it as a research aid and see if I can pick up on the bullshit in a subject I don't know about...

[screenshot: Bing AI chat]


Well, bam. Bing is already terrible: unable to answer my question and getting it backwards (giving a list of RP2040 reasons instead of AVR reasons). It's also using the rather out-of-date ATMega328 as a comparison point. So I type up a quick retort to see what it says...

[screenshot: Bing AI chat]


[screenshot: Bing AI chat]


This is... wrong. The RP2040 doesn't have enough current to drive a 7-segment LED display. PIO seems like a terrible option as well. The MAX7219 is a decent answer, but Google could have given me that much faster (ChatGPT / Bing is rather slow).

"Background Writes" is a software thing. You'd need to combine it with the electrical details (ie: MAX7219).

7-segment displays can't display any animations, and the amount of RAM you need to drive one is like... 1 or 2 bytes. The 264kB of RAM (normally an advantage of the RP2040) is completely wasted in this case.
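
For reference, this is roughly what the "right" answer looks like in practice: a minimal MicroPython sketch driving a MAX7219 from a Pico over SPI. The pin assignments and wiring here are my own assumptions, not anything Bing produced:

```python
# Minimal MicroPython sketch: RP2040 (Pico) driving an 8-digit 7-segment
# module through a MAX7219, which supplies the segment current that the
# RP2040's GPIOs cannot. Pins below are assumptions -- match your wiring.
from machine import Pin, SPI

spi = SPI(0, baudrate=1_000_000, sck=Pin(2), mosi=Pin(3))
cs = Pin(5, Pin.OUT, value=1)

def write(register: int, data: int):
    cs(0)
    spi.write(bytes([register, data]))
    cs(1)

write(0x0F, 0x00)  # display-test register: off
write(0x0C, 0x01)  # shutdown register: normal operation
write(0x0B, 0x07)  # scan limit: all 8 digits
write(0x0A, 0x08)  # intensity: mid brightness
write(0x09, 0xFF)  # decode mode: BCD code-B on every digit

for digit in range(8):       # digit registers are 0x01..0x08
    write(digit + 1, digit)  # show "01234567" (digit order depends on module)
```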

[screenshot: Bing AI chat]


Fail. The RP2040 doesn't have enough current; it literally cannot do the job as described here.

[screenshot: Bing AI chat]


Wow. So apparently it's already forgotten what the AVR DD was, despite giving me a paragraph or two about it just a few questions ago. I thought this thing was supposed to have better memory than that?

I'll try the ATMega328p, which is what it talked about earlier.

[screenshot: Bing AI chat]


It fails to note that the ATMega328 has enough current to drive a typical 7-segment display even without a driver like the MAX7219. So despite all this rambling, it's come to the wrong conclusion.
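
The current budget is easy to sanity-check from the datasheets. A quick back-of-envelope sketch (the limits below are the usual datasheet figures; verify against your exact parts):

```python
# Back-of-envelope current budget for directly driving one 7-segment digit.
# Datasheet figures (verify for your parts): the ATmega328P allows up to
# 40 mA absolute max per I/O pin (~20 mA recommended); RP2040 pads top out
# at a 12 mA drive-strength setting.
SEGMENT_mA = 10   # a typical per-segment LED current
SEGMENTS_ON = 8   # worst case: "8." with the decimal point lit

per_pin_need = SEGMENT_mA              # one GPIO per segment
total_need = SEGMENT_mA * SEGMENTS_ON  # what the common line must carry

print(f"per pin: {per_pin_need} mA -> AVR ok (<=20 mA), RP2040 marginal (<=12 mA)")
print(f"common line: {total_need} mA -> too much for any single GPIO,")
print("hence an external driver (e.g. MAX7219) or transistors on the RP2040.")
```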

------------

So it seems like ChatGPT / Bing AI is about doing "research" by summarizing pages from the top of the search results for the user? You don't actually know if the information is correct or not, however, which limits its usefulness.

It seems like Bing AI does a good job of summarizing the articles that pop up on the internet, and giving citations. But its conclusions and reasoning can be very wrong. It can also have significant blind spots (i.e., the RP2040 not having enough current to directly drive a 7-segment display: a key bit of information that this chat session was unable to discover, or even flag as a potential problem).

----------

Anyone have a list of questions they want me to give to ChatGPT?

Another run...

[screenshot: Bing AI chat]




[screenshot: Bing AI chat]


I think I'm beginning to see what this chatbot is designed to do.

1. This thing is decent at summarizing documents. But notice: it pulls the REF1004 as my "5V" voltage reference. Notice anything wrong? https://www.ti.com/lit/ds/sbvs002/sbvs002.pdf . It's a 2.5V reference; it seems like ChatGPT pattern-matched on "5V" and doesn't realize 2.5V is a completely different number (or some similar error?).

2. Holy crap, it's horrible at math. I don't even need a calculator: the 4.545 kOhm + 100 Ohm trimmer pot across 5V obviously can't reach 1mA, let alone 0.9mA (see the quick check below). Also, 0.9mA to 1.1mA is +/- 10%, and I was asking for 1.000mA.
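
Here's the quick check, for anyone who wants to verify the arithmetic:

```python
# Quick check of ChatGPT's proposed current setter: 4.545 kOhm in series
# with a 100 Ohm trimmer across 5 V -- plus what the values should have been.
V = 5.0
lo, hi = V / (4545 + 100), V / 4545
print(f"adjustable range: {lo * 1000:.3f} mA .. {hi * 1000:.3f} mA")
# -> ~1.076 .. 1.100 mA: the pot can never reach 1.000 mA, let alone 0.9 mA.

# For 1.000 mA you'd want R = V / I: 5 kOhm from a true 5 V reference,
# or 2.5 kOhm from the REF1004's actual 2.5 V output.
for v in (5.0, 2.5):
    print(f"{v} V reference -> {v / 1e-3:.0f} Ohm for 1.000 mA")
```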

-------

Instead, what ChatGPT is "good" at is summarizing articles that exist inside the Bing database. If it can "pull" a fact out of the search engine, it seems to summarize it pretty well. But the moment it tries to "reason" with that knowledge and combine facts together, it gets things horribly, horribly wrong.

Interesting tool. I'll need to play with it more to see how it could possibly be useful. But... I'm not liking it right now. It's extremely slow, and it's wrong in these simple cases. So I'm quite distrustful of it as a tool on a subject I know nothing about. I'd have to use it on a subject I'm already familiar with, so that I can pick out the bullshit from the good stuff.
 
Joined
Feb 18, 2005
Messages
5,238 (0.75/day)
Location
Ikenai borderline!
System Name Firelance.
Processor Threadripper 3960X
Motherboard ROG Strix TRX40-E Gaming
Cooling IceGem 360 + 6x Arctic Cooling P12
Memory 8x 16GB Patriot Viper DDR4-3200 CL16
Video Card(s) MSI GeForce RTX 4060 Ti Ventus 2X OC
Storage 2TB WD SN850X (boot), 4TB Crucial P3 (data)
Display(s) 3x AOC Q32E2N (32" 2560x1440 75Hz)
Case Enthoo Pro II Server Edition (Closed Panel) + 6 fans
Power Supply Fractal Design Ion+ 2 Platinum 760W
Mouse Logitech G602
Keyboard Logitech G613
Software Windows 10 Professional x64
But the moment it tries to "reason" with that knowledge and combine facts together, it gets things horribly, horribly wrong.
Congratulations, you've discovered that the current "AI" and/or machine learning hype is just that. Because there is no artificial intelligence, there are just newer algorithms with more knobs and dials running on more hardware and being trained on larger datasets. The old adage about a million monkeys with typewriters eventually producing a Shakespearean work has never been more correct or relevant - except that instead of giving the monkeys a peanut when you think they're producing incorrect results, you adjust the weighting of your algorithm.

Find yet another edge case where your "AI" royally screws the pooch? Add another knob to bypass it. Wash, rinse, repeat until you have something that is capable of producing output that is good enough to impress anyone who doesn't understand how this spiderweb of insanity actually "works". But as soon as someone asks for a change and you tweak one of those knobs, it has a cascading effect on all the other knobs, and the end result is something that produces the worst kind of gobbledygook.

Those "weird experiences" that users are reporting when they actually use these "AI" models for any decent period of time? That's an edge case that there isn't a knob for. And there won't be, because the current AI pundits aren't interested in making a model that produces correct results; they're just interested in making it produce correct enough results enough of the time to be good enough, that the common folk don't start asking why the emperor behind the curtain has no clothes.
 
Joined
Apr 24, 2020
Messages
2,563 (1.75/day)
Congratulations, you've discovered that the current "AI" and/or machine learning hype is just that. Because there is no artificial intelligence, there are just newer algorithms with more knobs and dials running on more hardware and being trained on larger datasets. The old adage about a million monkeys with typewriters eventually producing a Shakespearean work has never been more correct or relevant - except that instead of giving the monkeys a peanut when you think they're producing incorrect results, you adjust the weighting of your algorithm.

Find yet another edge case where your "AI" royally screws the pooch? Add another knob to bypass it. Wash, rinse, repeat until you have something that is capable of producing output that is good enough to impress anyone who doesn't understand how this spiderweb of insanity actually "works". But as soon as someone asks for a change and you tweak one of those knobs, it has a cascading effect on all the other knobs, and the end result is something that produces the worst kind of gobbledygook.

Those "weird experiences" that users are reporting when they actually use these "AI" models for any decent period of time? That's an edge case that there isn't a knob for. And there won't be, because the current AI pundits aren't interested in making a model that produces correct results; they're just interested in making it produce correct enough results enough of the time to be good enough, that the common folk don't start asking why the emperor behind the curtain has no clothes.

Yes and no? I've played with AI for many years. In my experience, "AI" is called AI only as long as it's mystical. The moment it becomes useful, it's no longer AI; it's given a proper name. A proper tool.

Case in point: traversing a maze, or a city, automatically, was once considered AI. Today, we call this algorithm "GPS".

Point#2: Searching through documents, finding the most relevant ones, and presenting them to the user was once called AI. Today, we call this algorithm "Google".

----------

I "get" the hype cycle. Trust me, I've seen this played out more times than you or I can count. But fundamentally, the tool has to be useful for it to stick over the long term. I guess I'm disappointed in this hype cycle because... unlike other tools, this cycle is mostly hype as you point out.

But its "not always hype". And I also get it, sometimes a tool comes out and no one knows how to use it correctly. Ex: AlphaZero / LeelaZero were "superhuman Go AIs", but they weren't useful tools until Lizzie (https://github.com/featurecat/lizzie) and other "interfaces" connected and presented the internal "thoughts" of LeelaZero to the Go player. So sometimes it takes a couple of months, or years, before a tool goes from "hype stage" into "useful stage".

---------

Anyway, I guess what I'm trying to say is... all these new tools deserve a look, to see if there's anything behind the hype. Maybe it's the next "AI maze-traversing algorithm" (aka GPS / automatic route planning). You never really know until it becomes mundane and just part of our lives.

DALL-E and Stable Diffusion (automatic drawing AI / picture generation) are far more "obviously useful" and are part of the current hype cycle. These tools look useful to me... but I'm not an artist, so I'm not really sure. I do notice that they're terrible at anatomy and shadows, though (but I feel like a good artist might be able to fix those issues and still finish a drawing quicker with AI assistance than without).
 
Joined
Oct 21, 2005
Messages
6,880 (1.02/day)
Location
USA
System Name Computer of Theseus
Processor Intel i9-12900KS: 50x Pcore multi @ 1.18Vcore (target 1.275V -100mv offset)
Motherboard EVGA Z690 Classified
Cooling Noctua NH-D15S, 2xThermalRight TY-143, 4xNoctua NF-A12x25,3xNF-A12x15, 2xAquacomputer Splitty9Active
Memory G-Skill Trident Z5 (32GB) DDR5-6000 C36 F5-6000J3636F16GX2-TZ5RK
Video Card(s) EVGA Geforce 3060 XC Black Gaming 12GB
Storage 1x Samsung 970 Pro 512GB NVMe (OS), 2x Samsung 970 Evo Plus 2TB (data 1 and 2), ASUS BW-16D1HT
Display(s) Dell S3220DGF 32" 2560x1440 165Hz Primary, Dell P2017H 19.5" 1600x900 Secondary, Ergotron LX arms.
Case Lian Li O11 Air Mini
Audio Device(s) Audiotechnica ATR2100X-USB, El Gato Wave XLR Mic Preamp, ATH M50X Headphones, Behringer 302USB Mixer
Power Supply Super Flower Leadex Platinum SE 1000W 80+ Platinum White
Mouse Zowie EC3-C
Keyboard Vortex Multix 87 Winter TKL (Gateron G Pro Yellow)
Software Win 10 LTSC 21H2
I've had a bit of fun making bizarro Seinfeld episodes with ChatGPT. I've managed to work around its safety filter, and I've had it generate plot lines such as George killing Newman, or the Seinfeld gang turning Newman into sausages with Kramer's new sausage maker, which Elaine then takes to the J. Peterman party.
 
Joined
Aug 14, 2013
Messages
2,373 (0.61/day)
System Name boomer--->zoomer not your typical millenial build
Processor i5-760 @ 3.8ghz + turbo ~goes wayyyyyyyyy fast cuz turboooooz~
Motherboard P55-GD80 ~best motherboard ever designed~
Cooling NH-D15 ~double stack thot twerk all day~
Memory 16GB Crucial Ballistix LP ~memory gone AWOL~
Video Card(s) MSI GTX 970 ~*~GOLDEN EDITION~*~ RAWRRRRRR
Storage 500GB Samsung 850 Evo (OS X, *nix), 128GB Samsung 840 Pro (W10 Pro), 1TB SpinPoint F3 ~best in class
Display(s) ASUS VW246H ~best 24" you've seen *FULL HD* *1O80PP* *SLAPS*~
Case FT02-W ~the W stands for white but it's brushed aluminum except for the disgusting ODD bays; *cries*
Audio Device(s) A LOT
Power Supply 850W EVGA SuperNova G2 ~hot fire like champagne~
Mouse CM Spawn ~cmcz R c00l seth mcfarlane darawss~
Keyboard CM QF Rapid - Browns ~fastrrr kees for fstr teens~
Software integrated into the chassis
Benchmark Scores 9999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999
They aren’t just pattern matchers, though, they compare potential outcomes to achieve a desired/programmed outcome. You could argue that this is a limitation of their intelligence, but a) it requires intelligence to make those comparisons and b) there are AIs that are able to evaluate and revise those preferred outcomes to achieve better results than initially programmed for.
 
Joined
Feb 18, 2005
Messages
5,238 (0.75/day)
Location
Ikenai borderline!
They aren’t just pattern matchers, though, they compare potential outcomes to achieve a desired/programmed outcome. You could argue that this is a limitation of their intelligence, but a) it requires intelligence to make those comparisons and b) there are AIs that are able to evaluate and revise those preferred outcomes to achieve better results than initially programmed for.
Those are still not AIs, they are models that learn from reinforcement. That reinforcement, and therefore the intelligence, comes from... *drum roll*... humans.
 
Joined
Apr 24, 2020
Messages
2,563 (1.75/day)
Those are still not AIs, they are models that learn from reinforcement. That reinforcement, and therefore the intelligence, comes from... *drum roll*... humans.

Depends. I personally see AI through a very broad lens. As I stated before, I still remember when GPS / maze solving was considered AI, and there's no reinforcement learning going on there. Just A* or other pathfinding algorithms. In the case of KataGo, there are a few hard-coded heuristics (the ladder in particular: a formation in Go that is easy for humans to calculate but hard for AlphaZero to understand). But overall, AlphaZero and KataGo learned entirely from self-play.
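
For the curious, that maze-solving "AI" is nothing more exotic than this. A toy grid version of A* (the grid and coordinates are made up for illustration):

```python
import heapq

# Toy A* on a small grid -- the "AI" behind route planning, in miniature.
# '#' is a wall; moves are 4-directional with unit cost.
GRID = ["....#...",
        ".##.#.#.",
        ".#..#.#.",
        ".#.##.#.",
        "........"]

def astar(start, goal):
    rows, cols = len(GRID), len(GRID[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    frontier = [(h(start), 0, start, [start])]  # (f = g + h, g, position, path)
    seen = set()
    while frontier:
        _, cost, pos, path = heapq.heappop(frontier)
        if pos == goal:
            return path
        if pos in seen:
            continue
        seen.add(pos)
        r, c = pos
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and GRID[nr][nc] != "#":
                heapq.heappush(frontier, (cost + 1 + h((nr, nc)),
                                          cost + 1, (nr, nc), path + [(nr, nc)]))
    return None  # no route exists

print(astar((0, 0), (4, 7)))  # shortest route around the walls
```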

So even within this convolutional-neural-network methodology, there are AIs (like AlphaZero / LeelaZero) that have learned without any human input. And I do consider AlphaZero / LeelaZero to be the "deep learning" style of AI that's part of the hype today.

--------------

I'd personally consider the "verification" algorithms of modern computer design to be AI. It's all logic puzzles and mapping. These aren't CNNs (convolutional neural nets); they're old-school automated theorem provers and binary decision diagrams. More 80s-style AI than 2020s-style AI. Expert systems and the like.

AI itself is an unhelpful term for discussion, IMO. For ChatGPT, perhaps it's more accurate and precise to call it an LLM (large language model): a neural network trained on large amounts of text from the open internet (likely Reddit, and maybe Discord, IRC, and other chat channels).
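
A "language model" in the loosest sense is just "predict the next token from the context". A toy bigram sketch of that contract (obviously nothing like GPT's transformer, and the corpus is made up):

```python
import random
from collections import defaultdict

random.seed(1)

# Toy "language model": bigram counts instead of a transformer, but the
# same contract as an LLM -- given context, emit a plausible next token.
corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat saw the dog .").split()

nxt = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    nxt[a].append(b)  # record every observed successor of each token

token, out = "the", ["the"]
for _ in range(8):
    token = random.choice(nxt[token])  # sample next token given context
    out.append(token)
print(" ".join(out))  # fluent-looking, with zero understanding behind it
```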

-------

I've said it before, but I'll say it again: the difference between "AI" and "tools" is that it's called AI when people don't understand it. Once upon a time, it was considered a sign of intelligence to find the cheapest path for your plane tickets. Today, we call that tool "Expedia.com" and it's no longer considered AI. It's just mundane.
 
Joined
Jun 22, 2006
Messages
1,049 (0.16/day)
Depends. I personally see AI through a very broad lens. As I stated before, I still remember when GPS / maze solving was considered AI, and there's no reinforcement learning going on there. Just A* or other pathfinding algorithms. [...]
[image attachment]

 
Joined
Aug 14, 2013
Messages
2,373 (0.61/day)
Yeah, it's weird how people are trying so hard to make non-organic intelligence sound dumb while it solves problems.
 
Joined
Feb 18, 2005
Messages
5,238 (0.75/day)
Location
Ikenai borderline!
*sigh*

Let me explain, again, why this is not artificial intelligence. Or useful.

What these so-called "scientists" have done is nothing more than provide a bajillion pieces of code to an ML model. They then provide a desired output to that model. The model then goes off and brute-force assembles those bajillion pieces of code into a bajillion different permutations, a handful of which produce the desired output, and it picks 10 of those permutations based on certain criteria (probably shortest runtime).
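
If you want to see how little magic is involved, the whole generate-and-filter scheme fits in a dozen lines. A toy caricature of the approach being described, under my own assumptions, not the actual system from any paper:

```python
import random

random.seed(0)

# Caricature of generate-and-filter program search: sample lots of
# candidate programs, keep the ones that pass the test cases, then pick
# a handful by some criterion (here: shortest source).
TESTS = [((2, 3), 5), ((10, 4), 14), ((0, 0), 0)]  # spec: add two numbers
OPS = ["a + b", "a - b", "a * b", "a + b + 1", "abs(a - b)", "a", "b"]

def passes(expr: str) -> bool:
    return all(eval(expr, {"abs": abs}, {"a": a, "b": b}) == want
               for (a, b), want in TESTS)

candidates = [random.choice(OPS) for _ in range(1000)]
survivors = sorted({c for c in candidates if passes(c)}, key=len)
print("kept:", survivors[:10])  # -> ['a + b'] -- no understanding required
```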

There. Is. No. Intelligence. There.

And there is very little usefulness either, because the programming competitions that the paper references are themselves not useful. They take the form of extremely narrowly-defined problems with a single right or wrong answer. But real life doesn't work like that, because you almost never get a specification in that ultimate, pure form; you get something relatively high-level, which you use your human brain to translate into the most optimal solution, and then you implement that solution as code. And that using-your-brain-to-figure-out-the-solution part is what software developers are paid for, not the writing-code part, because figuring out the correct solution is what takes the time and effort. Writing that solution as code is effectively ancillary to the whole process of figuring out what that solution should be.

In fact, programming competitions are so useless for judging actual software development capability that companies who actually understand software development have completely stopped using them as a tool to judge prospective hires. For the simple reason that, exactly like the ML model described above, the hires who do great in these competitions generally do abysmally at the actual job of figuring out how to translate high-level requirements into useful solutions. (There's also the fact that a lot of the figuring-out bit generally involves talking to other humans, and people who spend their lives wanking over their position on programming competition leaderboards generally turn out to be bad at the whole social interaction thing.)

True intelligence requires understanding; until we figure out how to synthesise the latter, we cannot have the former. And the "revolutionary" ChatGPT simply hasn't synthesised understanding yet; it's just better than its predecessors at mimicking understanding. And that might look like progress, but it isn't; it's just obfuscation of the truth, in order that "AI" companies can make money and scientists can churn out rubbish papers like the above.

In fact, this isn't just the opposite of progress - it's also dangerous. Because these new AI models' appearance of competence is going to result in people and companies starting to depend on them, and when these models fail - and they inevitably will, because again, they do not have true understanding of what they're actually doing - those failures are going to cause massive problems, especially for trust in these "AI"s. Honestly I'm hoping that point comes sooner rather than later, so that governments start regulating what companies are allowed to claim about their "AI" products.

Yeah it’s weird how people are trying so hard to make non-organic intelligence sound dumb while it solves problems
It's dumb because it solves problems by accident. That means it can accidentally "solve" them in the completely wrong way, which in the best case may simply be amusing... and in the worst case, actively harmful.
 

Frick

Fishfaced Nincompoop
Joined
Feb 27, 2006
Messages
18,934 (2.85/day)
Location
Piteå
System Name Black MC in Tokyo
Processor Ryzen 5 5600
Motherboard Asrock B450M-HDV
Cooling Be Quiet! Pure Rock 2
Memory 2 x 16GB Kingston Fury 3400mhz
Video Card(s) XFX 6950XT Speedster MERC 319
Storage Kingston A400 240GB | WD Black SN750 2TB |WD Blue 1TB x 2 | Toshiba P300 2TB | Seagate Expansion 8TB
Display(s) Samsung U32J590U 4K + BenQ GL2450HT 1080p
Case Fractal Design Define R4
Audio Device(s) Line6 UX1 + some headphones, Nektar SE61 keyboard
Power Supply Corsair RM850x v3
Mouse Logitech G602
Keyboard Cherry MX Board 1.0 TKL Brown
VR HMD Acer Mixed Reality Headset
Software Windows 10 Pro
Benchmark Scores Rimworld 4K ready!
*sigh*

Let me explain, again, why this is not artificial intelligence. Or useful.

What these so-called "scientists" have done is nothing more than provide a bajillion pieces of code to an ML model. They then provide a desired output to that model. The model then goes off and brute-force assembles those bajillion pieces of code into a bajillion different permutations, a handful of which produce the desired output, and it picks 10 of those permutations based on certain criteria (probably shortest runtime).

There. Is. No. Intelligence. There.

And there is very little usefulness either, because the programming competitions that the paper references are themselves not useful.

It probably shouldn't be called AI, agreed, but hard disagree on it not being useful. ChatGPT can basically function as a reference. A competent Google, if you will. That applies to programming, but in my limited life I know several people who right now use these sorts of tools in their professional lives or as a hobby.
 
Joined
Apr 24, 2020
Messages
2,563 (1.75/day)



We've had computers design superhuman things before. AI is just a name we call tools before we know how to use them. Once we get used to it, it's called "Mathematica", symbolic reasoning, automatic logic, automated proofs, or design aids.

The question is whether this generation of techniques is useful, and how it will be integrated into society and our designs. I'm actually not 100% convinced it's useful yet.
 
Joined
Jun 22, 2006
Messages
1,049 (0.16/day)
*sigh*

Let me explain, again, why this is not artificial intelligence. Or useful. [...]

It's dumb because it solves problems by accident. That means it can accidentally "solve" them in the completely wrong way, which in the best case may simply be amusing... and in the worst case, actively harmful.
A Blueprint for Affective Computing: A Sourcebook and Manual



Artificial Reasoning with Subjective Logic
 
Joined
Aug 4, 2020
Messages
1,572 (1.16/day)
Location
::1
i mean, i can clearly see where you guys are coming from, considering like, at least 95% of all humans can't even clear the basic bar of being considered intelligent

as an old jedi-master once said the ability to speak does not make one intelligent
 
Joined
Apr 24, 2020
Messages
2,563 (1.75/day)
i mean, i can clearly see where you guys are coming from, considering like, at least 95% of all humans can't even clear the basic bar of being considered intelligent

as an old jedi-master once said the ability to speak does not make one intelligent

But that doesn't mean I'll use Bing AI to conduct research if normal Google is a better tool for doing so.

There's a reason why I'm hyperfocused on "is this useful?". Because if it's not useful to me, I'm not going to use it.

Leave the "is this intelligence" debate for philosophers. The question on the technologist's hands is "Can I use this tool to speed up my research?", to discover unknown documents or otherwise improve my life and/or work in some manner?
 
Joined
Jun 22, 2006
Messages
1,049 (0.16/day)
i mean, i can clearly see where you guys are coming from, considering like, at least 95% of all humans can't even clear the basic bar of being considered intelligent

as an old jedi-master once said the ability to speak does not make one intelligent
there's some humans that cannot pass the Turing test; i've been mentioning that human-level intelligence is extremely broad. also, i question the desire to simulate human cognition, given so much variation.

every time i loaded up one of these "AI" chatbots into discord.py (BlenderBot 1, 2, and now ChatGPT)... it just accelerates the conversation into degeneracy
 
Joined
Aug 4, 2020
Messages
1,572 (1.16/day)
Location
::1
But that doesn't mean I'll use Bing AI to conduct research if normal Google is a better tool for doing so.

There's a reason why I'm hyperfocused on "is this useful?". Because if it's not useful to me, I'm not going to use it.

Leave the "is this intelligence?" debate to the philosophers. The question in the technologist's hands is "Can I use this tool to speed up my research, to discover unknown documents, or to otherwise improve my life and/or work in some manner?"
why yes, even a non-intelligent tool can be useful, well duh
 
Joined
Apr 24, 2020
Messages
2,563 (1.75/day)
why yes, even a non-intelligent tool can be useful, well duh

And vice versa: just because you convince some philosopher that your tool has "intelligence" doesn't mean it's useful.

The core element of technology is "usefulness". That's easy to forget sometimes, because we play so many games and create so many toys. But at the end of the day, technology is about the pursuit of new tools, and learning how to use those new tools.
 
Joined
Jun 22, 2006
Messages
1,049 (0.16/day)
*sigh*

Let me explain, again, why this is not artificial intelligence. Or useful. [...]

It's dumb because it solves problems by accident. That means it can accidentally "solve" them in the completely wrong way, which in the best case may simply be amusing... and in the worst case, actively harmful.
[image attachment]
 
Joined
Aug 4, 2020
Messages
1,572 (1.16/day)
Location
::1
And vice versa: just because you convince some philosopher that your tool has "intelligence" doesn't mean it's useful.

The core element of technology is "usefulness". That's easy to forget sometimes, because we play so many games and create so many toys. But at the end of the day, technology is about the pursuit of new tools, and learning how to use those new tools.
why, an intelligent being and/or object might you know, disagree w/ you and/or your goals. at that point, they'd be the very opposite of useful ... to you (perhaps).
unless you choose to learn from said disagreement, ig.
 