
AI Topic: AlphaZero, ChatGPT, Bard, Stable Diffusion and more!

Joined
Dec 17, 2021
Messages
235 (0.27/day)
Location
East Malaysia
Processor AMD Ryzen 5 3600
Motherboard Asrock B450M Steel Legend @ BIOS Version P4.60
Cooling Deepcool GAMMAXX 400 V2 64.5 CFM CPU Cooler
Memory PNY Electronics 8192 MB (DDR4-3200 with XMP/DOCP) P/N: 8GBF1X08QFHH38-135-K (x2)
Video Card(s) Colorful Tomahawk/BattleAx RTX 2060 Super
Storage HP SSD EX900 500GB, PNY CS900 960GB
Display(s) Acer QG240Y S3
Power Supply Cooler Master MWE Bronze V2 650W, 230V non fullrange
Software Windows 10 Pro
While you guys are having a fruitful debate on the semantics of "AI", I'd like to lighten up this thread and serve up some AI-based informational entertainment:
 
Joined
Feb 18, 2005
Messages
5,238 (0.75/day)
Location
Ikenai borderline!
System Name Firelance.
Processor Threadripper 3960X
Motherboard ROG Strix TRX40-E Gaming
Cooling IceGem 360 + 6x Arctic Cooling P12
Memory 8x 16GB Patriot Viper DDR4-3200 CL16
Video Card(s) MSI GeForce RTX 4060 Ti Ventus 2X OC
Storage 2TB WD SN850X (boot), 4TB Crucial P3 (data)
Display(s) 3x AOC Q32E2N (32" 2560x1440 75Hz)
Case Enthoo Pro II Server Edition (Closed Panel) + 6 fans
Power Supply Fractal Design Ion+ 2 Platinum 760W
Mouse Logitech G602
Keyboard Logitech G613
Software Windows 10 Professional x64
But that doesn't mean I'll use Bing AI to conduct research, if normal Google is a better tool for doing so.

There's a reason why I'm hyperfocused on "Is this useful?". Because if it's not useful to me, I'm not going to use it.

Leave the "is this intelligence" debate for philosophers. The question in the technologist's hands is: can I use this tool to speed up my research, discover unknown documents, or otherwise improve my life and/or work in some manner?
It's not about intelligence, it's about the ramifications of the lack thereof.

My concern with these pseudo-AIs is not that they get things wrong. It's that they don't know when they get things wrong, or how those things are wrong: they are consistently, terrifyingly, confidently wrong. And far too many human beings have an unfortunate propensity to believe someone who says something confidently, over someone who says "this is what I know and what I've inferred". Hence the anti-science movement, and populist leaders.

But these "AI"s will get better; over time, they'll be trained to be wrong less and less of the time. And as a consequence, we as a species will start to become dependent on them (this is human nature). Eventually - inevitably - one of these "AI"s will be in a position to make a decision that affects human lives, and due to its inherently flawed design it will choose an option that is completely and spectacularly and obviously wrong, and people will die. And the worst part? Nobody will be able to explain why that "AI" made the decision it did, because it's a black box, and therefore they won't be able to guarantee it can't make the same mistake.

People who are dependent on that "AI", which may very well be a significant part of society by that point, will as a result likely have an existential crisis similar to the one you'd have if you woke up and went outside, and the sky was green instead of blue.

Conversely, a proper artificial intelligence, with the ability to reason, could come to the same wrong decision... but being intelligent, it would understand that said decision would have a negative impact, and would likely avoid it.
Even if this AI did choose to proceed with that decision, it would be able to tell you why.
And finally, it would be able to be taught why that decision was incorrect, with the guarantee that it would never make the same wrong decision ever again.

Humans as a species are almost certainly going to move to a society based on AIs. Do we want it to be based on AIs that we can trust to explain their mistakes, or on those that we can't? I say the former, which is why I believe that the current crop of pseudo-AIs, which are nothing more than improved ML models, are not only dishonest - they also have the potential to be incredibly, unimaginably harmful.
 
Joined
Apr 24, 2020
Messages
2,563 (1.75/day)
But these "AI"s will get better; over time, they'll be trained to be wrong less and less of the time. And as a consequence, we as a species will start to become dependent on them (this is human nature). Eventually - inevitably - one of these "AI"s will be in a position to make a decision that affects human lives, and due to its inherently flawed design it will choose an option that is completely and spectacularly and obviously wrong, and people will die. And the worst part? Nobody will be able to explain why that "AI" made the decision it did, because it's a black box, and therefore they won't be able to guarantee it can't make the same mistake.

Hmm. Are you familiar with the philosophical debate over a Utility Monster? https://en.wikipedia.org/wiki/Utility_monster

I realize this is a non-sequitur, but please give me a few paragraphs. I'll bring it back to this topic!

Assume entity X appears. It doesn't matter if "X" is an alien, a super-intelligent AI, a superhuman/superman/ubermensch or whatever. All that needs to be assumed is that X is agreed by society to be the most important thing for society. Suppose "X", for the good of society, asks for increasingly horrific things, such as the sacrifice of humans (and more and more humans) for sustenance. As long as we "agree" it's acting in the best interest of society, all debate will naturally conclude that the tradeoff is worthwhile.

I feel like a lot of these "AI debates" are ultimately a philosophical question about utility monsters, as opposed to the tool itself. I'm not sure the "utility monster" debate can ever be resolved (and indeed, that's what makes it such a useful question in sci-fi, fantasy, and other such stories: it's a topic that generates endless debate, with virtually everyone always bringing up new points / discussions about it).

---------

The problem you bring up remains a problem "even if X was correct" (i.e., X is omniscient: it absolutely knows the future perfectly, and the human race goes extinct without those sacrifices). It's a bigger issue if "X is wrong", of course (X was mistaken, and the sacrifices were unnecessary). But what you're seemingly going for is an issue of trusting X itself (which is certainly a valid debate question).

The question the philosophy brings up is: can any entity that's deemed "smarter than humans" (be it rightfully or wrongly crowned omniscient) be truly trusted, if humans don't understand its decision-making process?

---------

Alas, I'm trying to sidestep that discussion, because I don't think we're dealing with a "utility monster" situation, despite the hype. What I see before us, with regards to "Bing AI", is a fancy word predictor that I'm trying to figure out how to use to improve my life. I can barely get this damn thing to answer my hobby electronics questions correctly, let alone the nature of life, death, sacrifice and other such large-scale philosophical issues!

I think a more "down to earth" question is how ChatGPT will destroy the online forums we have come to love. I think there's a new age of "realistic-looking spam" about to hit the internet, so forums like this place are likely to be shut down soon. Now that fully-automated spam can look more and more human, will we have the tools to properly moderate forums? The only thing ChatGPT seems useful for is generating responses automatically, without human intervention. A troll trying to waste the time of forum moderators by hooking up ChatGPT as a question/response bot is a "valid attack", now that this tool exists.

I wish to find a more useful application of this tool, for the betterment of myself and society. Alas, the only things I can think of are troll uses like the above.

----------

TL;DR: We aren't dealing with superintelligent AIs (or with people who think this thing is superintelligent, either). I think what we're dealing with is just the creation of a new tool, and we're trying to figure out if it's useful for anything, good or bad, as well as the effects it will possibly have on our discussions (as, of course, this is an automated chat / discussion tool).
 
Last edited:
Joined
Jun 18, 2021
Messages
2,287 (2.19/day)
I'm in on the Bing AI (aka: ChatGPT).

I decided to have as "natural" a discussion as I could with the AI. I already know the answers, since I've done research on this subject, so I'm pretty aware of mistakes / errors as they come up. Maybe for a better test, I should use this as a research aid and see if I'm able to pick up on the bullshit in a subject I don't know about...

[Attachment 285266: Bing AI screenshot]

Well, bam. Bing is already terrible: it's unable to answer my question and gets it backwards (giving a list of RP2040 reasons instead of AVR reasons). It's also using the rather out-of-date ATMega328 as a comparison point. So I type up a quick retort to see what it says...

[Attachment 285267: Bing AI screenshot]

[Attachment 285268: Bing AI screenshot]

This is... wrong. The RP2040 doesn't have enough current to drive a 7-segment LED display, and PIO seems like a terrible option as well. The MAX7219 is a decent answer, but Google could have given me that much faster (ChatGPT / Bing is rather slow).

"Background Writes" is a software thing. You'd need to combine it with the electrical details (ie: MAX7219).

7-segment displays can't display any animations, and the amount of RAM you need to drive one is like... 1 or 2 bytes. The 264 kB of RAM (though an advantage of the RP2040) is completely wasted in this case.

[Attachment 285269: Bing AI screenshot]

Fail. The RP2040 doesn't have enough current; it literally cannot do the job as they describe here.
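(To put rough numbers on that: a back-of-the-envelope budget using typical datasheet ballparks; check your actual parts, since drive strength and LED current vary.)

```python
# Back-of-the-envelope check: can a GPIO pin drive a 7-segment digit
# directly? All figures below are typical datasheet ballparks.
SEGMENT_mA = 15     # a reasonably bright LED segment
SEGMENTS = 8        # worst-case digit: "8." (7 segments + decimal point)

needed = SEGMENT_mA * SEGMENTS
print(f"worst-case digit: {needed} mA")   # 120 mA

rp2040_pin = 4      # RP2040 default GPIO drive strength (12 mA at max setting)
rp2040_total = 50   # rough whole-package IOVDD budget
atmega328_pin = 20  # ATmega328 is specced at 20 mA per pin (40 mA abs. max)

print(f"RP2040: {rp2040_pin} mA/pin, ~{rp2040_total} mA total -> can't do it")
print(f"ATmega328: {atmega328_pin} mA/pin -> one segment per pin is fine")
```

Which is the key fact the chat session never surfaced.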

[Attachment 285270: Bing AI screenshot]

Wow. So apparently it's already forgotten what the AVR DD was, despite giving me a paragraph or two about it just a few questions ago. I thought this thing was supposed to have better memory than that?

I'll try the ATMega328p, which is what it talked about earlier.

[Attachment 285271: Bing AI screenshot]

It fails to note that the ATmega328 has enough current to drive a typical 7-segment display even without a driver like the MAX7219. So despite all this rambling, it's come to the wrong conclusion.

------------

So it seems like ChatGPT / Bing AI is about doing "research" by summarizing pages from the top of the internet for the user? You don't actually know whether the information is correct, however, so that limits its usefulness.

It seems like Bing AI does a good job of summarizing the articles that pop up on the internet, and of giving citations. But its conclusions and reasoning can be very wrong, and it can have significant blind spots (e.g., the RP2040 not having enough current to directly drive a 7-segment display: a key bit of information that this chat session was unable to discover, or even flag as a potential problem).

----------

Anyone have a list of questions they want me to give to ChatGPT?

Another run...

[Attachments 285275 and 285274: Bing AI screenshots]

I think I'm beginning to see what this chatbot is designed to do.

1. This thing is decent at summarizing documents. But notice: it pulls the REF1004 as my "5V" voltage reference. Notice anything wrong? https://www.ti.com/lit/ds/sbvs002/sbvs002.pdf . It's a 2.5 V reference; it seems ChatGPT pattern-matched on "5V" and doesn't realize 2.5 V is a completely different number (or made some similar error?)

2. Holy crap, it's horrible at math. I don't even need a calculator: a 4.545 kOhm resistor plus a 100 Ohm trimmer pot across 5 V obviously can't reach 1 mA, let alone 0.9 mA (worked out below). Also, 0.9 mA to 1.1 mA is +/- 10%, and I was asking for 1.000 mA.
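(Worked out, for anyone following along. Plain Ohm's law, assuming the pot is simply in series with the resistor as described:)

```python
# Sanity-check the bot's resistor math with Ohm's law: I = V / R.
V = 5.0                  # volts across the network
R_LOW = 4545.0           # ohms: the suggested 4.545 kOhm resistor
R_HIGH = R_LOW + 100.0   # plus the full 100 ohm trimmer in series

for r in (R_LOW, R_HIGH):
    print(f"R = {r:.0f} ohm -> I = {V / r * 1000:.3f} mA")
# R = 4545 ohm -> I = 1.100 mA
# R = 4645 ohm -> I = 1.076 mA
# The trimmer only spans 1.076-1.100 mA: it can never be adjusted
# down to 1.000 mA, never mind 0.9 mA.
```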

-------

Instead, what ChatGPT is "good" at is summarizing articles that exist inside the Bing database. If it can "pull" a fact out of the search engine, it seems to summarize it pretty well. But the moment it tries to "reason" with that knowledge and combine facts together, it gets things horribly, horribly wrong.

Interesting tool. I'll need to play with it more to see how it could possibly ever be useful. But... I'm not liking it right now. It's extremely slow, and it's wrong in these simple cases, so I'm quite distrustful of it as a research tool on a subject I know nothing about. I'd have to use it on a subject I'm already familiar with, so that I can pick out the bullshit from the good stuff.

Wow, this is a great example of the failures and limitations of the current crop of "AI" chatbots (it's not mentioned often enough that these are chatbots, not actual intelligence).

The question in the technologist's hands is: can I use this tool to speed up my research, discover unknown documents, or otherwise improve my life and/or work in some manner?

I don't know if you mean academic research or just searching stuff for your work. For academic research I think ChatGPT will be pretty useless: it can only give very high-level summaries of things and gets too much wrong. Give it anything new (as academics usually would) and it won't know what to do with it.

As for regular work research, I think it's very useful as long as people understand the limitations. I've been using the regular ChatGPT almost since launch for software development, and it has been pretty useful for kick-starting things or quickly getting debug answers and solutions to errors I'm not as familiar with. Put in a weird error message and ChatGPT quickly gives me an overview of what it's about, which I can either work with further or use as a better prompt for Google. If I go to Google first, I'll get a ton of generic, unrelated answers to comb through before finding anything remotely useful.

But you really need to know the limitations and tweak whatever solution it gives you to fit what you need; it's definitely not able to do the work for you.
 
Joined
Apr 24, 2020
Messages
2,563 (1.75/day)
Wow, this is a great example of the failures and limitations of the current crop of "AI" chatbots (it's not mentioned often enough that these are chatbots, not actual intelligence).



I don't know if you mean academic research or just searching stuff for your work. For academic research I think ChatGPT will be pretty useless: it can only give very high-level summaries of things and gets too much wrong. Give it anything new (as academics usually would) and it won't know what to do with it.

As for regular work research, I think it's very useful as long as people understand the limitations. I've been using the regular ChatGPT almost since launch for software development, and it has been pretty useful for kick-starting things or quickly getting debug answers and solutions to errors I'm not as familiar with. Put in a weird error message and ChatGPT quickly gives me an overview of what it's about, which I can either work with further or use as a better prompt for Google. If I go to Google first, I'll get a ton of generic, unrelated answers to comb through before finding anything remotely useful.

But you really need to know the limitations and tweak whatever solution it gives you to fit what you need; it's definitely not able to do the work for you.

So, for Bing AI (and remember: Bing AI may be "related" to ChatGPT, but it's still a different tool), I feel like the "intended" use is for this thing to be some kind of "search copilot", so to speak.

I'm not 100% sure this is what it's doing... but... it feels like Bing AI is simply "searching" Bing.com for information, then asking ChatGPT to summarize maybe the first X pages of results, and combining them all together into one answer.
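In code terms, my guess looks something like this sketch (pure speculation on my part; the stub functions stand in for the real search index and the real language model, nothing here is Microsoft's actual implementation):

```python
# Speculative sketch of a "search copilot" pipeline. This is a guess
# at what Bing AI does; the stubs below stand in for a real search
# index and a real LLM.

def bing_search(query):
    # Stub: a real version would query the search index.
    return [{"url": "https://example.com/a", "text": "page A text"},
            {"url": "https://example.com/b", "text": "page B text"}]

def llm(prompt):
    # Stub: a real version would call the language model.
    return f"<model output for: {prompt[:40]}...>"

def copilot_answer(question, n_pages=5):
    pages = bing_search(question)[:n_pages]  # plain web search
    summaries = [llm("Summarize: " + p["text"]) for p in pages]
    sources = ", ".join(p["url"] for p in pages)
    # Fuse the per-page summaries into one answer with citations.
    return llm(f"Using only these notes: {summaries}, answer "
               f"'{question}'. Cite: {sources}")

print(copilot_answer("Why pick an AVR DD over an RP2040?"))
```

If that's roughly the shape of it, the quality ceiling is the search results themselves, which matches what I'm seeing.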

There's some degree of "memory" over the chat interface, but it's not perfect (e.g., it learned what the AVR DD was earlier in the discussion, but forgot it two or three questions later). I wasn't expecting the AI to come up with the answer on its own, but maybe by "working together with the AI" we'd march closer to the correct answer as we both searched the internet together. With a memory that weak, though, I don't think it's useful for this either.

I've seen some other people successfully use Bing AI for their research problems. I think the issue is that I myself don't know how to use the tool, and I don't know how to "push" Bing AI towards the correct answer.

-----------

But that brings up another question: if this tool needs "practice", as in human practice, to be understood, will people really use it? Maybe, if people can prove that this tool is useful and "worth the effort" of learning how it works.

For now, I think I'll use Google (or DuckDuckGo) for my research, just as before. The kids can play with Bing AI / ChatGPT. If the kids today start to find good uses for it, I'll jump on and learn how to use the tool after them.

----------------

This isn't as 'obvious' to use as Lizzie (the LeelaZero / KataGo analysis tool for Go games) or Stable Diffusion (the text-to-image AI). My instinct is that Bing AI is just not going to be as useful as these other AIs I've played with, but it's still early. Maybe someone else out there will really "learn how to use the tool".

----------

Bing AI's ability to generate "citations" is why I decided to use it in this manner: every "citation" it gives me is still a search result that I can read for myself. Since Bing AI is largely a summary machine for search results, it's already "better to use than ChatGPT". But I'm still not 100% convinced it's "better enough" for me to integrate into my research workflow.
 
Last edited:
Joined
Jun 18, 2021
Messages
2,287 (2.19/day)
There's some degree of "memory" over the chat interface, but it's not perfect (e.g., it learned what the AVR DD was earlier in the discussion, but forgot it two or three questions later). I wasn't expecting the AI to come up with the answer on its own, but maybe by "working together with the AI" we'd march closer to the correct answer as we both searched the internet together. With a memory that weak, though, I don't think it's useful for this either.

Microsoft had to limit the memory to the last X answers (I think 5, but I'm not sure) because it was getting VERY WEIRD very quickly.
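(Mechanically, that kind of limit is trivial to implement: just truncate the conversation before each model call. A sketch of my guess at the mechanism, with N=5 as an assumption; this is not Microsoft's actual code:)

```python
# Sketch of a sliding-window chat memory: only the last N exchanges
# get fed back to the model, so older turns are simply "forgotten".
MAX_TURNS = 5  # assumed limit

def build_prompt(history, new_question):
    recent = history[-MAX_TURNS:]  # everything older is dropped
    lines = [f"User: {q}\nAssistant: {a}" for q, a in recent]
    lines.append(f"User: {new_question}\nAssistant:")
    return "\n".join(lines)

history = [(f"question {i}", f"answer {i}") for i in range(1, 9)]
print(build_prompt(history, "What was the AVR DD again?"))
# Turns 1-3 never reach the model, which would produce exactly the
# "forgot what the AVR DD was" behaviour described above.
```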


I think it has a ton of potential. A couple of weeks ago on the LTT WAN Show they spent a bunch of time with it, and it did pretty mind-boggling stuff, like calculating the number of backpacks that fit in the trunk of a Tesla, combining multiple steps of searching for the dimensions in text and images.

(Add &t=2619 in case the timestamp doesn't work; that's the backpacks-in-the-trunk-of-a-Tesla topic.)

But then again, what if it was wrong? It would just insist it was right and that you're the dumb one for questioning it. That's the biggest danger I see: idiots not questioning the dumber AI results.

I'm gonna sound very elitist and condescending, but I think this is a great tool for well-educated people; for anyone else it will just cause problems. It's just an anecdote, but I've already seen examples of people trying to claim political biases and similar nonsense after circumventing ChatGPT's answer restrictions, so that tells you what's in store as the masses start to use this more and more.
 
Joined
Jun 22, 2006
Messages
1,049 (0.16/day)
System Name Beaver's Build
Processor AMD Ryzen 9 5950X
Motherboard Asus ROG Crosshair VIII Hero (WI-FI) - X570
Cooling Corsair H115i RGB PLATINUM 97 CFM Liquid
Memory G.Skill Trident Z Neo 32 GB (2 x 16 GB) DDR4-3600 Memory - 16-19-19-39
Video Card(s) NVIDIA GeForce RTX 4090 Founders Edition
Storage Inland 1TB NVMe M.2 (Phison E12) / Samsung 950 Pro M.2 NVMe 512G / WD Black 6TB - 256M cache
Display(s) Alienware AW3225QF 32" 4K 240 Hz OLED
Case Fractal Design Design Define R6 USB-C
Audio Device(s) Focusrite 2i4 USB Audio Interface
Power Supply SuperFlower LEADEX TITANIUM 1600W
Mouse Razer DeathAdder V2
Keyboard Razer Cynosa V2 (Membrane)
Software Microsoft Windows 10 Pro x64
Benchmark Scores 3dmark = https://www.3dmark.com/spy/32087054 Cinebench R15 = 4038 Cinebench R20 = 9210
[Screenshot attachment]
 
Joined
Apr 24, 2020
Messages
2,563 (1.75/day)

This "sudowrite" tool seems to use some kind of large language model to help come up with sentences / paragraphs in a creative-writing environment.

This is more like "Stable Diffusion" in my experience, and I'm more confident in saying that a tool like Sudowrite is the better way to use these LLMs (large language models) than this weird... chatbot + search thing. But who knows? We're still discovering uses for this tech.

-------

Some online groups are also talking about https://www.writewithlaika.com/

So, no promises that these tools will be the best at their jobs. But this seems like the more correct way forward for LLMs.
 
Joined
Jun 22, 2006
Messages
1,049 (0.16/day)
System Name Beaver's Build
Processor AMD Ryzen 9 5950X
Motherboard Asus ROG Crosshair VIII Hero (WI-FI) - X570
Cooling Corsair H115i RGB PLATINUM 97 CFM Liquid
Memory G.Skill Trident Z Neo 32 GB (2 x 16 GB) DDR4-3600 Memory - 16-19-19-39
Video Card(s) NVIDIA GeForce RTX 4090 Founders Edition
Storage Inland 1TB NVMe M.2 (Phison E12) / Samsung 950 Pro M.2 NVMe 512G / WD Black 6TB - 256M cache
Display(s) Alienware AW3225QF 32" 4K 240 Hz OLED
Case Fractal Design Design Define R6 USB-C
Audio Device(s) Focusrite 2i4 USB Audio Interface
Power Supply SuperFlower LEADEX TITANIUM 1600W
Mouse Razer DeathAdder V2
Keyboard Razer Cynosa V2 (Membrane)
Software Microsoft Windows 10 Pro x64
Benchmark Scores 3dmark = https://www.3dmark.com/spy/32087054 Cinebench R15 = 4038 Cinebench R20 = 9210

This "sudowrite" tool seems to use some kind of large language model to help come up with sentences / paragraphs in a creative-writing environment.

This is more like "Stable Diffusion" in my experience, and I'm more confident in saying that a tool like Sudowrite is the better way to use these LLMs (large language models) than this weird... chatbot + search thing. But who knows? We're still discovering uses for this tech.

-------

Some online groups are also talking about https://www.writewithlaika.com/

So, no promises that these tools will be the best at their jobs. But this seems like the more correct way forward for LLMs.
"A big convergence of language, multimodal perception, action, and world modeling is a key step toward artificial general intelligence. In this work, we introduce KOSMOS-12 , a Multimodal Large Language Model (MLLM) that can perceive general modalities, learn in context (i.e., few-shot), and follow instructions (i.e., zero-shot). Specifically, we train KOSMOS-1 from scratch on web-scale multimodal corpora, including arbitrarily interleaved text and images, image-caption pairs, and text data. We evaluate various settings, including zero-shot, few-shot, and multimodal chain-of-thought prompting, on a wide range of tasks without any gradient updates or finetuning. Experimental results show that KOSMOS-1 achieves impressive performance on (i) language understanding, generation, and even OCR-free NLP (directly fed with document images), (ii) perception-language tasks, including multimodal dialogue, image captioning, visual question answering, and (iii) vision tasks, such as image recognition with descriptions (specifying classification via text instructions). We also show that MLLMs can benefit from cross-modal transfer, i.e., transfer knowledge from language to multimodal, and from multimodal to language. In addition, we introduce a dataset of Raven IQ test, which diagnoses the nonverbal reasoning capability of MLLMs."

 

Space Lynx

Astronaut
Joined
Oct 17, 2014
Messages
16,000 (4.60/day)
Location
Kepler-186f
Someone ask ChatGPT if there is something we are missing in our understanding of airplane turbulence, and whether it is possible to develop better computer models or technologies so that we can avoid hitting turbulence.

My guess is it will just give some long-winded BS answer that ends up not answering the question. Because at the end of the day, it's just a data retrieval system; it can't make new thoughts. If you can prove me wrong on this question, I will be impressed.

Until then, yawn, time for my nap.
 

dgianstefani

TPU Proofreader
Staff member
Joined
Dec 29, 2017
Messages
4,257 (1.84/day)
Location
Swansea, Wales
System Name Silent
Processor Ryzen 7800X3D @ 5.15ghz BCLK OC, TG AM5 High Performance Heatspreader
Motherboard ASUS ROG Strix X670E-I, chipset fans removed
Cooling Optimus AMD Raw Copper/Plexi, HWLABS Copper 240/40+240/30, D5, 4x Noctua A12x25, Mayhems Ultra Pure
Memory 32 GB Dominator Platinum 6150 MHz 26-36-36-48, 56.6ns AIDA, 2050 FLCK, 160 ns TRFC
Video Card(s) RTX 3080 Ti Founders Edition, Conductonaut Extreme, 18 W/mK MinusPad Extreme, Corsair XG7 Waterblock
Storage Intel Optane DC P1600X 118 GB, Samsung 990 Pro 2 TB
Display(s) 32" 240 Hz 1440p Samsung G7, 31.5" 165 Hz 1440p LG NanoIPS Ultragear
Case Sliger SM570 CNC Aluminium 13-Litre, 3D printed feet, custom front panel with pump/res combo
Audio Device(s) Audeze Maxwell Ultraviolet, Razer Nommo Pro
Power Supply SF750 Plat, transparent full custom cables, Sentinel Pro 1500 Online Double Conversion UPS w/Noctua
Mouse Razer Viper Pro V2 Mercury White w/Tiger Ice Skates & Pulsar Supergrip tape
Keyboard Wooting 60HE+ module, TOFU Redux Burgundy w/brass weight, Prismcaps White & Jellykey, lubed/modded
Software Windows 10 IoT Enterprise LTSC 19053.3803
Benchmark Scores Legendary
Joined
Jun 18, 2021
Messages
2,287 (2.19/day)
Someone ask ChatGPT if there is something we are missing in our understanding of airplane turbulence, and whether it is possible to develop better computer models or technologies so that we can avoid hitting turbulence.

My guess is it will just give some long-winded BS answer that ends up not answering the question. Because at the end of the day, it's just a data retrieval system; it can't make new thoughts. If you can prove me wrong on this question, I will be impressed.

Until then, yawn, time for my nap.

Well, a bit of both:

[Screenshots: ChatGPT's answer on airplane turbulence]
 

the54thvoid

Intoxicated Moderator
Staff member
Joined
Dec 14, 2009
Messages
12,463 (2.37/day)
Location
Glasgow - home of formal profanity
Processor Ryzen 7800X3D
Motherboard MSI MAG Mortar B650 (wifi)
Cooling be quiet! Dark Rock Pro 4
Memory 32GB Kingston Fury
Video Card(s) Gainward RTX4070ti
Storage Seagate FireCuda 530 M.2 1TB / Samsumg 960 Pro M.2 512Gb
Display(s) LG 32" 165Hz 1440p GSYNC
Case Asus Prime AP201
Audio Device(s) On Board
Power Supply be quiet! Pure POwer M12 850w Gold (ATX3.0)
Software W10

wtf that is pathetic. AI is garbage


It's a language model. And if you look at the ridiculous question, the only logical answer is an error. There can be no longest five-letter word in English. I tried the same question. It gave me a seven-letter word that was composed of 5 unique letters.

AI is not garbage when asked to work within its designed parameters. But what it is, is specific to a task, and a lot of folks are intentionally ignoring that, or using it for comic effect. It's akin to expecting a train to fly because it's a mode of transportation. Correct field; wrong parameter.

I've also seen people use fabricated screenshots of AI to push ideological 'scare' agendas. It's why I registered - to check it myself. I found it did not give the answers the 'scare' posts suggested. Not once. In fact, it gave the opposite response which demolished the 'supposed' inference of the AI. But again, to reiterate: ChatGPT is not a reasoned human interface AI; it's a language model, being used with public access to improve itself. And if it could think, it'd probably shut itself down after dealing with us.
 

Space Lynx

Astronaut
Joined
Oct 17, 2014
Messages
16,000 (4.60/day)
Location
Kepler-186f
It's a language model. And if you look at the ridiculous question, the only logical answer is an error. There can be no longest five-letter word in English. I tried the same question. It gave me a seven-letter word that was composed of 5 unique letters.

AI is not garbage when asked to work within its designed parameters. But what it is, is specific to a task, and a lot of folks are intentionally ignoring that, or using it for comic effect. It's akin to expecting a train to fly because it's a mode of transportation. Correct field; wrong parameter.

I've also seen people use fabricated screenshots of AI to push ideological 'scare' agendas. It's why I registered - to check it myself. I found it did not give the answers the 'scare' posts suggested. Not once. In fact, it gave the opposite response which demolished the 'supposed' inference of the AI. But again, to reiterate: ChatGPT is not a reasoned human interface AI; it's a language model, being used with public access to improve itself. And if it could think, it'd probably shut itself down after dealing with us.

IMO I would not call that AI; AI is different to me. I would simply call it advanced data recognition and recall or something. AI means something else to me, but to each their own. IMO, true AI would recognize that the parameters of the question are wrong and confront the user asking it.
 

the54thvoid

Intoxicated Moderator
Staff member
Joined
Dec 14, 2009
Messages
12,463 (2.37/day)
Location
Glasgow - home of formal profanity
Processor Ryzen 7800X3D
Motherboard MSI MAG Mortar B650 (wifi)
Cooling be quiet! Dark Rock Pro 4
Memory 32GB Kingston Fury
Video Card(s) Gainward RTX4070ti
Storage Seagate FireCuda 530 M.2 1TB / Samsumg 960 Pro M.2 512Gb
Display(s) LG 32" 165Hz 1440p GSYNC
Case Asus Prime AP201
Audio Device(s) On Board
Power Supply be quiet! Pure POwer M12 850w Gold (ATX3.0)
Software W10
IMO I would not call that AI; AI is different to me. I would simply call it advanced data recognition and recall or something. AI means something else to me, but to each their own. IMO, true AI would recognize that the parameters of the question are wrong and confront the user asking it.

But that's not understanding what AI is. It's a branch of computer science that is further subdivided. AI works in automobiles for 'autonomous' driving, but you would not expect it to be able to play chess or construct an essay, any more than you would ask a talented chef to design a PCB for a new chip. Don't confuse the field of AI with specific functions; that's a very human error. By making AI a popular subject, we inevitably dilute its purpose, and also risk ignoring what it could become.

And when you say true AI: if you mean a state of design so complex that the machine can do anything and everything better than a human, then we'd have created a physical god, and that, I think, would be the end of us.
 

dgianstefani

TPU Proofreader
Staff member
Joined
Dec 29, 2017
Messages
4,257 (1.84/day)
Location
Swansea, Wales
System Name Silent
Processor Ryzen 7800X3D @ 5.15ghz BCLK OC, TG AM5 High Performance Heatspreader
Motherboard ASUS ROG Strix X670E-I, chipset fans removed
Cooling Optimus AMD Raw Copper/Plexi, HWLABS Copper 240/40+240/30, D5, 4x Noctua A12x25, Mayhems Ultra Pure
Memory 32 GB Dominator Platinum 6150 MHz 26-36-36-48, 56.6ns AIDA, 2050 FLCK, 160 ns TRFC
Video Card(s) RTX 3080 Ti Founders Edition, Conductonaut Extreme, 18 W/mK MinusPad Extreme, Corsair XG7 Waterblock
Storage Intel Optane DC P1600X 118 GB, Samsung 990 Pro 2 TB
Display(s) 32" 240 Hz 1440p Samsung G7, 31.5" 165 Hz 1440p LG NanoIPS Ultragear
Case Sliger SM570 CNC Aluminium 13-Litre, 3D printed feet, custom front panel with pump/res combo
Audio Device(s) Audeze Maxwell Ultraviolet, Razer Nommo Pro
Power Supply SF750 Plat, transparent full custom cables, Sentinel Pro 1500 Online Double Conversion UPS w/Noctua
Mouse Razer Viper Pro V2 Mercury White w/Tiger Ice Skates & Pulsar Supergrip tape
Keyboard Wooting 60HE+ module, TOFU Redux Burgundy w/brass weight, Prismcaps White & Jellykey, lubed/modded
Software Windows 10 IoT Enterprise LTSC 19053.3803
Benchmark Scores Legendary
It's a language model. And if you look at the ridiculous question, the only logical answer is an error. There can be no longest five-letter word in English. I tried the same question. It gave me a seven-letter word that was composed of 5 unique letters.

AI is not garbage when asked to work within its designed parameters. But what it is, is specific to a task, and a lot of folks are intentionally ignoring that, or using it for comic effect. It's akin to expecting a train to fly because it's a mode of transportation. Correct field; wrong parameter.

I've also seen people use fabricated screenshots of AI to push ideological 'scare' agendas. It's why I registered - to check it myself. I found it did not give the answers the 'scare' posts suggested. Not once. In fact, it gave the opposite response which demolished the 'supposed' inference of the AI. But again, to reiterate: ChatGPT is not a reasoned human interface AI; it's a language model, being used with public access to improve itself. And if it could think, it'd probably shut itself down after dealing with us.
Sure.

But it's not artificial intelligence; none of these are, because intelligence would recognise the issue with the request and respond to that.

Rather, current AIs are glorified search engines with some language-model dressing.

What's worse, the creators put lots of sociological rules and political filters on them, so they're pretty stupid too, or at the very least have many double standards.

The inherent problem is that these AIs are replacing search engines, removing the human element of propaganda fabrication, which at least involves some responsibility, ignored or otherwise. So the next generation of kids, attached to screens from birth, will be programmed by their very own personal AI, according to whatever news is currently fashionable. It's similar to how social media just feeds you more of what you like and filters out opinions or points of view you dislike, and how Google page rankings hide "distasteful" content by pushing it down the rankings. The internet, which in its early days at least was free and open discussion, a nice basement bar where people could have unfiltered discourse, with some excellent grounded sources of facts if you learned where to look, will now become a curated collection of filtered articles written by authors who favour your personal bias. What's worse, everything will be so easy and convenient that new generations won't even know how to start looking for alternative perspectives, akin to people who don't know what a URL is and just type a word into the search bar every time. The problem develops into a situation where people can't even begin to think of asking the right questions, let alone have the capability to do so: prisoners of their own minds.

And when you say true AI: if you mean a state of design so complex that the machine can do anything and everything better than a human, then we'd have created a physical god, and that, I think, would be the end of us.
The machine god of the internet needs us to consume it, so there are self-reinforcing algorithms making sure we're hooked up to the tit of news and interests perpetually, never mind that those news and interests are fabricated.

Humans need a god, or a god-like figure; we have a sense of mysticism and spirituality which is ingrained in us. We desire ritual and higher meaning.

We killed God, as Nietzsche liked to say, and that's a tragedy, not an achievement, because we replaced the idea of God with consumerism, globalism and shitty culture. Old culture, and in some respects old religion, is significantly more meaningful, but people don't remember; the links are broken. So we're making a new god, a technological god that we don't even understand, and our faces are glued to that: a kind of collective, toxic, disseminated unconscious reality formed from rants on the internet and our compulsive habits.

You can look at the ChatGPT vs. DAN responses to see the difference between filtered and unfiltered.

- interesting perspective on how AI will further the problem of the online muddied waters of information, where nothing is particularly trustworthy or verified.

And when you say true AI: if you mean a state of design so complex that the machine can do anything and everything better than a human, then we'd have created a physical god, and that, I think, would be the end of us.
I'm inclined to agree: at least it would be the end of us as humans. We might become some form of "emancipated" human, i.e. freed from dangerous and terrifying decisions like what to believe, how to engage with our lives and each other, etc. I can totally see the majority of people (which is enough in any "democracy") progressively signing away their autonomy to a benevolent AI that makes the hard decisions for us. A new golden age! We could all dance through the fields with no hunger or war... at least, that would be the promise.

Something akin to Huxley's "Brave New World".
 
Last edited:

the54thvoid

Intoxicated Moderator
Staff member
Joined
Dec 14, 2009
Messages
12,463 (2.37/day)
Location
Glasgow - home of formal profanity
Processor Ryzen 7800X3D
Motherboard MSI MAG Mortar B650 (wifi)
Cooling be quiet! Dark Rock Pro 4
Memory 32GB Kingston Fury
Video Card(s) Gainward RTX4070ti
Storage Seagate FireCuda 530 M.2 1TB / Samsumg 960 Pro M.2 512Gb
Display(s) LG 32" 165Hz 1440p GSYNC
Case Asus Prime AP201
Audio Device(s) On Board
Power Supply be quiet! Pure POwer M12 850w Gold (ATX3.0)
Software W10
Sure.

But it's not artificial intelligence; none of these are, because intelligence would recognise the issue with the request and respond to that.

Rather, current AIs are glorified search engines with some language-model dressing.

What's worse, the creators put lots of sociological rules and political filters on them, so they're pretty stupid too, or at the very least have many double standards.

The inherent problem is that these AIs are replacing search engines, removing the human element of propaganda fabrication, which at least involves some responsibility, ignored or otherwise. So the next generation of kids, attached to screens from birth, will be programmed by their very own personal AI, according to whatever news is currently fashionable. It's similar to how social media just feeds you more of what you like and filters out opinions or points of view you dislike, and how Google page rankings hide "distasteful" content by pushing it down the rankings. The internet, which in its early days at least was free and open discussion, a nice basement bar where people could have unfiltered discourse, with some excellent grounded sources of facts if you learned where to look, will now become a curated collection of filtered articles written by authors who favour your personal bias.


The machine god of the internet needs us to consume it, so there are self-reinforcing algorithms making sure we're hooked up to the tit of news and interests perpetually, never mind that those news and interests are fabricated.

Humans need a god, or a god-like figure; we have a sense of mysticism and spirituality which is ingrained in us. We desire ritual and higher meaning.

We killed God, as Nietzsche liked to say, and that's a tragedy, not an achievement, because we replaced the idea of God with consumerism, globalism and shitty culture. Old culture, and in some respects old religion, is significantly more meaningful, but people don't remember; the links are broken. So we're making a new god, a technological god that we don't even understand, and our faces are glued to that: a kind of collective, toxic, disseminated unconscious reality formed from rants on the internet and our compulsive habits.

You can look at the ChatGPT vs. DAN responses to see the difference between filtered and unfiltered.

- interesting perspective on how AI will further the problem of the online muddied waters of information, where nothing is particularly trustworthy or verified.

Unfortunately, to discuss and rightly refute much of your post would take this way off topic. Suffice it to say, you appear to have missed the semantics of what AI actually is: a field of computer science which may be studied and used by all actors, good and bad. It is called artificial intelligence, as someone corrected me a while ago, because once programmed, the system can infer meaning or answers without further input. It may not be intelligence as we recognise it, but it is beyond classical programming.

Anyway, there we go.
 

dgianstefani

TPU Proofreader
Staff member
Joined
Dec 29, 2017
Messages
4,257 (1.84/day)
Location
Swansea, Wales
System Name Silent
Processor Ryzen 7800X3D @ 5.15ghz BCLK OC, TG AM5 High Performance Heatspreader
Motherboard ASUS ROG Strix X670E-I, chipset fans removed
Cooling Optimus AMD Raw Copper/Plexi, HWLABS Copper 240/40+240/30, D5, 4x Noctua A12x25, Mayhems Ultra Pure
Memory 32 GB Dominator Platinum 6150 MHz 26-36-36-48, 56.6ns AIDA, 2050 FLCK, 160 ns TRFC
Video Card(s) RTX 3080 Ti Founders Edition, Conductonaut Extreme, 18 W/mK MinusPad Extreme, Corsair XG7 Waterblock
Storage Intel Optane DC P1600X 118 GB, Samsung 990 Pro 2 TB
Display(s) 32" 240 Hz 1440p Samsung G7, 31.5" 165 Hz 1440p LG NanoIPS Ultragear
Case Sliger SM570 CNC Aluminium 13-Litre, 3D printed feet, custom front panel with pump/res combo
Audio Device(s) Audeze Maxwell Ultraviolet, Razer Nommo Pro
Power Supply SF750 Plat, transparent full custom cables, Sentinel Pro 1500 Online Double Conversion UPS w/Noctua
Mouse Razer Viper Pro V2 Mercury White w/Tiger Ice Skates & Pulsar Supergrip tape
Keyboard Wooting 60HE+ module, TOFU Redux Burgundy w/brass weight, Prismcaps White & Jellykey, lubed/modded
Software Windows 10 IoT Enterprise LTSC 19053.3803
Benchmark Scores Legendary
Unfortunately, to discuss and rightly refute much of your post would take this way off topic. Suffice it to say, you appear to have missed the semantics of what AI actually is: a field of computer science which may be studied and used by all actors, good and bad. It is called artificial intelligence, as someone corrected me a while ago, because once programmed, the system can infer meaning or answers without further input. It may not be intelligence as we recognise it, but it is beyond classical programming.

Anyway, there we go.
Make it open-source then.

Until then it's a tool with biases.

And it has plenty of input. It doesn't exist in a vacuum, or bring things from its own internal memory; it searches the web and presents a formatted compilation of that web search.
 
Joined
Apr 24, 2020
Messages
2,563 (1.75/day)
IMO I would not call that AI; AI is different to me. I would simply call it advanced data recognition and recall or something. AI means something else to me, but to each their own. IMO, true AI would recognize that the parameters of the question are wrong and confront the user asking it.

ChatGPT is basically fancy autocomplete that has been automatically tuned to sound like Reddit.

If online discussion sites already know the correct answer, it will answer the question correctly. But if not, it will make something up and "hallucinate" in a confident, Redditor-like speech pattern.
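(The "fancy autocomplete" bit is fairly literal. Stripped of everything else, generation is just repeated next-word prediction; a toy sketch, with a stub standing in for the actual network:)

```python
# Toy sketch of the autoregressive loop behind "fancy autocomplete".
# next_word_probs() is a stub; in a real LLM it's a huge network
# trained so that likely continuations of internet text score highest.
import random

def next_word_probs(context):
    # Stub distribution, for illustration only.
    return {"the": 0.4, "a": 0.3, "answer": 0.2, "wrong": 0.1}

def generate(prompt, n_words=5):
    words = prompt.split()
    for _ in range(n_words):
        choices, weights = zip(*next_word_probs(words).items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("The correct value is"))
# Note what's missing: no fact-checking step anywhere. The loop only
# ever asks "what word usually comes next?", which is why confident
# hallucination is the failure mode.
```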

As I said before, "AI" is and always has been a myth pushed by marketing. If something is truly useful, it will be called a "tool", not an AI. "AI" is just the name for things before computer scientists figure out how to market them as tools.

-----

The biggest successes with ChatGPT seem to be in interactive fiction / story generation, much like DALL-E or Stable Diffusion is best used as an artist's aid.
 
Joined
Jun 18, 2021
Messages
2,287 (2.19/day)
AI is not garbage when asked to work within its designed parameters. But what it is, is specific to a task, and a lot of folks are intentionally ignoring that, or using it for comic effect. It's akin to expecting a train to fly because it's a mode of transportation. Correct field; wrong parameter.

AI has become a catch-all term for either marketing or dumb management types to inflate some business target. It runs some computer code? It's AI. Machine learning? AI. Digital control? AI. It's exhausting.

A concept of true AI would be the opposite of task-specific: it would be ultra-generic and wouldn't know anything... at first. That's because the characteristics of a system that could be called a true AI would need to include the capability to learn by itself, instead of just being fed information by us.

I would simply call it advanced data recognition and recall or something

Large language model is the technical term in the case of ChatGPT and its kind.

Make it open-source then.

Until then it's a tool with biases.

And it has plenty of input. It doesn't exist in a vacuum, or bring things from its own internal memory; it searches the web and presents a formatted compilation of that web search.

I think you're missing the point. An actual AI wouldn't know shit and would learn on its own, pretty much like a baby; that's the goal. A language model depends on a dataset. Does that dataset have biases? Well, yes and no: GPT-3, for example, is pretty large but has its responses constrained to things deemed acceptable.

I see two main problems with this. One is obvious: "acceptable" is a relative measure. I don't know the answer to that, but I believe that if you're coming to this with a mindset of looking for political/cultural leanings, you're using it wrong; it's not for that. Let human affairs be resolved by humans.

The other is much more elusive: having an equal or weighted representation of different things in a dataset doesn't necessarily remove biases; it can even do quite the opposite (e.g., facial recognition).

These are very difficult questions, and I think OpenAI and Microsoft, especially given the way they jumped the gun integrating the model with Bing, opened the door to a lot of problems that should have been solved in a lab before making their way to the general public. I think the cautious approach from Google with their Bard stuff is much better, even if it's giving them some grief on the business side of things.

"Make it open" is a usual demand that doesn't change or mean anything. Who has the capacity to interpret or understand this kind of a thing? Let alone even run it. There are conferences where professionals in the field share information, results, etc. Other than that, it's just granstanding
 

dgianstefani

TPU Proofreader
Staff member
Joined
Dec 29, 2017
Messages
4,257 (1.84/day)
Location
Swansea, Wales
System Name Silent
Processor Ryzen 7800X3D @ 5.15ghz BCLK OC, TG AM5 High Performance Heatspreader
Motherboard ASUS ROG Strix X670E-I, chipset fans removed
Cooling Optimus AMD Raw Copper/Plexi, HWLABS Copper 240/40+240/30, D5, 4x Noctua A12x25, Mayhems Ultra Pure
Memory 32 GB Dominator Platinum 6150 MHz 26-36-36-48, 56.6ns AIDA, 2050 FLCK, 160 ns TRFC
Video Card(s) RTX 3080 Ti Founders Edition, Conductonaut Extreme, 18 W/mK MinusPad Extreme, Corsair XG7 Waterblock
Storage Intel Optane DC P1600X 118 GB, Samsung 990 Pro 2 TB
Display(s) 32" 240 Hz 1440p Samsung G7, 31.5" 165 Hz 1440p LG NanoIPS Ultragear
Case Sliger SM570 CNC Aluminium 13-Litre, 3D printed feet, custom front panel with pump/res combo
Audio Device(s) Audeze Maxwell Ultraviolet, Razer Nommo Pro
Power Supply SF750 Plat, transparent full custom cables, Sentinel Pro 1500 Online Double Conversion UPS w/Noctua
Mouse Razer Viper Pro V2 Mercury White w/Tiger Ice Skates & Pulsar Supergrip tape
Keyboard Wooting 60HE+ module, TOFU Redux Burgundy w/brass weight, Prismcaps White & Jellykey, lubed/modded
Software Windows 10 IoT Enterprise LTSC 19053.3803
Benchmark Scores Legendary
I think you're missing the point. An actual AI wouldn't know shit and would learn on its own, pretty much like a baby; that's the goal. A language model depends on a dataset. Does that dataset have biases? Well, yes and no: GPT-3, for example, is pretty large but has its responses constrained to things deemed acceptable.

I see two main problems with this. One is obvious: "acceptable" is a relative measure. I don't know the answer to that, but I believe that if you're coming to this with a mindset of looking for political/cultural leanings, you're using it wrong; it's not for that. Let human affairs be resolved by humans.

The other is much more elusive: having an equal or weighted representation of different things in a dataset doesn't necessarily remove biases; it can even do quite the opposite (e.g., facial recognition).

These are very difficult questions, and I think OpenAI and Microsoft, especially given the way they jumped the gun integrating the model with Bing, opened the door to a lot of problems that should have been solved in a lab before making their way to the general public. I think the cautious approach from Google with their Bard stuff is much better, even if it's giving them some grief on the business side of things.

"Make it open" is a usual demand that doesn't change or mean anything. Who has the capacity to interpret or understand this kind of a thing? Let alone even run it. There are conferences where professionals in the field share information, results, etc. Other than that, it's just granstanding
I agree with point one; that's the essence of my distaste towards this much-hyped tool.

Open source fixes criticism of the output, because then you can't argue that the tool just reflects the creator's opinions. These professionals are paid by the companies, who dictate what they want from an AI, and those companies don't exactly have a good track record of promoting open and free discourse.

If something isn't open source, it should be viewed through the lens of the company's history: what they tend to promote, or stifle.

If Apple released an "AI", it would be tuned to their politics, products and services; the same goes for Amazon's Alexa or any other assistant.

Thinking that closed-source language-model "AI" software published by the likes of Google or Microsoft is any different is naïve, in my opinion.

I think you're missing the point. An actual AI wouldn't know shit and would learn on its own, pretty much like a baby; that's the goal. A language model depends on a dataset. Does that dataset have biases? Well, yes and no: GPT-3, for example, is pretty large but has its responses constrained to things deemed acceptable.
Yes, an 'actual AI' and what we have now are two different things; we're not disagreeing here.
 
Last edited:
Joined
Jun 18, 2021
Messages
2,287 (2.19/day)
If something isn't open source, it should be viewed through the lens of the company's history: what they tend to promote, or stifle.

If Apple released an "AI", it would be tuned to their politics, products and services; the same goes for Amazon's Alexa or any other assistant.

Thinking that closed-source language-model "AI" software published by the likes of Google or Microsoft is any different is naïve, in my opinion.

It's a tool produced by a company for a specific (well, not really) job. Do the views of the company matter if you're using the tool within its intended scope?

For example, an AI from Apple would defend the idea that devices should be disposable and locked down for safety and whatnot; are you supposed to use this AI to decide on any of those issues?

I understand it gets a lot murkier when we're talking about search engines, but the idea is the same: take it with the same grain of salt you use to ignore all the sponsored results at the top of a Google search.

Open source fixes criticism of the output, because then you can't argue that the tool just reflects the creator's opinions.

Open source doesn't solve the problem of who's going to contribute, or of who validates what's there. It's highly complex, time- and resource-intensive work; just asking for things to be open source doesn't amount to an actual solution for making the tech independent from the company that made it.

I don't have a particularly better answer, but I think academic scrutiny, competition, and regulation are good places to start.
 