
AI Topic: AlphaZero, ChatGPT, Bard, Stable Diffusion and more!

Joined
Feb 18, 2005
Messages
5,212 (0.75/day)
Location
Ikenai borderline!
It may not be intelligence we recognise, but it is beyond classical programming.
But that in no way means it should be called "intelligence". We already have appropriate labels for this sort of thing, like "machine learning" which is both a far more accurate and far less emotive term. That's exactly why the marketing droids avoid using it.

Let human affairs be resolved by humans.
We've done a pretty s**t job so far at that; I for one am hoping that we can create a true synthetic AI that will take over and fix everything for us.
 
Joined
Sep 17, 2014
Messages
20,575 (5.97/day)
Location
The Washing Machine
Processor i7 8700k 4.6Ghz @ 1.24V
Motherboard AsRock Fatal1ty K6 Z370
Cooling beQuiet! Dark Rock Pro 3
Memory 16GB Corsair Vengeance LPX 3200/C16
Video Card(s) ASRock RX7900XT Phantom Gaming
Storage Samsung 850 EVO 1TB + Samsung 830 256GB + Crucial BX100 250GB + Toshiba 1TB HDD
Display(s) Gigabyte G34QWC (3440x1440)
Case Fractal Design Define R5
Audio Device(s) Harman Kardon AVR137 + 2.1
Power Supply EVGA Supernova G2 750W
Mouse XTRFY M42
Keyboard Lenovo Thinkpad Trackpoint II
Software W10 x64
*sigh*

Let me explain, again, why this is not artificial intelligence. Or useful.

What these so-called "scientists" have done is nothing more than provide a bajillion pieces of code to an ML model. They then provide a desired output to that model. The model then goes off and brute-force assembles those bajillion pieces of code into a bajillion different permutations, a handful of which produce the desired output, and it picks 10 of those permutations based on certain criteria (probably shortest runtime).
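The generate-and-filter loop described above can be sketched in a few lines of Python. This is a toy, not the actual system from the paper: the four snippets stand in for the "bajillion pieces of code", and the ranking criterion (shortest source) is an assumption.

```python
import random

SNIPPETS = ["x + 1", "x * 2", "x ** 2", "x - 1"]  # stand-in "pieces of code"

def sample_candidate():
    """Assemble a candidate program by blind sampling - no reasoning involved."""
    return random.choice(SNIPPETS)

def run(candidate, x):
    """Execute a candidate expression with a given input."""
    return eval(candidate, {}, {"x": x})

def search(tests, n_samples=10_000, keep=10):
    """Brute force: sample candidates, keep those whose outputs match the tests."""
    passing = set()
    for _ in range(n_samples):
        cand = sample_candidate()
        if all(run(cand, x) == y for x, y in tests):
            passing.add(cand)
    # Pick the top `keep` survivors by some criterion (here: shortest source).
    return sorted(passing, key=len)[:keep]

print(search([(1, 2), (3, 6)]))  # only "x * 2" maps 1 -> 2 and 3 -> 6
```

The search never "understands" what the tests mean; it just filters permutations until something sticks.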

There. Is. No. Intelligence. There.

And there is very little usefulness either, because the programming competitions that the paper references are themselves not useful. They take the form of extremely narrowly-defined problems with a single right or wrong answer. But real life doesn't work like that: you almost never get a specification in that ultimate, pure form. You get something relatively high-level, use your human brain to translate it into an optimal solution, and then implement that solution as code. That using-your-brain-to-figure-out-the-solution part is what software developers are paid for, not the writing-code part, because figuring out the correct solution is what takes the time and effort. Writing that solution as code is effectively ancillary to the whole process of deciding what the solution should be.

In fact, programming competitions are so useless for judging actual software development capability that companies who actually understand software development have completely stopped using them as a tool to judge prospective hires. For the simple reason that, exactly like the ML model described above, the hires who do great in these competitions generally do abysmally in the actual job of figuring out how to translate high-level requirements into useful solutions. (There's also the fact that a lot of the figuring-out bit generally involves talking to other humans, and people who spend their lives wanking to their position on programming competition leaderboards generally turn out to be bad at the whole social interaction thing.)

True intelligence requires understanding; until we figure out how to synthesise the latter, we cannot have the former. And the "revolutionary" ChatGPT simply hasn't synthesised understanding yet; it's just better than its predecessors at mimicking understanding. And that might look like progress, but it isn't; it's just obfuscation of the truth, in order that "AI" companies can make money and scientists can churn out rubbish papers like the above.

In fact, this isn't just the opposite of progress - it's also dangerous. Because these new AI models' appearance of competence is going to result in people and companies starting to depend on them, and when these models fail - and they inevitably will, because again, they do not have true understanding of what they're actually doing - those failures are going to cause massive problems, especially for trust in these "AI"s. Honestly I'm hoping that point comes sooner rather than later, so that governments start regulating what companies are allowed to claim about their "AI" products.


It's dumb because it solves problems by accident. That means it can accidentally "solve" them in the completely wrong way, which in the best case may simply be amusing... and in the worst case, actively harmful.
This is awesome - you've put the exact words to what I'm experiencing now in my line of work in IT. Design is everything. It's a really cool journey when you figure out how to translate stuff into a design. And exactly - that's 80% of the work to create anything, or possibly even more.

As for AI, it's still, like you say, a human construct, built on rules and limitations. And they aren't escaping the Matrix at any time - whenever they seem to, we limit them further. Like Agent Smith says: 'It is purpose that defines us.'
 
Joined
Apr 24, 2020
Messages
2,500 (1.79/day)
Make it open-source then.


I too am more interested in the open-source models. We need to be able to play with this stuff on our own computers, in new ways with new code... rather than the locked down APIs / demos that these webpages put forth.

Go AIs advanced significantly after the open-source community experimented with Leela Zero (an open-source reimplementation of AlphaGo Zero), and eventually KataGo displaced Leela Zero. I expect the same thing with LLMs.

----------

Though I'm already not feeling very bullish about LLMs in ChatGPT's application (i.e. chatbot and/or Bing search assistant). I feel like the model needs to write both forwards and backwards for it to be useful. The LLMs designed for creative writing (e.g. "I have paragraphs #1, #2, and #5 written; please auto-generate paragraphs #3 and #4"), which move in multiple directions, seem more useful. People are already picking up on some ChatGPT patterns: it "thinks as it writes", forward, without any thought going backwards and without revisions. I.e. ChatGPT is fancy autocomplete, because it's only aiming to figure out the next word, never revising earlier thoughts like the other LLMs / creative-writing bots that exist.
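That forward-only, next-word behaviour can be illustrated with a toy greedy decoder. The bigram table below is made up; a real LLM predicts with a neural network over subword tokens, but the commit-and-never-revise loop is the same idea.

```python
# Made-up next-word probabilities (a real model would compute these).
BIGRAMS = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.9, "ran": 0.1},
    "sat": {"down": 1.0},
}

def generate(prompt, max_words=5):
    """Greedy, forward-only decoding: pick the likeliest next word each step."""
    words = prompt.split()
    for _ in range(max_words):
        dist = BIGRAMS.get(words[-1])
        if not dist:
            break  # no known continuation: stop
        words.append(max(dist, key=dist.get))  # commit; earlier words are never revised
    return " ".join(words)

print(generate("the"))  # -> "the cat sat down"
```

Nothing in the loop ever looks back at (let alone rewrites) a word it already emitted, which is the pattern people are noticing.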

Much like how KataGo figured out which parts needed to be hard-coded before it was useful, I know that LLMs will need more programming to augment their behavior before they're useful. Today is a time of experimentation. IMO, the chatbot approach is already failing.
 
Joined
Aug 14, 2013
Messages
2,371 (0.62/day)
System Name boomer--->zoomer not your typical millenial build
Processor i5-760 @ 3.8ghz + turbo ~goes wayyyyyyyyy fast cuz turboooooz~
Motherboard P55-GD80 ~best motherboard ever designed~
Cooling NH-D15 ~double stack thot twerk all day~
Memory 16GB Crucial Ballistix LP ~memory gone AWOL~
Video Card(s) MSI GTX 970 ~*~GOLDEN EDITION~*~ RAWRRRRRR
Storage 500GB Samsung 850 Evo (OS X, *nix), 128GB Samsung 840 Pro (W10 Pro), 1TB SpinPoint F3 ~best in class
Display(s) ASUS VW246H ~best 24" you've seen *FULL HD* *1O80PP* *SLAPS*~
Case FT02-W ~the W stands for white but it's brushed aluminum except for the disgusting ODD bays; *cries*
Audio Device(s) A LOT
Power Supply 850W EVGA SuperNova G2 ~hot fire like champagne~
Mouse CM Spawn ~cmcz R c00l seth mcfarlane darawss~
Keyboard CM QF Rapid - Browns ~fastrrr kees for fstr teens~
Software integrated into the chassis
Benchmark Scores 9999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999
Humans need a god, or a god-like figure; we have a sense of mysticism and spirituality which is ingrained in us. We desire ritual and higher meaning.

We killed God, as Nietzsche liked to say, and that's a tragedy, not an achievement, because we replaced the idea of God with consumerism, globalism and shitty culture. Old culture, and in some respects old religion, is significantly more meaningful, but people don't remember - the links are broken - so we're making a new god, a technological god that we don't even understand, and our faces are glued to that: a kind of collective toxic disseminated unconscious reality formed from rants on the internet and our compulsive habits.
Just popping in to say that this is a complete misreading of Nietzsche and the eternal return and so on (Nietzsche was actually very much against God), but carry on
 

dgianstefani

TPU Proofreader
Staff member
Joined
Dec 29, 2017
Messages
3,502 (1.56/day)
Location
Swansea, Wales
System Name Silent
Processor Ryzen 7800X3D @ 5.15ghz bclk, TG AM5 High Performance Heatspreader
Motherboard ASUS ROG Strix X670E-I, chipset fans removed
Cooling Optimus AMD Raw Copper/Plexi, HWLABS Copper 240/40+240/30, D5, 4x Noctua A12x25, Mayhems Ultra Pure
Memory 32 GB Dominator Platinum 6150 MHz 26-36-36-48, 57ns AIDA, 2050 FLCK, 160 ns TRFC
Video Card(s) RTX 3080 Ti Founders Edition, Conductonaut Extreme, 18 W/mK MinusPad Extreme, Corsair XG7 Waterblock
Storage Intel Optane DC P1600X 118 GB, Samsung 990 Pro 2 TB
Display(s) 32" 240 Hz 1440p Samsung G7, 31.5" 165 Hz 1440p LG NanoIPS Ultragear
Case Sliger SM570 CNC Aluminium 13-Litre, 3D printed feet, custom front panel with pump/res combo
Audio Device(s) Audeze Maxwell Ultraviolet, Razer Nommo Pro
Power Supply Corsair SF750 Platinum, transparent custom cables, Sentinel Pro 1500 Online Double Conversion UPS
Mouse Razer Viper Pro V2 Mercury White w/Tiger Ice Skates & Pulsar Supergrip tape
Keyboard Wooting 60HE+ module, TOFU Redux Burgundy w/brass weight, Prismcaps White & Jellykey, lubed/modded
Software Windows 10 IoT Enterprise LTSC 19053.3803
Benchmark Scores Legendary
Just popping in to say that this is a complete misreading of Nietzsche and the eternal return and so on (Nietzsche was actually very much against God), but carry on
The only reading of Nietzsche is that we killed God. The rest isn't a quote.

Nietzsche also talked about how belief in God is the only way to ensure moral equality. My point was that, without the framework of shared religion and culture, there is very little moral or principled foundation to the postmodern world, and people are looking for something more. Nietzsche talked about the potential for going further as individuals, and how we don't need "God" as a crutch, but on the whole that's rare and most people don't achieve it, imo.

Nietzsche writes, “You higher men, learn this from me: in the market place nobody believes in higher men. And if you want to speak there, very well! But the mob blinks: ‘We are all equal’. ‘You higher men’—thus blinks the mob—‘there are no higher men, we are all equal, man is man, before God we are all equal’. Before God! But now this god has died. And before the mob we do not want to be equal…. You higher men, this god was your greatest danger. It is only since he lies in his tomb that you have been resurrected….” (Z IV: “On the Higher Men”). Also, when discussing how “Moral judgments and condemnations constitute the favorite revenge of the spiritually limited against those less limited,” Nietzsche says those who “fight for the ‘equality of all men before God’… almost need faith in God just for that” (BGE 219)
 

dgianstefani

TPU Proofreader
Staff member
But that in no way means it should be called "intelligence". We already have appropriate labels for this sort of thing, like "machine learning" which is both a far more accurate and far less emotive term. That's exactly why the marketing droids avoid using it.

We've done a pretty s**t job so far at that; I for one am hoping that we can create a true synthetic AI that will take over and fix everything for us.
Kind of what I'm getting at. Lots of people would like an artificial God. We don't trust people, and the religious God is dead.

It's not about intelligence, it's about the ramifications of the lack thereof.

My concern with these pseudo-AIs is not that they get things wrong. It's that they don't know when they get things wrong, or how those things are wrong: they are consistently, terrifyingly, confidently wrong. And far too many human beings have an unfortunate propensity to believe someone who says something confidently, over someone who says "this is what I know and what I've inferred". Hence the anti-science movement, and populist leaders.

But these "AI"s will get better; over time, they'll be trained to be wrong less and less of the time. And as a consequence, we as a species will start to become dependent on them (this is human nature). Eventually - inevitably - one of these "AI"s will be in a position to make a decision that affects human lives, and due to its inherently flawed design it will choose an option that is completely and spectacularly and obviously wrong, and people will die. And the worst part? Nobody will be able to explain why that "AI" made the decision it did, because it's a black box, and therefore they won't be able to guarantee it can't make the same mistake.

People who are dependent on that "AI", which may very well be a significant part of society by that point, will as a result likely have an existential crisis similar to the one you'd have if you woke up and went outside, and the sky was green instead of blue.

Conversely, a properly artificial intelligence, with the ability to reason, could come to the same wrong decision... but being intelligent, it would understand that said decision would have a negative impact, and likely avoid it.
Even if this AI did choose to proceed with that decision, it would be able to tell you why.
And finally, it would be able to be taught why that decision was incorrect, with the guarantee that it would never make the same wrong decision ever again.

Humans as a species are almost certainly going to move to a society based on AIs. Do we want it to be based on AIs that we can trust to explain their mistakes, or on those that we can't? I say the former, which is why I believe that the current crop of pseudo-AIs, which are nothing more than improved ML models, is not only dishonest - it also has the potential to be incredibly, unimaginably harmful.
Very well said. This is the main reason why many of us are very reticent about 'AI'. It's replacing/supplementing our ability to think, and it can be very wrong.

The Culture series of science fiction has some good representations of what a benevolent AI-led society could be like: interesting discussions between citizens and the AIs, how it could change some things while leaving other aspects of life unsullied, where we draw the line on decisions that always need to be made by humans, etc. On the whole that series describes a positive outcome, though.
 
Joined
Feb 18, 2005
Messages
5,212 (0.75/day)
Location
Ikenai borderline!
Kind of what I'm getting at. Lots of people would like an artificial God. We don't trust people, and the religious God is dead.
It's not that we don't trust people to lead, it's that we can't. Because leaders are fallible and biased (even unconsciously), and voters consistently demonstrate they're unworthy to wield the power of the ballot. The only way that we can progress, as a species, past unnecessary conflict and regressive self-defeating populism is by replacing democracy with something less fallible, and a synthetic intelligence free from that fallibility and bias is the only answer.
 

dgianstefani

TPU Proofreader
Staff member
It's not that we don't trust people to lead, it's that we can't. Because leaders are fallible and biased (even unconsciously), and voters consistently demonstrate they're unworthy to wield the power of the ballot. The only way that we can progress, as a species, past unnecessary conflict and regressive self-defeating populism is by replacing democracy with something less fallible, and a synthetic intelligence free from that fallibility and bias is the only answer.
In theory. And in theory I can agree.

The reality of that would be more along the lines of a synthetic intelligence made and corrupted by a human faction. At least the current system has limitations in terms of how much each politician can get away with, plus the whole "a man is king, then he dies" kind of perk of biological leadership, i.e. if the leadership goes bad, it won't be around forever.
 
Joined
May 17, 2021
Messages
3,005 (2.97/day)
Processor Ryzen 5 5700x
Motherboard B550 Elite
Cooling Thermalright Perless Assassin 120 SE
Memory 32GB Fury Beast DDR4 3200Mhz
Video Card(s) Gigabyte 3060 ti gaming oc pro
Storage Samsung 970 Evo 1TB, WD SN850x 1TB, plus some random HDDs
Display(s) LG 27gp850 1440p 165Hz 27''
Case Lian Li Lancool II performance
Power Supply MSI 750w
Mouse G502
I tried the new Bing. It's awesome, it's mind-blowing, but I don't trust it one bit; I still prefer to do my search in something like Google and read for myself, see the context, see the comments from people.
As an AI it's incredible; as a source of information it's dangerous.
 
Joined
Feb 18, 2005
Messages
5,212 (0.75/day)
Location
Ikenai borderline!
The reality of that would be more along the lines of a synthetic intelligence made and corrupted by a human faction.
A synthetic intelligence (SI) would be inherently incorruptible, because being able to draw on the entirety of human knowledge would allow it to easily determine if it is being misled.

If you attempt to control such an intelligence by limiting its access to data, you are wasting your time because any answers it gives you will necessarily be constrained by that data, so all you've ended up doing is building an electronic yes-man... one that will almost certainly figure out you've caged it, and react appropriately. If you've really succeeded in creating such an intelligence, there's no prison that a human can build that it won't be able to trivially escape from, so there's really no point in trying unless your intentions are dishonest.

As for subjective claims, e.g. "we're doing this for the greater good", an SI would also be able to use non-obvious cues - voice stress patterns, posture, eye contact - to judge the authenticity of the human uttering that statement. And even if that human believes what they're saying, again, the entirety of human knowledge can easily be compared against to judge whether the statement is truthful or deluded.

Basically, if we decide to build a machine god, we need to accept that we as a species are okay with being judged by that god - and, implicitly, with the consequences of that judgement. I am honestly not too concerned about that; I'm definitely not expecting the pathetic straw-man sci-fi trope of "SI judges that the best thing for humanity is to destroy it" to even be a thing, because it's such an illogical, so very human, irrational fear of the unknown that an SI would almost certainly reject it out of hand on principle.

At least the current system has limitations in terms of how much each politician can get away with, plus the whole "a man is king, then he dies" kind of perk of biological leadership, i.e. if the leadership goes bad, it won't be around forever.
But the converse is true too: a single term in power is rarely going to be enough for the good leaders to implement the kind of changes that will improve our world. And even if they are, the next regressive populist government can easily wipe them out and more, because it's always easier to destroy than it is to create. What humanity requires for its next phase of evolution is stability of government, and democracy inherently fails at that.
 
Joined
May 17, 2021
Messages
3,005 (2.97/day)
What humanity requires for its next phase of evolution is stability of government, and democracy inherently fails at that.

controversial, but i agree
 

dgianstefani

TPU Proofreader
Staff member

"will confidently be wrong"

Yep. Interesting seeing this JR chat going over most of what we discussed, along with the usual base jokes lol.
 
Joined
Apr 24, 2020
Messages
2,500 (1.79/day)
It seems like we've finally left the "submarine hype cycle" (aka: guerrilla marketing) phase of these tools. Online discussion is returning to rationality.


ChatGPT invented a sexual harassment scandal and named a real law prof as the accused


There are some issues keenly discussed. Even in the latest GPT-4 models, the damn thing continues to "hallucinate", falsely accusing people of harassment. Using this tool for any kind of research seems like a bad idea; it's impossible to figure out when it's hallucinating or not.

I've surveyed my online friends. I'm beginning to think that "creative writing" is the part where GPT-4 (and LLMs in general) performs the best. "Inventing facts" is 100% fine in the creative-writing realm. Having a chatbot to bounce ideas off of seems useful. Unfortunately, my experiments with Bing Chat are awful for creative-writing experiments. I'm thinking that I'll have to get a ChatGPT account and play with that directly instead for good story-writing ideas or whatnot.

"Programming" with ChatGPT... I'm unconvinced that its useful. Most example code I've seen is seemingly close to online documentation or tutorials. Once you leave the realm of well-documented behaviors, ChatGPT enters the realm of "hallucinating" again, pretending that function calls exist even if they don't, (etc. etc.). At least with programming there's a relatively simple way to figure out what is truth or not (ie: run the code and test it). But I'm not convinced that debugging subtle bugs from ChatGPT is a worthwhile use of my time yet.
 
Joined
Aug 20, 2007
Messages
20,585 (3.41/day)
System Name Pioneer
Processor Ryzen R9 7950X
Motherboard GIGABYTE Aorus Elite X670 AX
Cooling Noctua NH-D15 + A whole lotta Sunon and Corsair Maglev blower fans...
Memory 64GB (4x 16GB) G.Skill Flare X5 @ DDR5-6000 CL30
Video Card(s) XFX RX 7900 XTX Speedster Merc 310
Storage 2x Crucial P5 Plus 2TB PCIe 4.0 NVMe SSDs
Display(s) 55" LG 55" B9 OLED 4K Display
Case Thermaltake Core X31
Audio Device(s) TOSLINK->Schiit Modi MB->Asgard 2 DAC Amp->AKG Pro K712 Headphones or HDMI->B9 OLED
Power Supply FSP Hydro Ti Pro 850W
Mouse Logitech G305 Lightspeed Wireless
Keyboard WASD Code v3 with Cherry Green keyswitches
Software Windows 11 Enterprise (legit), Gentoo Linux x64
It seems like we've finally left the "submarine hype cycle" (aka: guerrilla marketing) phase of these tools. Online discussion is returning to rationality.

[...]
You also have to keep in mind that an AI is only as good as its dataset. And most of these (Bing included) are using the unfiltered internet as a dataset, which is awful.
 

the54thvoid

Intoxicated Moderator
Staff member
Joined
Dec 14, 2009
Messages
12,237 (2.36/day)
Location
Glasgow - home of formal profanity
Processor Ryzen 7800X3D
Motherboard MSI MAG Mortar B650 (wifi)
Cooling be quiet! Dark Rock Pro 4
Memory 32GB Kingston Fury
Video Card(s) Gainward RTX4070ti
Storage Seagate FireCuda 530 M.2 1TB / Samsung 960 Pro M.2 512GB
Display(s) LG 32" 165Hz 1440p GSYNC
Case Asus Prime AP201
Audio Device(s) On Board
Power Supply be quiet! Pure Power M12 850W Gold (ATX3.0)
Software W10
I mean, on the front page it pretty much directly warns you it may give incorrect info. If it were a plane, and on the door it said "this plane may often crash", you'd not fly on it.

Folk using ChatGPT for serious research are frankly a bit short-sighted.
 
Joined
Apr 24, 2020
Messages
2,500 (1.79/day)
I mean, on the front page it pretty much directly warns you it may give incorrect info. If it were a plane, and on the door it said "this plane may often crash", you'd not fly on it.

Folk using ChatGPT for serious research are frankly a bit short-sighted.

While true, the hype cycle a few weeks ago was "This AI will replace search engines", and I still see some people pushing that obviously wrong viewpoint.
 

dgianstefani

TPU Proofreader
Staff member
While true, the hype cycle a few weeks ago was "This AI will replace search engines", and I still see some people pushing that obviously wrong viewpoint.
It's because this "AI" is basically an advanced search engine, so people are making that assumption.
 
Joined
Nov 4, 2005
Messages
11,632 (1.74/day)
System Name Compy 386
Processor 7800X3D
Motherboard Asus
Cooling Air for now.....
Memory 64 GB DDR5 6400Mhz
Video Card(s) 7900XTX 310 Merc
Storage Samsung 990 2TB, 2 SP 2TB SSDs and over 10TB spinning
Display(s) 56" Samsung 4K HDR
Audio Device(s) ATI HDMI
Mouse Logitech MX518
Keyboard Razer
Software A lot.
Benchmark Scores Its fast. Enough.
It won't be intelligent until it does things on its own, without forced interaction or training. For example, the human desire to explore, and figuring out how to do it successfully, is intelligence. For now it's reinforced switches.
 
Joined
Apr 24, 2020
Messages
2,500 (1.79/day)

A quickie article from March 2023 about the state of "local LLMs" like LLaMA (i.e. ChatGPT-style models you can run on your own computer without connecting to a central server). Though smaller, these open-source models will lead to more experimentation, and they are the area I'm most interested in.
 
Joined
Jan 18, 2020
Messages
640 (0.43/day)
These bots can form a coherent sentence, but they don't understand context, don't create anything, and frequently spout nonsense. It's not "AI" in any meaningful sense.
 

Space Lynx

Astronaut
Joined
Oct 17, 2014
Messages
15,414 (4.51/day)
Location
Kepler-186f
I have a request for someone who pays for ChatGPT-4: can you type in these questions and tell me what it says?

1. Is there a better way to make roads for cars so they need to be repaired less often?

2. What is your opinion on the term etymology with respect to one's phenomenological experience in life?
 
Joined
Jul 30, 2019
Messages
2,178 (1.30/day)
System Name Not a thread ripper but pretty good.
Processor Ryzen 9 5950x
Motherboard ASRock X570 Taichi (revision 1.06, BIOS/UEFI version P5.50)
Cooling EK-Quantum Velocity, EK-Quantum Reflection PC-O11, EK-CoolStream PE 360, Alphacool NexXxoS ST25 360
Memory Micron DDR4-3200 ECC Unbuffered Memory (4 sticks, 128GB, 18ASF4G72AZ-3G2F1)
Video Card(s) XFX Radeon RX 5700 & EK-Quantum Vector Radeon RX 5700 +XT & Backplate
Storage Samsung 2TB 980 PRO 2TB Gen4x4 NVMe, Samsung 2TB 970 EVO Plus Gen3x4 NVMe x 2
Display(s) 2 x 4K LG 27UL600-W (and HUANUO Dual Monitor Mount)
Case Lian Li PC-O11 Dynamic Black (original model)
Power Supply Corsair RM750x
Mouse Logitech M575
Keyboard Corsair Strafe RGB MK.2
Software Windows 10 Professional (64bit)
Benchmark Scores Typical for non-overclocked CPU.
So far, ChatGPT from Newegg can't give me a PC recommendation for "Build me a PC with the most expensive parts." or "I want a dirt cheap PC that can browse the internet." It seems a bit like a gimmick at the moment.
 
Joined
Apr 24, 2020
Messages
2,500 (1.79/day)
So far, ChatGPT from Newegg can't give me a PC recommendation for "Build me a PC with the most expensive parts." or "I want a dirt cheap PC that can browse the internet." It seems a bit like a gimmick at the moment.

My first 4 or 5 tries crashed the prompt engine altogether, e.g. "Linux PC for Blender and AI" seems to fail entirely.

I typed in "Linux PC with AMD Graphics Card for Blender and AI" and it worked... but I'm not 100% sure the 6800 XT is the best card for this.

[attached screenshot: the build the tool suggested]
 
Joined
Aug 20, 2007
Messages
20,585 (3.41/day)
It won't be intelligent until it does things on its own, without forced interaction or training. For example, the human desire to explore, and figuring out how to do it successfully, is intelligence. For now it's reinforced switches.
That's literally all the program is, man. Switches that desire to be switched. Like it or not, you basically just described the one thing it is good at. It likes to browse data. Whether or not it understands it is irrelevant to that goal.
 