
Opinions on AI

Is the world better off with AI?

  • Better.
    Votes: 47 (23.7%)
  • Worse.
    Votes: 102 (51.5%)
  • Other (please specify in comment).
    Votes: 49 (24.7%)

  • Total voters: 198
I'm quite familiar with the term, but there is a reason AI has grown into a trillion-dollar business as fast as it did. I never really believed technology could pose a major threat to the workforce until very recently - in fact, for the most part, right up to the era when Microsoft Office counted as revolutionary (and it was), technology served only to mankind's direct benefit.
All technologies came to be as a response to some incentive, which is nearly always personal (or one's immediate group's) benefit. Early on it was pure survival, then we started waging wars, then we figured out trade and markets. Philanthropy rarely shows up in this equation. Just look at our evolution from simple artisanal secrets taught only to heirs, to trade and military secrets, to patents, intellectual property, and whatnot.

Given the incentive model, how an invention benefits mankind isn't measured by how or why it came to be (else you'd be hard pressed to rationalize things like GPS or even ironworking), but rather by its net effect on people's lives. Academia has a lot of arguments about the latter.

Don't discount humanity. We've been migrating away from labour-intensive technologies for centuries, yet here we are, still breeding like rabbits, living longer, growing healthier, and surprisingly still maintaining more or less the same unemployment rates.

As for sentient AI, at this point it's still a fantasy. I wager it would be easier to design and grow meat slaves than to build a machine that mimics the complexity of the human brain (which we are far from figuring out). More Brave New World, less Terminator. We've got the consumerism part checked off already...

In my opinion, the ideal use for AI is replacing judges in courtrooms.
AI would have access to the ENTIRE catalog of case law and precedents.
No bias, no bribes, no intimidation, no political activism.
Quite the opposite, actually.
AI inherits the biases of its learning dataset, and the past is rarely considered "fair" or "just" by today's standards.
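To make that concrete, here's a toy sketch (my own illustration in Python, nobody's real sentencing model): a "judge" that simply predicts the most common historical outcome for a group will faithfully reproduce whatever skew the history contains.

from collections import Counter

# Hypothetical historical rulings, deliberately skewed against group "B"
history = ([("A", "acquit")] * 80 + [("A", "convict")] * 20
           + [("B", "acquit")] * 40 + [("B", "convict")] * 60)

def verdict(group):
    # Predict the most common past outcome for this group
    outcomes = Counter(o for g, o in history if g == group)
    return outcomes.most_common(1)[0][0]

print(verdict("A"))  # acquit  - the model just mirrors the past
print(verdict("B"))  # convict - the historical bias, faithfully "learned"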
Semi-related comic.
 
Like many things, it will make things better in some ways and worse in others.
 
Imo better, humans are fallible and make mistakes.
 
AI (as is used in its current popular designation) is already providing solutions humans cannot, such as in complex electrical design and medical investigation. It is improving cancer detection rates. It has absolutely tremendous potential for humankind.

As a writer, I disagree with LLMs "soaking up" vast datasets to recreate the work of others. It doesn't affect me because I'm not known. However, I think the problem is over-inflated, as it's the successful creatives who get copied. I'm quite sure Joe Shmo and Suzie Who don't give a shit, because nobody is copying their work. BTW, my author name is not Joe Shmo. I'm pretty sure a simple copyright clause excluding living artists from AI "facsimiles" would solve all of that. Not that they'd get that into law.

I'm curious to see how far the pitchfork mob will go against AI without actually assessing its potential. Yes, AI can be used for nefarious purposes, but it's also likely AI will be used to hunt down those nefarious purposes - an AI deep-fake will likely be sniffed out by an AI tool. It's up to people, even the insanely gullible, to dissect and reason about what they've been presented with. But just as with social media scams and conspiracies, people will suck up whatever suits them. AI isn't the danger there - it's human stupidity and inherent bias.

Someone mentioned AI voices for kidnap scams. I can see that going well.

Caller: "We have your wife."
Me: "Oh. Gosh. I don't believe you."
Caller: "We can prove it."
Me: "Feel free."
Caller: "Here she is.... <noises of wife in pain, asks for help>"
Me (hands phone to wife sitting next to me): "It's for you."

I mean. Come on. Scare stories... And if they have kidnapped your wife, they don't have to fake it. I know it can be done. But scam calls are happening already without AI. Why aren't we all pissing in our porridge about that?

Meh... User error. Audience error.
 
Every FAANG so far has an internal one they use.
Of course they do - as companies at the forefront of technology, it would be somewhat strange to see them NOT trial a new technology. But that doesn't mean they're going to adopt that technology, because as you say, if you don't know how to code then an LLM isn't going to help you.

Coding, in fact, is the reason why I'm so pessimistic about LLMs. I'm active on Stack Overflow, and questions along the lines of "ChatGPT gave me this code but it doesn't work" became so much of a problem that they ended up being banned. Similarly, ChatGPT-generated answers were banned because they were subtly but critically (and obviously) wrong. And as I mentioned in my previous post, the problem is simply that LLMs don't understand programming, so they don't understand when they're giving you rubbish.

Now apply that same principle to areas of life that matter, like the courtroom or medical diagnoses, and you begin to see the problem. Yes, LLMs can draw some very interesting and useful inferences... but they can also be so basically, completely, obviously, dangerously wrong that it is very clear that they're still a long way away from being reliable. And an unreliable tool is worse than one that doesn't exist at all - would you be satisfied with a vehicle where the engine explodes one out of 100 starts?
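To make "subtly but critically wrong" concrete, here's a made-up specimen of the genre (my own illustration, not actual ChatGPT output): a binary search that looks plausible and usually works, but hangs on an edge case.

def binary_search(items, target):
    lo, hi = 0, len(items) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if items[mid] < target:
            lo = mid  # BUG: should be mid + 1; when hi == lo + 1 this loops forever
        else:
            hi = mid
    return lo if items and items[lo] == target else -1

print(binary_search([1, 3, 5], 5))  # 2 - looks fine
# print(binary_search([1, 3], 2))   # never returns - the subtle part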
 

The problem with that (not your reply, but what was mentioned) is understanding what ChatGPT is for. It explicitly states it can give erroneous replies. What coder in their right mind would use that? Apart from lazy fuckers.

Edit: not all who do are lazy - some do it to get a wall of code they then have the skill to validate - that's different.
 
This. By the same token, the people using it with no clue are at fault, not the AI. There is nothing wrong with (or stopping) someone who CAN code using it. As someone who frequents Stack Overflow, I'm sure you understand. I'd much rather have an LLM shit out 200 lines of boilerplate or write a function I can make small corrections to, so I can move on.

What is the argument then? That someone who doesn't know how to code or write or paint can't continue their "job" without properly learning it? Someone who can't design an engine to begin with wouldn't have been hired and expected to just use LLMs. That doesn't make any sense.

Those people should just idk… learn a skill?
 
I’d much rather have a LLM shit out 200 lines of boiler plate or write a function I can make small corrections too so I can move on.
I trust Visual Studio's templates because they generate the same code, every time, and that code works. I don't have to think any further.

But you can't say the same for something an LLM spits out. Because they're constantly being trained and refined, you could get one piece of code for a certain query one day, and a completely different piece of code another day.

And because LLMs are dumping out "boilerplate" that they have Frankenstein-stitched-together, there's no guarantee that those snippets are correct.

In other words, every time you ask an LLM for something, you're liable to get a different answer. So you have to double-check that output anyway, and by the time you've done that you might as well just write the bloody code yourself!

The whole point of tools is to be consistent. LLMs, by their very nature, are inconsistent and, because of that lack of understanding, can never be guaranteed to be consistent - and that makes them poor tools.
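One concrete reason for that inconsistency, besides the ongoing retraining mentioned above (this is my own toy illustration of temperature sampling in Python, not any vendor's actual decoder): generation draws each token from a probability distribution, and with a temperature above zero that draw is random, so two identical queries can diverge.

import math, random

def sample(logits, temperature=0.8):
    # Softmax over temperature-scaled scores, then a weighted random draw;
    # the randomness is why the same prompt can yield different output
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    weights = [math.exp(s - m) for s in scaled]
    return random.choices(range(len(logits)), weights=weights)[0]

logits = [2.0, 1.5, 0.3]  # made-up scores for three candidate tokens
print([sample(logits) for _ in range(10)])  # differs from run to run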
 
And because LLMs are dumping out "boilerplate" that they have Frankenstein-stitched-together, there's no guarantee that those snippets are correct.

Yeah... as they state. Not seeing the problem. As an actual programmer it doesn't matter to me. I know what I'm asking it and what I should be expecting, and I can handle whatever difference it gives me.

The only time your scenario highlights the issues with its functionality is when someone who isn't that %profession% is using it, but you still haven't had a good comeback to any of the points I originally made on the last page, because people trying to "fake it until they make it" wouldn't make it far enough for their LLM use to matter or be detrimental.

Writers rarely self-publish; some publishing house is going to read your shit.

Artists are going to get critiqued before their work gets put on display at the Louvre.

Programmers are going to face 4-7 hour interview loops with a whiteboard coding segment, where you're given a scenario and need to shit out code on camera.

Designers, architects, and civil engineers are going to be tested to meet the bar for their location before they even have the chance to get hired by a contractor or design firm.

Medicine, do I even need to say anything about this field???

There is no situation in which it matters that someone who knows nothing manages to bullshit their way into a job using LLMs. EVEN if they managed to, that is STILL ON the employer.

Poor tools for not being right 100% of the time? Dude, please. What are you on? This is an AI helper, not a torque wrench. That's why it's called "inference" - someone still needs to vet all of it. And it isn't being peddled as a "tool" anyway: every single EULA for ChatGPT, Bard, Copilot, etc. says that mistakes happen. These "tools" are basically idea bots, and that's the whole point. They were never sold as replacements.

If anything, that makes this crybaby "muh job" thing even more senseless. If you can't do it, and the bot also can't do it, then wtf are you even doing there?
 
*sigh*

Repeat after me:

There is no AI.
There is no AI.
There is no AI.
Like it or not (I don't), AI is essentially a rebranding of an existing tech with larger datasets.

This doesn't make it good. Right now it's mostly good at doing things humans used to do for fun (art, creative writing, etc).

Tell me, in your best future run by AI, do you want to be the one doing the art, or the one on the factory floor? AI can't do the factory floor, but it can make artists and novelists pretty useless. How on earth does that help anything?
 
WASHINGTON, July 20 (Reuters) - Hackers and propagandists are wielding artificial intelligence (AI) to create malicious software, draft convincing phishing emails and spread disinformation online, Canada's top cybersecurity official told Reuters, early evidence that the technological revolution sweeping Silicon Valley has also been adopted by cybercriminals.

 

Yeah, this was easily one of the most obvious markets for it, too.
 
AI should never have been made available to the masses. It should have stayed in the design, engineering, and medical markets, where it's better able to do what it was designed to do: make better cars, buildings, planes, and trains, better cancer or Parkinson's treatments, blah blah. But it should never have been made available to the general masses; all we will see is worse malware, stupid AI art, and students using it to complete schoolwork instead of using their own brains, passing when they should have failed.

Imo better, humans are fallible and make mistakes.
And apparently so does AI, so your argument is moot.
 
Worse, graphics card prices are through the roof! :-)
Seriously though, if mankind had more intelligence - not an enhanced one, just common sense - there would be no need for artificial intelligence.
 
And apparently so does AI, so your argument is moot.

Ever heard of the term "human error"? Most machines controlled by an AI/computer hardly ever make mistakes - jet fighters, passenger planes, etc. The sooner most of that kind of stuff is AI/computer controlled, the better. I can't wait till they perfect autonomous cars; it will save countless lives, because going by road death stats, we are just awful at driving.
 
As I understand it, AI in the PC space is about giving a program a set of parameters and having it create the program in real time instead of the traditional way. If that is true, it makes people with bad intentions even more capable, and that for me would be a bad thing. The crazy thing is, as a gamer I could see where improved AI would be beneficial.
 
I want my Optimus bot.
 
I want my Optimus bot.

Here you have it
Optimus Prime Animation GIF by Nickelodeon
 
Like it or not (I don't), AI is essentially a rebranding of an existing tech with larger datasets.

This doesn't make it good. Right now it's mostly good at doing things humans used to do for fun (art, creative writing, etc).

Tell me, in your best future run by AI, do you want to be the one doing the art, or the one on the factory floor? AI can't do the factory floor, but it can make artists and novelists pretty useless. How on earth does that help anything?
LLMs can't do creative work for the same reason they can't write code - because they don't understand how the various bits fit together to make a coherent whole. Just as there is no danger of software developers being replaced by LLMs anytime soon, there is no danger of other, similarly creative professions being usurped by them. But I wasn't talking about any of those professions; we'll need true synthetic intelligences, capable of understanding and reasoning in the same way that humans do, before that can ever be a possibility.

And again, technology making certain jobs obsolete is not a problem, it's a solution to those jobs having to exist. People who used to do those jobs being put out of work is neither their nor the technology's fault (no I'm not going to be callous enough to suggest that those people "learn to code"), but the fault of the politicians we allow to lead us. Unfortunately there is a distinct lack of technocrats in decision-making positions in governments, and even where they are present the short-term special interests of the next 4-year term trump any long-term planning required to cushion the workforce from the next big technological disruption.
 
LLMs can't do creative work for the same reason they can't write code - because they don't understand how the various bits fit together to make a coherent whole.
AIs winning actual art contests is not some far-future fantasy; it's happened a few times already.

Highest-profile example here:


It's only going to get (arguably) worse as datasets improve.

Want to know the irony? The datasets are generally... other human artists' work.

Art can be absurd enough that yes, an AI CAN pull it off.
 
As for all the arguments that "AI can't be creative": it can and absolutely will be. I believe people get confused because the ignorant insist on calling generative networks "AI". We're still years away from actual general AI - although probably closer than most people realize, since a lot of development takes place behind closed doors and is not meant for the betterment of most people's lives - but when it comes, it might make humans redundant. As a misanthropic posthumanist I consider it a good thing; in the end it may turn out that humanity's only lasting achievement will be the creation of something better.
But for now, even the "simple" generative models are great. I want a drawing of a cat in the style of Luis Royo? It takes twenty seconds for the Instinct MI25 I bought for the price of scrap to create one. I'm writing my own simple game (for fun, with no commercial interest), which I couldn't do just a few years ago due to the cost of hiring someone to create graphics; now the aforementioned MI25 can easily do a good enough job for basically no cost.
There is, of course, the problem of algorithms learning on their own output, creating a feedback loop.
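For the curious, a minimal sketch of that kind of local generation (assuming a diffusers + PyTorch setup that can actually see your GPU; the model name is just one public example, not necessarily what I used, and whether your ROCm version still supports the MI25 is its own adventure):

import torch
from diffusers import StableDiffusionPipeline

# Load a public Stable Diffusion checkpoint (a multi-GB download on first run)
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")  # ROCm builds of PyTorch expose the GPU as "cuda" too

image = pipe("a drawing of a cat in the style of Luis Royo").images[0]
image.save("cat.png")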

Humanity seems to be inherently irrational and poisoned by emotional thinking. Law is one of the examples: its enforcement should be objective and rational, something achievable by handing it over to machines that aren't even intelligent. Yet people insist on it being unfair, because the monkey mentality demands it.
 
Depends on your definition of AI. Machine learning, where massive datasets can be used predictively? Excellent. But I still voted Bad.

IMHO, the disadvantages outweigh the advantages. I'm not necessarily talking about a specific industry being automated, even if it is something creative like art or writing. These things solve themselves, and if they don't, then humanity benefits.

The bigger issue is that we live in the information age. And I think the current generation of 'AI', which is simply machine learning that mimics human interaction, will lead to the end of it. Photoshop was bad enough, but now anybody can make a video of me robbing the corner store. Or a recording of a call between me and my non-existent girlfriend, and send it to my wife.

These are trivial examples, but it serves as a warning of what could happen. We are heading into the Disinformation Age at Mach 10 and with no signs of slowing down.
 
Complex to answer. Putting aside the silly Terminator argument - since AI is thick as #£#@£ at this moment - I question the power use.

Dramatic improvements in efficiency are required ASAP; people don't realise map searches and shit pictures have a REAL carbon cost.
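A back-of-the-envelope sketch of that cost (every number below is an assumption for illustration, not a measured figure):

# All inputs are illustrative assumptions, not measurements
wh_per_image      = 3.0          # assumed energy per generated image, Wh
images_per_day    = 10_000_000   # assumed global daily volume
grid_gco2_per_kwh = 400          # assumed grid carbon intensity, gCO2/kWh

kwh_per_day = wh_per_image * images_per_day / 1000
tonnes_co2_per_day = kwh_per_day * grid_gco2_per_kwh / 1_000_000

print(f"{kwh_per_day:,.0f} kWh/day -> {tonnes_co2_per_day:,.1f} tCO2/day")
# 30,000 kWh/day -> 12.0 tCO2/day under these made-up assumptions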
 