
Postulation: Is anyone else concerned with the proliferation of AI?

Does AI have you worried?

  • Yes, but I'm excited anyway! (Votes: 11, 7.6%)
  • Yes, worried about the potential problems/abuses. (Votes: 91, 63.2%)
  • No, not worried at all. (Votes: 10, 6.9%)
  • No, very excited about the possibilities! (Votes: 9, 6.3%)
  • Indifferent. (Votes: 14, 9.7%)
  • Something else, comment below. (Votes: 9, 6.3%)
  • Total voters: 144
The discussion has been taking place all over the net. Wondering what everyone here thinks.

How do you feel about AI? Excited? Concerned? Scared? Indifferent?

Share your input and opinion.

My vote was yes, I'm concerned about the possible problems and abuses. Many of them are already rearing their ugly heads.
 
Very concerned. For everyone using it for "good", there are too many using it for bad. A guy took pictures of kids' heads in parks and made child porn. That's only the tip of the bad that will come from it. That's the big problem: the ones using it for bad, and that's why it can't be available for the public to use.

Governments are way, way too slow to regulate it. And it's not just that they're slow, it's that fat envelopes are slipped under tables. They should not allow the public to use it. It has been shown over and over again: people cannot be good when given a free opportunity to be evil. Most will be good, but enough will find a way to exploit, scam and be evil. If you said to the world, "be nice, no laws and rules for 24 hours", the world would be destroyed in one day.
 
A guy took pictures of kids' heads in parks and made child porn.
Yeah, that is one of the many things that is very worrisome. I mean, sure these kinds of things could be done before, but it took a lot of time and effort. Now it can be done in minutes.

Then there is the potential of people being impersonated without them even knowing about it. The potential for identity fraud is truly frightening.
 
Extremely concerned. I fully believe it has massive potential to do good - especially in scientific (particularly health and medicine) research. But we have already seen how political adversaries, particularly "state-sponsored" adversaries, are using it to influence public opinion with false information.

I am also very concerned how bad guys, weirdos and pervs are using it to exploit, humiliate and bully others - particularly children. For example, some are manipulating images to make it appear their "target" is in compromising scenarios and/or promoting products or services without any permission from the victim, or their knowledge - until it has gone viral and cannot be undone.

I am also very concerned that our elected officials are already failing to get ahead of the technology and are allowing the exploitation of AI for dubious intentions to get so far out of control that it becomes uncontrollable. This is due, in part, to the incompetence of our elected officials and their unwillingness to set aside political differences for a common cause. But it is also due in part to the fact that AI technologies are evolving so quickly (with the help of AI itself! :() that it is nearly impossible for lawmakers and "normal" laypeople to wrap their heads around it and understand it.

And (let me don my tin-foil hat here), I am concerned, if safety measures are not developed and put in place now, AI will evolve to the point it figures out how to protect itself so securely, we mere humans will be blocked from pulling its plug. :( "I'm sorry Dave, I'm afraid I can't do that".
 
I am worried. Despite what is now being dubbed “AI” not actually being such in classic computer science terms (although that is a beehive I would not poke), it’s still a very interesting and potentially transformative technology that might enable leaps for humanity that were last seen 30 years ago or so with the proliferation of the Internet.

However, as I said, I do worry. Historically, unfortunately, every transformative technology we have ever come up with has been used for less than savory purposes. And the scary thing is - each new breakthrough is potentially more destructive (self-destructive?) than the previous one. And the fact that most of the world is run by out-of-touch old men who really don't seem to fully understand the implications of this technology and are lost on how to legislate it doesn't inspire confidence either.
 
Twitter is full of AI-generated videos meant to fool people. Facebook is going down the same road. The future does not look bright. In my opinion, the only places AI should improve are language translation and AI for PC games like strategy or racing.
 
I doubt anyone can ever regulate AI, any more than they can regulate the internet or alcohol or drug consumption. So that's my main concern: not that they won't regulate it, but my feeling that it can't be done. We opened the can of worms, and now it's out there, out of control, and we have no way to rein it in.
The parts you can easily control are not my main concern.

If anyone has a practical, feasible solution, I would like to hear it.
 
Like many others out there, I am concerned.

For one, a lot of people involved in LLM/ML/other current AI stuff seem to have little regard for ethics. Like that Amazon Alexa device imitating the voice of someone else. Like, "who needs a voice imitator? Why are you selling this to the masses as is? Who was the idiot that decided to promote this with the 'hear your dead loved ones' voice again' idea?"

Then there are Microsoft and OpenAI (and probably a few others as well) trying to push their idea of "if it's on the internet (or whichever site a company manages, like Twitter/X or Wix with DeviantArt), it's up for grabs, free use, copyright doesn't apply, etc."
Meanwhile, you decide to grab some 30-year-old movie from a torrent site because you can't find a DVD or a streaming site for it, and you get a letter from your ISP borderline treating you like a criminal over a shitty movie that you literally can't find anywhere else.

Then there are the energy costs, and the fact that this seems to put a strain on water reserves, increase CO2, etc. Environmental costs skyrocketing, basically.

There are also job concerns for a number of sectors.

There's a lack of care for truth and reality, looking at AI-generated articles and such.

There are people thinking "I don't need to know stuff, the AI will do it all for me", and this kind of thinking is threatening future generations' development. How long until "thinking is not needed, AI will do it for me"?

And that's just the list of immediate/short term concerns. You start looking at long term and the concern increases.

I doubt anyone can ever regulate AI, any more than they can regulate the internet or alcohol or drug consumption.
Politicians: "ban the Internet"
 
Then there are the energy costs, and the fact that this seems to put a strain on water reserves, increase CO2, etc. Environmental costs skyrocketing, basically.

There are also job concerns for a number of sectors.



Politicians: "ban the Internet"
Apparently there's an army of programmers trying to automate what I do for my company. Good luck resolving material issues when every cell is looking for fibre and bays.
 
For one, a lot of people involved in LLM/ML/other current AI stuff seem to have little regard for ethics. Like that Amazon Alexa device imitating the voice of someone else. Like, "who needs a voice imitator? Why are you selling this to the masses as is? Who was the idiot that decided to promote this with the 'hear your dead loved ones' voice again' idea?"

Combine voice imitation with facial imitation, then add some old people (others will fall for it too, but these ones, OMG)... the scam opportunities are endless.

By the way, this is already a thing.

And how do you regulate this?
 
I'm not worried about AI itself, but rather what people will use it for. We are a dangerous species, especially in the way we harm each other. And we never learn.
 
There are also job concerns for a number of sectors.

No, but it's cool, because this survey asked 1,000 companies with 14 million employees, and they (companies that would benefit from AI, mind you) say it'll generate more jobs than it'll cost!
 
Misinformation is a big problem when it comes to affecting policy. AI makes it worse by making it more prolific and believable, and by creating a fake crowd of apparent supporters. People believe what looks real and appears to have support. Look at any online discussion; the person with more upvotes is more believable even when they are wrong.
 
I don't care what other people do with it, but I do not want ANYTHING with "AI" in my OS, software, hardware or anywhere else.
 
AI has been deeply integrated in human life for a long time. You all are still here. Still have jobs.

I worry about whether a human is going to walk into my son's middle school and shoot the place up while he's in class...

I don't care what other people do with it, but I do not want ANYTHING with "AI" in my OS, software, hardware or anywhere else.
You're way too late for that. XD
 
AI has been deeply integrated in human life for a long time.
Not like what has been happening in the last couple of years. This stuff is new.

You all are still here. Still have jobs.
I know three people (and counting) in the last year who have lost their jobs directly to AI run-time machines.
 
AGI would, but we're nowhere near it. Maybe in 2125, if we're lucky?

ML is not artificial intelligence, it's just bullshit. Like Nvidia trying to tell people 5070 = 4090. You would properly question the bullshit, wouldn't you? But when no one does, Huang can claim anything and people lap it up.

I don't get why the proliferation of largely useless LLMs, upscaling and image-creation algorithms would concern anyone, other than the possible financial consequences of it all collapsing in on itself.
 
AI rhetoric is a big problem and worth discussing. It hurts a lot more people than a pedo in the park does. If it suits you, ignore the wing and discuss the problem.
 
Oh please, let's not get into anything political. We all know how bad things like that can get.


Not like what has been happening in the last couple of years. This stuff is new.


I know three people (and counting) in the last year who have lost their jobs directly to AI run-time machines.
Only 3 people out of 8 billion.
People lose jobs to other people, but that's a non-issue. If AI does it, it's now a problem? People change careers all the time, with and without AI influence.
 
Only 3 people out of 8 billion.
People lose jobs to other people, but that's a non-issue. If AI does it, it's now a problem? People change careers all the time, with and without AI influence.
Do you have a child in university? If you did, you would understand what ChatGPT has done. I have a niece who is a high achiever. She became a ballet instructor at 20. She does not even want to go to school anymore because so many people are using AI to generate their work. This is a paradigm shift in our society.
 
People lose jobs to other people, but that's a non-issue. If AI does it, it's now a problem? People change careers all the time, with and without AI influence.
I think you have missed the point. Sure, people lose jobs to other people. And people change careers all the time. I agree that is a non-issue.

The point is when a devious person uses AI to falsely discredit another person, and the victim then loses their job or even their careers due to that false information. That is a problem. This is especially true when the devious person cannot be held accountable because they can remain cowardly anonymous and untraceable.

Freedom of speech does NOT give us the freedom to say whatever we want whenever we want or to tell damaging lies about others.
 
Do you have a child in university? If you did, you would understand what ChatGPT has done. I have a niece who is a high achiever. She became a ballet instructor at 20. She does not even want to go to school anymore because so many people are using AI to generate their work. This is a paradigm shift in our society. Just like how, with the influence of social media, DEI is now seen as bad. Please tell me: is the very spirit of TPU not Diversity, Equity and Inclusion?
Yes, she's studying to be a forensics officer. She learns about what humans do to each other, blood splatters and stuff. AI doesn't make her less smart. Even if she used AI to assist with writing papers, she's learned something from it.

Spirit of TPU? Does the TPU website have feelings? No, this is a man's career, and he makes money from it. It's a business venture, designed for humans. Only humans choose to be here. And forums are an outdated form of communication.

I believe if we are kind, it can be a kind world.

Worry more about diseases next time you're in a public bathroom.
 
I think you have missed the point. Sure, people lose jobs to other people. And people change careers all the time. I agree that is a non-issue.

The point is when a devious person uses AI to falsely discredit another person, and the victim then loses their job or even their careers due to that false information. That is a problem. This is especially true when the devious person cannot be held accountable because they can remain cowardly anonymous and untraceable.

Freedom of speech does NOT give us the freedom to say whatever we want whenever we want or to tell damaging lies about others.

People have spread lies about others to discredit them since we were cavemen, probably; I'm not sure that's an AI issue. It's not new, it's not AI's fault, it's a human flaw finding a new vehicle to manifest itself.
Freedom of speech is under attack nowadays, in my opinion, with lateral arguments just like the one you used there.
 