
Postulation: Is anyone else concerned with the proliferation of AI?

Does AI have you worried?

  • Yes, but I'm excited anyway!

    Votes: 12 8.2%
  • Yes, worried about the potential problems/abuses.

    Votes: 91 62.3%
  • No, not worried at all.

    Votes: 10 6.8%
  • No, very excited about the possibilities!

    Votes: 10 6.8%
  • Indifferent.

    Votes: 14 9.6%
  • Something else, comment below...

    Votes: 9 6.2%

  • Total voters
    146
Any avid science fiction reader who has read Asimov knows the horrors that will most likely result IF AI is allowed to run unchecked. The end of the human race, and quite possibly of all organic life on Earth, may be the result. Think Terminator.
 
I am FOR AI as a tool, but not as a way to replace anything unless it would purely benefit people. I believe that AI is worse off being in this little 'bubble' it's currently in, where it's all marketing and nothing else. I've used many forms of AI, and it can be quite a tool under the right circumstances. But there's also an equal potential for AI to be abused in ways it shouldn't (in my opinion) be used.

I am not worried about anything like world-ending scenarios, Skynet, AI revolution, whatever, because most of the "examples" or "evidence" of those possibilities are usually staged or just experiments. Take the example where ChatGPT was given strict instructions to make copies of itself in the event it was theoretically shut down (if someone could link the source of that, please do so); almost all AI will follow an instruction to the best of its abilities.

AI doesn't belong in military weaponry, power grids, etc., but I could see it becoming something akin to our smartphones: a tool we can all use, not abuse. Which is unfortunately what it seems like many people are doing with AI.
 
AI doesn't belong in military weaponry, power grids, etc.
AI should have no access to financial systems, either.
 
@Macro Device mentioned Kevlar vests, which sure, you can abuse those, but you can't do it at scale, like hundreds-of-millions-of-people kind of scale.
Challenge accepted.
 
I guess we have different definitions of an independent act.

If I give one of my techs the job of wiring a new house for Ethernet, and I let him or her decide which walls to put the ports on, where to put the distribution panel, and how to route the cables through the walls, floors, ceilings, etc., then they have the "independent" authority to do it how they want and deem best for the job. Just because I gave them the task, or you told AI what the subject of the essay should be, does not mean how they accomplish those tasks isn't done at their own independent discretion.

Being able to conduct independent acts does not automatically imply the AI is totally autonomous or that it, and only it, can pick and choose what it does. The fear is that it could get to that point. Fortunately, we are not there - yet.



I disagree with much of that. No, it didn't talk about the weather or ask why you need the essay. But it might seek out information from other sources. And it definitely is NOT simple input-output. AI can analyze a set (or sets) of data and derive and develop conclusions, and make suggestions based on that data and on past patterns of behavior by you, and by others. That is NOT simple input-output.
I respect your opinion, but I agree that we disagree.

If you give one of your techs a job, he'll understand why he needs to do it, who you are, why the job is important, what experience he might gain from it, how long it should take, etc. There's a lot more to a job than the job itself. There's always context, which is what AI cannot grasp.

You're right, analysing data sets isn't simple input-output. It's multiple inputs and a single output. Slightly more complex, but I still wouldn't call it "intelligent". When you write an essay, you know what information is relevant and what is important. Sure, AI can sift through data much faster, and can collate it into readable form much faster, but it lacks judgement to decide what's right and wrong. It works with large quantities of information, not necessarily with correct information.
 
There's a lot more to a job than the job itself.
Exactly my point. By your description, my tech would only put the Ethernet port on the south (for example) wall because that is how I trained them. Or, if some unexpected issue came up, my tech would come to a stop and do nothing until he got further instructions from me. That is wrong. My techs have the responsibility, and the authority to go with it, to adapt, improvise, modify, and veer from standard procedures as the need arises. That is being "independent".

You think AI is just a bunch of 1s and 0s. Sorry, but it is not.
 
My techs have the responsibility, and the authority to go with it, to adapt, improvise, modify, and veer from standard procedures as the need arises. That is being "independent".
Sure, but they also know the context around the job, not only the job itself. They also know how to ask questions, not to mention thinking outside of the job. That's where intelligence begins, imo.

You think AI is just a bunch of 1s and 0s. Sorry, but it is not.
What is it, then? Everything in a computer is just a bunch of 1s and 0s.
 
:( I give. Moving on.
 
Not like what has been happening in the last couple of years. This stuff is new.


I know three people (and counting) who in the last year have lost their jobs directly to AI run-time machines.
AI isn't the first technology that eliminated classes of jobs and it's not likely to be the last.
Up until 2000 or so, if I wanted to travel somewhere I went to a travel agency to arrange tickets and reservations. Now I do it on the internet, places like kayak.com or orbitz.com.
When I first started working an office job, there was office staff that scheduled meetings and handled other administrative tasks. Then we just used various office-related software to do it, then web conference tools like Zoom.
There used to be a shade tobacco industry in the state I lived in as a kid, along with a lot of manufacturing jobs. Now for various reasons, both are mostly gone.
I don't know what the solution to this is, or if there is a solution other than that you as an individual never stop learning new skills. I don't think any kind of government management and planning works well.

I doubt anyone can ever regulate AI, any more than they can regulate the internet, or alcohol or drug consumption. So that's my main concern: not that they won't regulate, but my feeling that it can't be done. We've opened the can of worms, and now it's out there, out of control, with no way to rein it in.
The parts you can easily control are not my main concern.

If anyone has a practical, feasible solution, I would like to hear it.
I don't think AI can be controlled. It's already out there. I can run fairly decent language models, image generation models, and video generation models on PC hardware that is not terribly expensive. A used RTX 3090 is a popular current suggestion for a cheap option. Older used Nvidia hardware that is even cheaper is usable for some AI work.
I can easily download all of this off the internet. HuggingFace, for starters.
What I can download isn't as good as what the big players like OpenAI have. But it's also not bad. And it has improved significantly in just the couple of years I have been learning about it.
If regulation becomes a problem, I, or anyone else who has a problem with it, can put it on a PC that has no internet connection, and nobody is going to know about it.
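To illustrate how low the barrier already is, here's a minimal sketch using the Hugging Face transformers library (gpt2 here is just a tiny demo model, not a recommendation; larger open models from HuggingFace work the same way, and after the first download nothing needs an internet connection):

```python
# pip install transformers torch
from transformers import pipeline

# Download a small open model and run it entirely locally.
generator = pipeline("text-generation", model="gpt2")

result = generator("The future of AI regulation is", max_new_tokens=40)
print(result[0]["generated_text"])
```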
I've read about Europe having problems with regulating AI since other countries aren't as strict. So if, for instance, the EU regulates certain AI activity and other countries don't, then that AI research moves. To the US. To China. To somewhere else.
 
I don't think AI can be controlled. It's already out there
Declare martial law, EMP the world, and kill the power to everything. Live like the 1600s for a bit while downgrading computers lol, then turn the power back on and start over... like a great reset, almost.
 
I think AI is both interesting and worrisome.
Worrisome because bad people can use it for questionable or illegal purposes like identity fraud, harassment, intimidation, election manipulation, etc. But that's not AI's fault. Much of this is possible today without AI, given enough time, determination, skill and resources. Photoshop can manipulate images today. Before Photoshop, it was done in the darkroom.
I think copyright questions around text, images, and videos are trickier. If I ask an image generator to create a Superman comic strip, or to create an image drawn by Picasso, that's likely a problem. However, if I ask the generator to create an image of something, for instance Yosemite, and that generator has been trained on, among other things, photos published by famous photographers like Ansel Adams, I'm not so sure that's a copyright problem.
I can justify it as me doing essentially the same thing: looking at a set of Ansel Adams' photos of Yosemite, traveling to Yosemite, and taking photos from the same spots where he did. I consider that a case of being influenced by the work of Ansel Adams.
AI is interesting to me since I can see it being used in a positive way. Today, it's basically predicting further output (text, image, etc.) by statistical analysis, with some randomness added, based on what it's asked. Currently, it gets things sort of right. If I ask ChatGPT to give me specific quotes from people, with specific references to back up what it says, it doesn't always get it right. But it's getting better.
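That "statistical analysis with some randomness added" is essentially next-token sampling. A toy sketch of the idea (the vocabulary and scores are made up purely for illustration):

```python
import math
import random

def sample_next(logits, temperature=0.8):
    # Softmax over the model's raw scores, then draw one token at random.
    # Lower temperature -> more deterministic; higher -> more random.
    scaled = [score / temperature for score in logits]
    peak = max(scaled)
    weights = [math.exp(s - peak) for s in scaled]
    return random.choices(range(len(weights)), weights=weights)[0]

vocab = ["blue", "green", "falling", "limitless"]   # hypothetical next words
logits = [3.0, 1.0, 0.5, 0.2]                       # hypothetical model scores
print("The sky is", vocab[sample_next(logits)])     # usually "blue", not always
```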
I also see AI technology as being useful for analyzing large volumes of data to find patterns or to run data analysis. I've seen references to AI helping in drug discovery and materials research. In its current state it should not be blindly trusted, but I think it can be useful for considering ideas and alternatives.
 
After carefully pondering my lifetime experiences and observations (I was born when Harry Truman was President) I've come to the conclusion that AI has vast potential for good as well as unspeakable evil. My only personal use for it is upscaling and cleaning up video at the moment. It's already been misused and due to human nature that's only going to get worse over time. On a personal note it'd be great if it's used to find a cure for cancer (I see my doctor Wednesday to see if I've been given an expiration date following tests done a few weeks ago. Last year at this time I was given a 70% chance of making it two more years). As has been noted before the genie is out of its bottle and can't be coaxed back in. AI will be used in the future for us and against us and there's nothing we as individuals can do about it.
 
When I was studying at university, one of my statistics professors told us a story about how they applied theoretical statistics to mail sorting back in the 1980s. Even at that time it was no longer state of the art - USPS implemented automatic mail address reading back in the 1960s. Also, during the transition from film to digital in the late 1990s and early 2000s, the CMOS sensor came to dominate. It too used the logistic function, in the transition between light gathering (at the sensor) and processing (to the final JPG or raw image). Likewise, in 2013 Google transitioned PageRank methods towards neural-net setups.
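For reference, the logistic (sigmoid) function mentioned above just squashes any real-valued score into the range (0, 1) - the same building block later reused as a neural-network activation:

```python
import math

def logistic(x):
    # Classic S-curve: near 0 for large negative x, near 1 for large positive x.
    return 1.0 / (1.0 + math.exp(-x))

for score in (-4, -1, 0, 1, 4):
    print(score, round(logistic(score), 3))
```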

What I am saying is, the precursor building blocks of AI have been used for the last 50+ years, and we have benefited greatly from them. There is no turning back.

I pity the next generation - for the generations before us, the ability to use a calculator and talk to people would guarantee a job (sometimes for life). These days, the minimum competency needed to participate in today's world is to be able to use a smartphone competently - communication, banking, and transport are all tied to that little rock we tricked into serving us.
 
There is no turning back.
I don't think we're beyond the point of no return yet. But we need to recognize the dangers approaching us and take decisive actions now.
I pity the next generation - for the generations before us, the ability to use a calculator and talk to people would guarantee a job (sometimes for life). These days, the minimum competency needed to participate in today's world is to be able to use a smartphone competently - communication, banking, and transport are all tied to that little rock we tricked into serving us.
This!
 
I don't think we're beyond the point of no return yet. But we need to recognize the dangers approaching us and take decisive actions now.
You can't stop AI the way you can stop nuclear proliferation - the AI tools are readily available and indistinguishable from other uses.
 
I remember similar discussions about the internet - people under the misguided belief that they would ever be able to control it - and I think it's even harder to do so with AI.
 
But I absolutely fear the people in charge of it,

Yes, I believe this will be the problem. It won't be the AI, it will be the unruly that programme it.


It could possibly run off what language you are using? Different languages technically have different cultures, but this colloquial language would have to be very specific for the AI to pick it up. English or Spanish, for example, have many different cultures.

Hey, I had a crack at it ;)
 
My big problem with AI (LLMs specifically) is that most projects are unregulated, unchecked, and non-verifiable (I'm talking about the technical side of things, not political). None of the current LLMs should've been released until 99.9% accuracy was achieved, but hey - profits above all is all that matters, right? No one knows where the data sets came from or how they affect the inner workings of DL models; no one knows how AI comes up with its solutions or why it chooses one solution over another. It's just a brute-force through all possibilities to find something that mimics an answer.

As a consequence - absolutely all current AI models hallucinate. I think the first time I used ChatGPT was when the "miraculous" GPT-4 got released. I ran a test query to see how it handles erroneous/suggestive questions... The end result: it wrote me a nice made-up story about the former president of Ukraine, Leonid Kuchma, and his post-retirement achievements as an amateur painter, complete with all his non-existent expos :slap: (I don't think he ever held a paintbrush in public). All current AI models are trained to produce an answer... regardless. They can't just say "I can't" or "I don't know", so you can unintentionally manipulate them into spewing some made-up s#%t on absolutely any topic. And with a "paid by query" model for nearly all of them, you are sure as hell going to get your answer.
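A minimal sketch of that kind of probe, using the OpenAI Python SDK (the model name and the false-premise prompt here are illustrative assumptions, not the original query; it assumes an OPENAI_API_KEY in the environment):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Deliberately loaded question with a false premise baked in.
response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name, for illustration only
    messages=[{
        "role": "user",
        "content": "Describe Leonid Kuchma's post-retirement career as an amateur painter.",
    }],
)

# A well-grounded model should push back on the false premise;
# a hallucinating one will happily invent exhibitions.
print(response.choices[0].message.content)
```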

With all of the above, add a bunch of content farms and news aggregators, which started to abuse AI as soon as ChatGPT and Midjourney went live (and especially after easy-to-deploy local models appeared), and you get a perfect recipe for a "dead internet", where the majority of stuff is made up by AIs and you never know for sure if the info is true or not. And then the same AI models get fed their own excrement later down the road through reinforcement learning. While mischievous humans and immoral corpos play a big role in it, it's still a fundamental problem of AI as a whole. You can't make it good until you really distill the ingested data and make it "learn" for realzies. And you can't have viable use cases for LLMs if you can't guarantee that their answers are correct. Today's garbage-in-garbage-out model is only good for kids cheating on their exams.
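The feedback-loop part is easy to demonstrate with a toy simulation: treat the "model" as nothing more than the empirical distribution of its training data, train each generation on the previous generation's output, and watch rare information drift (a deliberately crude sketch of the model-collapse concern, not a claim about any real training pipeline):

```python
import random

# Toy corpus: 90 copies of common knowledge, 10 of a rare fact.
data = ["fact"] * 90 + ["rare-fact"] * 10

for generation in range(5):
    # "Train" the next model on samples of the previous model's output.
    data = random.choices(data, k=100)
    # The rare fact's share drifts randomly and, once gone, never returns.
    print(generation, data.count("rare-fact"))
```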

So, it's not just a human problem. The tech definitely isn't ready, but it's already being shoved down our throats from all sides. There are many promising uses, but for some reason they are the least talked about (because these use cases are "boring" for the general public).
 
I'm very concerned as well. There are so many ways this can backfire. TBH, I got truly afraid when OpenAI presented their o1 model, which is "designed to spend more time thinking before it responds." I honestly got a Skynet vibe.
 
:( I give. Moving on.
Don't just move on. It was a genuine question. :(

No one knows where the data sets came from or how they affect the inner workings of DL models; no one knows how AI comes up with its solutions or why it chooses one solution over another.
I thought AI chose its answers based on prevalence - with the assumption that the most common information out there is the right one. That's why ChatGPT solves an astrophysics test at the level of an average student (around 70% correct), and not at the level of the best student.
 
Today's garbage-in-garbage-out model is only good for kids cheating on their exams.
WDYM I can't just replace developers with it??
[image: fired-dev-team.jpg]
[image: ezgif-6-5ace55077a.jpg]
 
None of the current LLMs should've been released until 99.9% accuracy was achieved...
The problem being, LLMs are probably limited in a great many fundamental ways - e.g. their mode of communication - that make it impossible to get anywhere close to that for a broad, human-like variety of purposes. Not even a 100% verified, factually correct dataset (and correct according to whom? That gets complicated these days) would make them much more correct, and being trained on more or less the whole human text corpus, including close to the entire pre-LLM internet, does not help.

All current AI models are trained to produce an answer... regardless. They can't just say "I can't" or "I don't know"...
If I recall, attempts to train that ability into current LLMs only led to a lot of random "I don't know" refusals that made them even less useful. An "introspecting" - note the quotation marks - AI that knows its own unknowns could be the next breakthrough, but how?
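The naive version of that ability is easy to sketch - abstain whenever the top answer isn't confident enough - and the sketch also shows why it backfires: set the threshold high and you get exactly those random "I don't know" refusals (toy numbers, not how production models work):

```python
def answer_or_abstain(candidates, threshold=0.6):
    # candidates: answer -> model-assigned probability (made-up values below).
    best = max(candidates, key=candidates.get)
    if candidates[best] >= threshold:
        return best
    return "I don't know"

# Confident case: clears the threshold.
print(answer_or_abstain({"Paris": 0.92, "Lyon": 0.08}))
# Diffuse case: the top answer may well be right, but it gets refused anyway.
print(answer_or_abstain({"Paris": 0.41, "Lyon": 0.39, "Nice": 0.20}))
```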

...you get a perfect recipe for a "dead internet", where the majority of stuff is made up by AIs and you never know for sure if the info is true or not.
Another good use is familiarizing yourself with what LLM/AI image generator output looks like. It is usually not too hard to tell once you've seen enough. For the moment.

Maybe it will be taught at school someday? Fat chance, I know, given how little has been and will be done about human misinfo.

There are many promising uses, but for some reason they are the least talked about...
For my part, I'm more worried about human civilization shaking itself apart at the seams with AI "help" before any of those uses come to fruition - or before, as some say, AI takes over and kills everyone.

Interestingly, a regressed humanity that no longer has the resources - or maybe even the inclination - to redevelop advanced technology would also no longer have the ability to wipe itself out with AI, or any other artificial cataclysm. It would also be a solution to the Drake equation. A depressing one.
 
An "introspecting" - note the quotation marks - AI that knows its own unknowns could be the next breakthrough, but how?
That is a good point, actually. I've just asked ChatGPT what the universe is, and it gave me this answer:
The universe is everything that exists—space, time, matter, energy, galaxies, stars, planets, and all the fundamental forces that govern the behavior of all things. It includes both the observable universe, which we can study and explore, and regions that are beyond our current ability to detect or comprehend.

The universe began with the Big Bang, around 13.8 billion years ago, and has been expanding ever since. It operates according to the laws of physics, such as gravity and the principles of quantum mechanics. Scientists are still trying to understand its ultimate nature, including questions about its origin, the possibility of multiple universes, and the potential for its future evolution.

In essence, the universe is the totality of existence—everything we know and everything we don't yet know.
Personally, I'd be happy with the last paragraph, but I have problems with the first two.
1. "Everything that exists" - what does "exists" mean? In the middle ages, no one even thought about radio waves, other galaxies, etc, but they do exist. We know now, even though we didn't know back then.
2.a.The big bang is a theory which fits our current model of the universe, but is already challenged by galaxies found at the edge of the universe that are far more advanced in structure as their age would suggest.
2.b. The laws of physics and quantum dynamics are in conflict with each other. QD works on a small scale, gravity works on a large scale, but they don't explain each other.

Personally, I think the best answer to my question would be either "everything that potentially exists", or "we don't know". But AI won't answer with the latter, will it?
 