
Postulation: Is anyone else concerned with the proliferation of AI?

Does AI have you worried?

  • Yes, but I'm excited anyway!

    Votes: 12 8.2%
  • Yes, worried about the potential problems/abuses.

    Votes: 91 62.3%
  • No, not worried at all.

    Votes: 10 6.8%
  • No, very excited about the possibilities!

    Votes: 10 6.8%
  • Indifferent.

    Votes: 14 9.6%
  • Something else, comment below..

    Votes: 9 6.2%

  • Total voters
    146
Science fiction tells stories about the human condition, using "non-human AIs" to help the audience disconnect from the machines. It's a literary technique, not a scientific one.

It leads to great stories and is a good starting point for philosophical debates. But just remember, it's science fiction, emphasis on the fiction. I believe Isaac Asimov himself said that the "Three Laws of Robotics" are actually about how humans make decisions (don't hurt others, listen to your boss, and protect yourself). The stories are about how this simple heuristic can violate human values even when the individual follows the three rules as closely as they can (even with superhuman, advanced, fictional AIs).

----------

EDIT: In the real world, I truly believe that the human ego is so powerful that we'd never allow "strong AI" to exist. We will continue to come up with new excuses for why each new AI accomplishment is "not intelligence". Just an eternal treadmill of changing definitions.
Interesting.

I'm a sentient, complex organism with a very powerful and efficient brain, equipped with everything I need for survival if you take AI away from me.

I can live without it, as my grandfather did, and his father before him.

If it's intelligent, it must mean life. It can be of low intelligence too, like a fish with a five-second memory.

Would the "AI community" call AI a form of life? Sentient intelligent life form?

Or just a really good computer program? It solved a puzzle a human couldn't. So did a hand saw.....
 
I think "thinking" also involves reasoning, logical deductions, comparing different topics, making value judgements, forming opinions, recognising the context of the task, thinking outside of the task, making own thoughts without previous input, asking questions, and knowing the limits of our knowledge. I haven't seen any AI do any of this.

Yeah, I doubt AI will ever dream, love, lose its inhibitions when it gets drunk, or ever have a conscience. I've probably only scratched the surface here, but let's face it, this is not what they will be built for.
 
I changed my initial vote because I didn't quite understand the subject. As I said in another thread, what we have now is not AI (the "I" implies "intelligence", which is not the case at the moment).

LLM is exactly that - a Large Language Model. It's not a "General Linguistic Model" (an approximate translation of "Model Lingvistic General"). Hence some (not insurmountable) issues, as someone wrote before, including national culture itself.

These are not small issues to get past; they're quite huge. Hence the sudden appearance of "system agents" (bleh). Nothing more than software robotics, which has been used for quite a while (with varying success).

At my workplace, 80% of the financial personnel disappeared because the job was only data input and nothing more; in HR we had 6(!) recruiters, and we have 1 now. PMs are kinda stressed about "AI", and I keep telling them that, at least for a few years, no AI can predict/adapt on the fly to changes that happen in the field (not in software, to be clear).

Back to the real topic - AI itself has 3 ways of expressing itself (my guess): Galactic Center style (wipe-out), Culture series style (nurturing?), or just leaving us to die like useless forms of life (I doubt that, but meh). Taking into account the constant meddling that we as a species do, the future is bleak.

Anyway, a Mandarin AI versus an English AI versus a Spanish AI (the most spoken languages, don't pick at the forks!!!) would be something to see... nah, no thx.
 
Regarding chess,
Many people NEVER called it "thinking" (because it isn't), myself included, as we knew how that victory was achieved.
Totally agree - it (early chess programs) is NOT, and never was, called "thinking" - except by total laypeople.

While the number is HUGE, the number of possible moves in chess is finite. Therefore, when a computer is determining its next move, it is simply looking at a bunch of if/then statements - essentially following a flow chart, determining the shortest path to victory. That is NOT thinking.
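To illustrate, here's a minimal sketch of that kind of brute-force game-tree search (minimax). The `game` object and its method names are hypothetical, for illustration only - not any real engine's API:

```python
# Minimax: exhaustively walk the finite tree of legal moves and pick
# the best-scoring line. Pure if/then bookkeeping, no "thinking".
# `game` is a hypothetical object with legal_moves(), apply(), undo(),
# is_over() and score() -- assumed names, not a real chess library.
def minimax(game, depth, maximizing):
    if depth == 0 or game.is_over():
        return game.score()              # static evaluation of the position
    best = float("-inf") if maximizing else float("inf")
    for move in game.legal_moves():      # finite set of branches
        game.apply(move)
        value = minimax(game, depth - 1, not maximizing)
        game.undo()
        best = max(best, value) if maximizing else min(best, value)
    return best
```

Real engines add pruning and heuristics on top, but the skeleton is exactly this flow chart.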

AI problem solving today can be (I didn't say is) different because in many cases, especially those involving living beings where each one is unique, there is an infinite number of possible outcomes, and what may be best for one may not be for another.

Is artificial intelligence real? I say, yes. Just as man-made diamonds are real. Synthetic motor oil is not only real, it is better than petroleum based oil in almost every measure. Is AI at Einstein level? Probably not. Does it understand emotion? No. Does it make mistakes? Yes. But so do we.

Can it reason? Yes - to some level.

And the biggie, IMO: can it learn? Absolutely, and that alone, by many definitions, indicates intelligence.
 
While the number is HUGE, the number of possible moves in chess is finite. Therefore, when a computer is determining its next move, it is simply looking at a bunch of if/then statements - essentially following a flow chart, determining the shortest path to victory. That is NOT thinking.

An 80B-parameter LLM is just that: 80 billion weights (each of which is just one number * another number + stuff).

It's a very large number, but it's still fundamentally the same thing. As it turns out, a neural network (today represented by A*B+C matrix multiplications) can emulate anything, including if/else statements, as long as you have enough neurons. (A fact that has been known in the AI community for decades - but only today do we have computers powerful enough to train an 80B model.)

The calculation is simply "what should the next word be, given all the other words in this discussion so far?". It's a new and intriguing way to use classical ANNs and seems to have some degree of application. But it's really not as complex as people are hyping it to be.

All these AI computers are just optimizing this matrix multiplication problem in different ways.
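A tiny toy example of that "A*B+C can emulate if/else" claim - the weights here are hand-picked for illustration, nothing like a trained model:

```python
import numpy as np

# One hidden layer of two ReLU "neurons" (A*B + C, then max(0, .))
# emulating the branch: 1.0 if x > 0.5 else 0.0.
def step(x):
    hidden = np.maximum(0.0, x * np.array([1000.0, 1000.0])   # A*B
                             + np.array([-500.0, -501.0]))    # + C
    return hidden @ np.array([1.0, -1.0])                     # output layer

print(step(0.4))  # -> 0.0  ("else" branch)
print(step(0.6))  # -> 1.0  ("if" branch)
```

Stack enough of these and you can approximate any decision logic, which is why scaling to 80B weights is a difference of degree, not of kind.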
 
Some here are convinced they totally understand AI.
I totally understand it conceptually. It's just lists of weighted variables and parameters. But of course, this is like telling someone you know binary and are therefore an IT expert.
 
I watch a lot of anime. Most people know about the Ghost in the Shell movie, but there is a show in that series called Stand Alone Complex. Near the end of the season, there were some AI-based robots. War pushes technology, and both Ukraine and Russia have autonomous drones that look for each other's tech to destroy.
 
I totally understand it conceptually. It's just lists of weighted variables and parameters. But of course, this is like telling someone you know binary and are therefore an IT expert.

The other tidbit is that neural nets are differentiable, meaning calculus can be applied to them.

That means that if you know the "Truth", you can take the partial derivative of the error (Output - Truth) with respect to weight #1, weight #2, and so on, for each of the 80 billion weights.

That means you now have a self-learning system, as long as you have enough computers to calculate this error and enough training data to make these weights fit something useful.
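A minimal sketch of that loop - one linear "neuron" trained by nudging each weight against its partial derivative of the squared error. The data is made up; a real 80B-weight run is this same idea at absurd scale:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(100, 3))        # 100 samples, 3 inputs
true_w = np.array([2.0, -1.0, 0.5])
y = x @ true_w                       # the "Truth"

w = np.zeros(3)                      # start knowing nothing
for _ in range(200):
    output = x @ w
    error = output - y               # (Output - Truth)
    grad = x.T @ error / len(x)      # d(squared error)/d(each weight)
    w -= 0.1 * grad                  # step each weight downhill

print(w)                             # converges toward [2.0, -1.0, 0.5]
```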

The innovation is in how to ask the correct questions. As I've said earlier, an LLM is simply a text predictor: given the earlier text, what should the next word be?

EDIT: the LLM is basically a system that converts words to numbers - or more precisely, portions of words (ex: por-tion-space-of-word-s) - then converts that into a math problem for the self-learning neurons, using the entire internet as the training set.
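A toy sketch of that words-to-numbers step (subword tokenization). The vocabulary here is made up for illustration; real tokenizers learn theirs from data:

```python
# Greedy longest-match tokenizer over a made-up subword vocabulary.
vocab = {"por": 0, "tion": 1, " of": 2, " word": 3, "s": 4}

def tokenize(text, vocab):
    ids = []
    while text:
        piece = max((p for p in vocab if text.startswith(p)),
                    key=len, default=None)  # longest matching piece
        if piece is None:
            raise ValueError(f"no token for: {text!r}")
        ids.append(vocab[piece])
        text = text[len(piece):]
    return ids

print(tokenize("portion of words", vocab))  # -> [0, 1, 2, 3, 4]
```

From there on, the model only ever sees those numbers.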
 
But of course, this is like tellimg someone you know binary and therefor are an IT expert.
LOL Or how some swap out a power supply or add RAM and suddenly they are electronics technicians! :rolleyes:
 
Completely indifferent to it currently. I do think the term is being used way too loosely though.

I'll probably be long dead, but I'd be super concerned once they create an actual AI that can think for itself, because it's going to realize real quick that the earth is better off without humans...
 
I'll probably be long dead, but I'd be super concerned once they create an actual AI that can think for itself, because it's going to realize real quick that the earth is better off without humans...
I don't think that's true.

I think the world would be better off without the shit bags that make it shitty to live here.

Edit:

In the end it comes down to money, power, and hoarding knowledge..
 
I think the whole "AI will destroy us!" thing is overblown and borders on crackpot status. I do think the risk of AI being used for fraud and manipulation is a concern, but not in a self-acting way. Overall, though, I think it has immense positive potential.
 
using the entire internet as the training set.
and this is 99% of its biggest issue re "hallucinations", IMO.

I think the whole "AI will destroy us!" thing is overblown and borders on crackpot status. I do think the risk of AI being used for fraud and manipulation is a concern, but not in a self-acting way. Overall, though, I think it has immense positive potential.
It's a crackpot theory for the tech at the moment. But the issue is where it will lead. Even Stephen Hawking was concerned about that, conceptually.
 
Can we use automotive as an AI example?

The vehicle uses radar and camera systems as inputs to predict events and enable autonomous driving.

It is only "intelligent" at this one function. But it's not intelligent. It's using inputs and outputs only.

The AI isn't looking through the window at 2 vehicles in a head-on collision and making the choice based on what it sees.

Scene.

An AI-driven, fully loaded 80-thousand-pound dump truck faces an unavoidable head-on collision with one of 2 vehicles. 1 vehicle has 3 children, a mother and a grandmother; the other, a single elderly male. The AI cannot make the choice of which car to destroy and doesn't know who's going to live or die; it just chooses left or right based on braking capability and distance relative to speed. The mother, grandmother and children died because their vehicle was 12" further away, which lowered the impact.

At what point are we going to focus on intelligence? You, as a sane human, would have to make that call with the same inputs as the AI, but with a much higher state of reasoning. Perhaps you have better ethics and understand that saving the children makes more sense. A valuable gift for intelligent humans.
 
and this is 99% of its biggest issue re "hallucinations", IMO.

Hallucinations exist because LLMs are fundamentally text-prediction machines. If the LLM doesn't know what the next word should be, it will reliably, 100% of the time, make something up that sounds correct.

There's no differentiation between text prediction and truth, or any other values we humans ascribe to our text. The LLM is simply predicting, and (with a high enough "temperature" setting) possibly randomizing its predicted results. With a hot enough temperature, LLMs will favor less-likely results (useful for casual poetry or other fun kinds of text manipulation).
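A minimal sketch of what that temperature knob actually does - the scores (logits) here are made up; a real LLM produces one per vocabulary token at every step:

```python
import numpy as np

logits = np.array([4.0, 2.0, 0.5])    # model's scores for 3 candidate words

def sample_probs(logits, temperature):
    scaled = logits / temperature     # hotter -> flatter distribution
    e = np.exp(scaled - scaled.max()) # numerically stable softmax
    return e / e.sum()

print(sample_probs(logits, 0.1))  # ~[1, 0, 0]: always the top word
print(sample_probs(logits, 1.0))  # top word dominates, some variety
print(sample_probs(logits, 3.0))  # flattened: less-likely words show up
```

Nothing in that math knows or cares whether the favored word is true.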
 
I'm loving it so far. It's been a massive time saver in writing documents and creating presentations.

Bring on my fully intelligent personal assistant, I can't wait!
 
I watch a lot of anime. Most people know about the Ghost in the Shell movie, but there is a show in that series called Stand Alone Complex. Near the end of the season, there were some AI-based robots. War pushes technology, and both Ukraine and Russia have autonomous drones that look for each other's tech to destroy.
Multiple iterations of GITS
 
Can we use automotive as an AI example?

The vehicle uses radar and camera systems as inputs to predict events and enable autonomous driving.

It is only "intelligent" at this one function. But it's not intelligent. It's using inputs and outputs only.

The AI isn't looking through the window at 2 vehicles in a head-on collision and making the choice based on what it sees.

Scene.

An AI-driven, fully loaded 80-thousand-pound dump truck faces an unavoidable head-on collision with one of 2 vehicles. 1 vehicle has 3 children, a mother and a grandmother; the other, a single elderly male. The AI cannot make the choice of which car to destroy and doesn't know who's going to live or die; it just chooses left or right based on braking capability and distance relative to speed. The mother, grandmother and children died because their vehicle was 12" further away, which lowered the impact.

At what point are we going to focus on intelligence? You, as a sane human, would have to make that call with the same inputs as the AI, but with a much higher state of reasoning. Perhaps you have better ethics and understand that saving the children makes more sense. A valuable gift for intelligent humans.
There is no ethical choice about who lives and who dies in a road incident. The elderly male might be loved by just as many people as the mother with 3 children. Saying that one has more right to live than the other is highly unethical, imo. If you choose to kill him instead of her, I bet you won't feel any better thinking that you chose the lesser of two evils. I see the point you're trying to make, but this was a bad example. There is no such thing as a lesser evil.
 
Before my role change into automation, I noticed an interesting problem with AI in IT support: new engineers relying far too much on it and not actually learning anything, or worse, making mistakes because they took what the AI said as gospel and just started punching in commands without understanding them. The number of times I walked past a desk with a ChatGPT window open on one screen and, on the other, a PowerShell window full of angry red error messages on client servers, using the global administrator account, beggars belief.

For my last couple of trainees I told them "absolutely no *GPT. You are to ask questions, test/trial and research problems yourself. Otherwise you won't learn a good troubleshooting methodology". I'm not sure if that philosophy has stuck as they're now training trainees, I didn't think to ask. But I do know that I trained some very competent engineers :)

There are a myriad of other reasons I don't like AI but the majority of them are either pretty well known or covered already. I can summarize my opinion as "we (as in the royal "we") are rushing headfirst into technology without considering the consequences of it, especially in terms of future human competency, privacy and security. But Pandora's Box is already open (especially at the corporate level) and there's no going back now".
 
The AI isn't looking through the window at 2 vehicles in a head-on collision and making the choice based on what it sees.
I say that is exactly what it is doing! Well, not through the windows, but through its sensors. And actually, it is through the window, as there are several sensors on the back side of the rear-view mirror that do indeed look out the window.

And not only that, in the near future (with IoT) all cars will be "connected" and not only will AI be looking ahead at what's heading straight at it, but it will be looking at side streets, parallel lanes, out the back and constantly "listening" for traffic notices like emergency response vehicles, accidents, disabled cars, etc. These inputs will be used to control speeds, stopping at lights, and to prevent you from being T-boned by a drunk running the light at the next intersection. Or slow down and move traffic out of the way of a fire truck on a call.

No doubt in a few years, when all cars are self driving and connected, "bumper to bumper" will be the standard. When the light turns green, the entire block of cars will "step off" at once, in military precision, like a platoon on the march being given the command, "Forward, march!"

An AI-driven, fully loaded 80-thousand-pound dump truck faces an unavoidable head-on collision
:( That 80,000-pound dump truck scenario is a one-off anecdotal exception and doesn't render the whole point moot! And sorry, but frankly, it really makes no sense!

If AI was involved, how could the dump truck or the other vehicles get themselves into an unavoidable situation in the first place?

A much more likely scenario is, for starters, big heavy vehicles would have fail-safe features in place to automatically ensure greater separation distances when moving. AI would "see" unsafe conditions and if necessary adjust speeds, change lanes or even slam on the brakes. AI would detect faults and initiate safety features immediately. AI in the other vehicles (and other traffic) would have "seen" the dump truck coming, perhaps miles ahead and taken preventative measures. AI in the other cars would have been alerted to problems with the dump truck and already taken evasive actions. Would that guarantee there would never ever be an accident? Of course not. But once again, one-off anecdotal exceptions don't make the rule.
 
I say that is exactly what it is doing! Well, not through the windows, but through its sensors. And actually, it is through the window, as there are several sensors on the back side of the rear-view mirror that do indeed look out the window.

And not only that, in the near future (with IoT) all cars will be "connected" and not only will AI be looking ahead at what's heading straight at it, but it will be looking at side streets, parallel lanes, out the back and constantly "listening" for traffic notices like emergency response vehicles, accidents, disabled cars, etc. These inputs will be used to control speeds, stopping at lights, and to prevent you from being T-boned by a drunk running the light at the next intersection. Or slow down and move traffic out of the way of a fire truck on a call.

No doubt in a few years, when all cars are self driving and connected, "bumper to bumper" will be the standard. When the light turns green, the entire block of cars will "step off" at once, in military precision, like a platoon on the march being given the command, "Forward, march!"


:( That 80,000-pound dump truck scenario is a one-off anecdotal exception and doesn't render the whole point moot! And sorry, but frankly, it really makes no sense!

If AI was involved, how could the dump truck or the other vehicles get themselves into an unavoidable situation in the first place?

A much more likely scenario is, for starters, big heavy vehicles would have fail-safe features in place to automatically ensure greater separation distances when moving. AI would "see" unsafe conditions and if necessary adjust speeds, change lanes or even slam on the brakes. AI would detect faults and initiate safety features immediately. AI in the other vehicles (and other traffic) would have "seen" the dump truck coming, perhaps miles ahead and taken preventative measures. AI in the other cars would have been alerted to problems with the dump truck and already taken evasive actions. Would that guarantee there would never ever be an accident? Of course not. But once again, one-off anecdotal exceptions don't make the rule.
That's not how autonomous driving works. Not even close.

Let's say you, the person, could avoid it by swerving off the road. The current configuration keeps the vehicle between the lines at all times.

Even my wife's Subaru will fight you, the driver, if you need to go over the line. In order to cross a line, you must use the turn signal, which deactivates the lane keeping.

I digress. The point was about AI making a choice NOT based on the very strict rules that are in place now.

It has no reasoning. It follows its rules and that's that.
 
For the first time in 25 years, my company is migrating to SAP HANA. Before that, SAP was always modules, like PE1, G50 and the like. This confirms the rumour that there is an army of programmers trying to have AI take over our functions. At the last town hall we had, the VP told us to get job training in other areas of the company.
 
For the first time in 25 years, my company is migrating to SAP HANA. Before that, SAP was always modules, like PE1, G50 and the like. This confirms the rumour that there is an army of programmers trying to have AI take over our functions. At the last town hall we had, the VP told us to get job training in other areas of the company.
Sad. I hope you find something good soon!
 
That's not how autonomous driving works. Not even close.
:(

I really don't understand you. You get stuck on a thought and refuse to see anything beyond it - even when you quote me, you refuse to see what I am saying.

First, those sensors do indeed look through the window - but as to the point you just made, did I say that is how autonomous driving works? Nope!

I said, and you quoted me, "in the near future", then I went on to explain how things "will be" with AI working with IoT, and how it "will be" when all cars are connected.

But apparently you refuse to accept what the future will bring because, according to you, that is "not even close" to how autonomous driving works today. :(

I guess AI will never "evolve" beyond what it is today. :rolleyes:
 