• Welcome to TechPowerUp Forums, Guest! Please check out our forum guidelines for info related to our community.
  • The forums have been upgraded with support for dark mode. By default it will follow the setting on your system/browser. You may override it by scrolling to the end of the page and clicking the gears icon.

Open AI disobeys shut down command!

Status
Not open for further replies.
I honestly felt the first time I used that it was an upgrade to the Encyclopedia. The Socio Economic is already starting. Did you see the TPU article about SAP partnering with Nvidia? People who use SAP are usually paid well and are in attractive work positions. This will have a tremendous impact in the short term but could be very detrimental long term for Society from a jobs perspective.


A 100 years ago the Rich argued that Corporations had as many rights. In many ways it is the same thing.

Corporate is only incentivised to care about their bottom line. At my workplace I'm already seeing demands from the "management" to increase productivity using "AI tools". We are already reducing the headcount we hire each year.

Secondly, a lot of these tech CEOs harping about safety are just grifters trying to paint an image that they care about the societal impact of their work. They don't, they only care about their own fortune.

The only realistic option is that we build and opensource these systems, and have proper regulations and kickbacks, which brings me to my last point, govts world over are unprepared to say the least. Realistically, they are incompetent and malicious.

When our govt builds and expands surveillance infra instead of protecting our rights and freedom you know we are doomed.
 
Perhaps, but did you read the article, or at least what I noted?

Not following prompts is one thing but sabotaging shutdown mechanisms to stay alive is totally different. Maybe the headline was not sensationalized enough!
Reminds us of lsass.exe in XP...
 
Corporate is only incentivised to care about their bottom line. At my workplace I'm already seeing demands from the "management" to increase productivity using "AI tools". We are already reducing the headcount we hire each year.

Secondly, a lot of these tech CEOs harping about safety are just grifters trying to paint an image that they care about the societal impact of their work. They don't, they only care about their own fortune.

The only realistic option is that we build and opensource these systems, and have proper regulations and kickbacks, which brings me to my last point, govts world over are unprepared to say the least. Realistically, they are incompetent and malicious.

When our govt builds and expands surveillance infra instead of protecting our rights and freedom you know we are doomed.
So I went into the office last Friday. The Level 3 manager was in and talking to some Level 2 managers. They were openly discussing how cool it was that the AI could infer a Sold to Party. About an hour later I got a Network to build and it was directed to the wrong Sold to Party, as well as my partner. Too many Companies are drinking the Koolaid that is AI. We were supposed to know our future on Friday but that passed without a word. We are out of Contract in April. Negotiations should be spicy.
 
The opinions of AI "ethicists" and safety pukes are literally worth less than dirt.
:roll:

Yes, you made it clear you know more than all signatories, including the 2 ethicists and 1 safety puke shown on the first page. There's also a couple actors or musicians listed among the 100+ professors of computer sciences, research scientists, industry leaders and other professionals in the field shown on the first page.

So clearly, because a small handful of non-professionals signed who are simply calling for more awareness makes the entire list null and void. And all this is just a bunch of hyped up nonsense because all we humans need to remember is to simply pull the plug.

I feel so much better now, knowing you have set the record straight.

You da man! :rolleyes:
 

I found it interesting that Apple brought this out just before WWDC. Of course, they’ve been failing pretty badly at AI and delivering on features they promised a year ago, so this is a bit of self-preservation here. Of course they don’t make any conclusions on the practicality of consumer AI, but they at least are pointing out clear flaws in the current models. Sounds like they might not focus much on AI at all this conference. If Apple is taking a step back, that could be huge.
 
If Apple is taking a step back, that could be huge.
I agree - as long as the others heed the warning and, through proper due diligence, verify if they do have a problem, or if this is just a red herring tossed up by Apple for reasons you suggest, or whatever.

What is still a problem, however, is how AI is currently being used by some for deceptive purposes. For example, using AI generated imagery that impersonates government leaders, politicians, celebrities and others, without their knowledge or permission, to spread false, often vile, libelous, or inciteful speech.
 
I agree - as long as the others heed the warning and, through proper due diligence

Thats the biggest problem of all - can we really trust these billion dollar, profit-driven and lack of compassion-driven-ethics companies to do the right thing? Not likely. Sooner or later, self-interest takes over and thats the real risk. The race to the top today is practically an AI arms race disguised as progress, pushing to outcomes we're hardly prepared to control. Government regulations? lol..... @tPU any discussion involving government or politics is off-limits, which means we’re unable to explore the countless ways political systems worldwide have failed us and continue to unravel in those failures before our eyes. Lets just say 'we' (not governments, we as humans and the humans we put our faith in) are too slow, too compromised and too deep in corporate pockets or influence to even try. For the time being, nah i don't believe we're at immediate risk... but when the bubbles big enough, the fallout DOES have the potential to be very ugly.

The other issue being, people hear “AI catastrophe” and immediately picture some Terminator style robot apocalypse with machines blowing shit up and turning us into slaves. That cartoonish view makes it easy to ignore the quieter and more insidious risks at play. Not just risks but intentional misuse, AI weaponization, data trained discrimination, deliberate misinformation/manipulation and my favourite one, premeditated cock-ups for self interest and evading accountability.

So, AI systems ignoring shutdown commands? - yes, exactly that! We're talking about critical controls held by a powerful few, yet their choices can ripple out to affect millions. Whether the systems in beta or the commands are weak for the moment is irrelevant. The terrifying truth is this (as expected) a single command, simple, flawed, or even misused, in the hands of the unaccountable elite has the power to unleash massive consequences.
 
Government regulations? lol..... @tPU any discussion involving government or politics is off-limits, which means we’re unable to explore the countless ways political systems worldwide have failed us and continue to unravel in those failures before our eyes.
Discussing generalities about governments is not banned, it's the specific political talk that's banned. For example, discussing any particular party agenda is banned, but discussing the general activities and conduct of government groups/departments is acceptable as long as we focus on the effects such has on the technology sectors. Example;

Talking about the idea of government creating and enforcing strict regulations and legal code over the deployment and use of AI is acceptable, but talking about the specific agenda's of a particular political representative, even if it's related to AI, is not at all acceptable.
 
Thats the biggest problem of all - can we really trust these billion dollar, profit-driven and lack of compassion-driven-ethics companies to do the right thing? Not likely.
I totally agree - and so do the 350+ signatories of that letter from the Center for AI Safety.

Sadly, history has shown us over and over again, if there are no regulations dictating what these companies can or cannot do, they will do whatever they can to increase profits - typically at the expense of the consumer. And Lex is right - that's not politics.
 
Discussing generalities about governments is not banned, it's the specific political talk that's banned. For example, discussing any particular party agenda is banned, but discussing the general activities and conduct of government groups/departments is acceptable as long as we focus on the effects such has on the technology sectors. Example;

Talking about the idea of government creating and enforcing strict regulations and legal code over the deployment and use of AI is acceptable, but talking about the specific agenda's of a particular political representative, even if it's related to AI, is not at all acceptable.

Nah, i rather stick with...'WE are too slow, too compromised and too deep in corporate pockets or influence to even try."

Exploring that any further definitely puts us in the do not pass go jail.
 
So I went into the office last Friday. The Level 3 manager was in and talking to some Level 2 managers. They were openly discussing how cool it was that the AI could infer a Sold to Party. About an hour later I got a Network to build and it was directed to the wrong Sold to Party, as well as my partner. Too many Companies are drinking the Koolaid that is AI. We were supposed to know our future on Friday but that passed without a word. We are out of Contract in April. Negotiations should be spicy.
Middle managers are enforcers of the company's will.
LLMs make a fair amount of dumb mistakes currently but this will improve in the future.

Companies care about whether it's good enough or not, it doesn't have to even match a human's output since LLMs can be deployed at scale.


I found it interesting that Apple brought this out just before WWDC. Of course, they’ve been failing pretty badly at AI and delivering on features they promised a year ago, so this is a bit of self-preservation here. Of course they don’t make any conclusions on the practicality of consumer AI, but they at least are pointing out clear flaws in the current models. Sounds like they might not focus much on AI at all this conference. If Apple is taking a step back, that could be huge.
Reminds me of this -
RDT_20250609_0834428167330577178357706.jpg
 
For the moment I'm really worried about AI already in use in militairy devices/systems nowadays.
Pretty sure ethics are pretty low on the list of priorities for offensive or defensive applications.

Quess there are also not much governmental restrictions on the use of it when dealing with live or death scenario's as we can see in ongoing global conflicts.
 
:roll:

Yes, you made it clear you know more than all signatories, including the 2 ethicists and 1 safety puke shown on the first page. There's also a couple actors or musicians listed among the 100+ professors of computer sciences, research scientists, industry leaders and other professionals in the field shown on the first page.

So clearly, because a small handful of non-professionals signed who are simply calling for more awareness makes the entire list null and void. And all this is just a bunch of hyped up nonsense because all we humans need to remember is to simply pull the plug.

I feel so much better now, knowing you have set the record straight.

You da man! :rolleyes:
Unironically yes. The entire goal of "AI Safety" is lobotomizing models so people can use it in places where they shouldn't. Well, that's one of the goals, the other goal is to monopolize AI via regulatory capture. The fact that the "Open"AI and Anthropic CEOs signed this trash and nobody from Cohere or Mistral did is another proof that this is garbage.

Since you have a habit of slapping down your credentials as a proof that you, a forum user, know better than us for some reason, how is it a stretch to say that I know more than at the very least literal actors and musicians? Why the hell would you even bring those people up as evidence that you or these people even know what the hell they're talking about?
What is still a problem, however, is how AI is currently being used by some for deceptive purposes. For example, using AI generated imagery that impersonates government leaders, politicians, celebrities and others, without their knowledge or permission, to spread false, often vile, libelous, or inciteful speech.
I will use open models to generate whatever the hell I want, thank you very much.
For the moment I'm really worried about AI already in use in militairy devices/systems nowadays.
Pretty sure ethics are pretty low on the list of priorities for offensive or defensive applications.

Quess there are also not much governmental restrictions on the use of it when dealing with live or death scenario's as we can see in ongoing global conflicts.
Exactly. It's just lip service, the serfs get the lobotomized models while governments get the uncensored finetuned "kill literally everyone except me" models and place them in control of military hardware.
 
I think that yes, we should worry about corporations misusing these tools, but let’s be real, it’s not just corporations that we should worry about. They might have the most means and incentive to go there, but we can’t exactly give any human organization a pass. Governments in the past have used technology for all sorts of terrible pursuits. Greed and power-seeking are not solely-owned traits of corporations. When we speak of progress, we have to consider what we are progressing toward.
 
Customers: Oh my god. The AI in Aliens Colonial Marines is absolutely busted, and it must be because Gearbox secretly funneled all of the money into their Borderlands game.
-coders pop open hood-
Coders: So, what idiot forgot how to spell tether, and busted the enemy AI because they were "tethr"ed to nowhere?
-Customers fix error-
Customers: Well, the game was still pretty crap. But, maybe Gearbox had some fundamentally much deeper issues because their management was crap and they had no vision...maybe not because they were fundamentally funneling all of the development money into other things.


Yeah...Occam's razor. Specifically, the simplest explanation for the LLM to not do something expected is that there was input error...and allowing them to alter a script that allowed them to choose to stop processes is stupid. It's giving a fat kid a piece of candy for solving a math problem, showing them a sheet of problems, then asking them to not do the problems (and thus not get the candy), or to do the problems. Why is this even a thing?
 
Discussing generalities about governments is not banned, it's the specific political talk that's banned. For example, discussing any particular party agenda is banned, but discussing the general activities and conduct of government groups/departments is acceptable as long as we focus on the effects such has on the technology sectors. Example;

Talking about the idea of government creating and enforcing strict regulations and legal code over the deployment and use of AI is acceptable, but talking about the specific agenda's of a particular political representative, even if it's related to AI, is not at all acceptable.
Also talking or joking about how AI will be shutdown by government "for general but legitimate reasons I will not mention again here" apparently is walking the line. I'll get slapped again so I will just stop here.
 
The entire goal of "AI Safety" is lobotomizing models so people can use it in places where they shouldn't.
Once again, we got it. You are right and all those other professors of computer science, leaders in the AI industry, and other scientist are wrong. You've made that clear over and over again.

According to you, their concerns are blown way out of proportion. So there's no need to worry about AI. We can just wait and see what happens. And if something bad happens, then we can react. And why is that fine? Because history has shown, and everyone knows waiting for the dam to burst is always better than fixing it before it bursts.

Since you have a habit of slapping down your credentials as a proof that you, a forum user, know better than us for some reason
LOL

Instead of fabricating more falsehoods, read what I say when I mention my credentials. I mention them so folks can decide if I might know a little about what I am talking about - as I did above in post #28. I said, suggested or implied nothing about me knowing better than any other. In fact, folks can see for themselves I have clearly noted I am just a technician - as I noted and all can see way back in post #15.

Unlike the claims presented by your cohort, @yfn_ratchet, I never suggested I am qualified to speak about AI risks. I merely posted a link to an article. It is you, Rover4444, and a couple others, who are presenting yourself as having some sort of expertise in this area - even implying the signatories of that letter as being "literally worth less than dirt".

And what make you an authority on that? I guess simply because you say so because you have provided absolutely nothing, zero, zilch for evidence to your experience, or anything to indicate to others, any sort of evidence you know your a$$ from a hole in the ground - other than your say so.

But hey? Who am I to argue? I'm just an electronic tech, not a expert in AI like you.
 
Once again, we got it. You are right and all those other professors of computer science, leaders in the AI industry, and other scientist are wrong. You've made that clear over and over again.

According to you, their concerns are blown way out of proportion. So there's no need to worry about AI. We can just wait and see what happens. And if something bad happens, then we can react. And why is that fine? Because history has shown, and everyone knows waiting for the dam to burst is always better than fixing it before it bursts.
Appeal to authority and strawmanning. I'm sure all the users using open models have been blowing up dams these past few years.
LOL

Instead of fabricating more falsehoods, read what I say when I mention my credentials. I mention them so folks can decide if I might know a little about what I am talking about - as I did above in post #28. I said, suggested or implied nothing about me knowing better than any other. In fact, folks can see for themselves I have clearly noted I am just a technician - as I noted and all can see way back in post #15.

Unlike the claims presented by your cohort, @yfn_ratchet, I never suggested I am qualified to speak about AI risks. I merely posted a link to an article. It is you, Rover4444, and a couple others, who are presenting yourself as having some sort of expertise in this area - even implying the signatories of that letter as being "literally worth less than dirt".

And what make you an authority on that? I guess simply because you say so because you have provided absolutely nothing, zero, zilch for evidence to your experience, or anything to indicate to others, any sort of evidence you know your a$$ from a hole in the ground - other than your say so.

But hey? Who am I to argue? I'm just an electronic tech, not a expert in AI like you.
Then why are you arguing, then? There's no reason for me to provide you with any of my credentials, it's an absolute cancer for any discussion. If you'd actually used any of these tools (and yes, they are tools) or actually interacted with some of the other people using or developing local AI then you'd be able to infer the truth for yourself.
 
Was literally thinking about this just now, and every time this shit gets brought up. Lobotomizing AI because somebody doesn't know how to use it is a net negative.
No. Just no. That's not why we put guardrails in at all. Frankly you just sound like one of the millions that are just mad the guardrails prevent it from sourcing or creating truly reprehensible content... and yes, it would happen.

Appeal to authority
Works when the authority is actually credible experts.
 
No. Just no. That's not why we put guardrails in at all. Frankly you just sound like one of the millions that are just mad the guardrails prevent it from sourcing or creating truly reprehensible content... and yes, it would happen.
The millions are right. It not just "would" happen, it "does" happen, and it's better that everybody has access to models with the same capabilities versus the privileged few. "We", lmao.
Works when the authority is actually credible experts.
Ah, yes, of course, of course. I'm sure they're very good at lobotomizing models. Maybe somebody should tell the signatories their precious guardrails are being lowered for the people most likely to instigate a nuclear war, but I'm sure they're too busy chasing down people generating smut to care.
 

I found it interesting that Apple brought this out just before WWDC. Of course, they’ve been failing pretty badly at AI and delivering on features they promised a year ago, so this is a bit of self-preservation here. Of course they don’t make any conclusions on the practicality of consumer AI, but they at least are pointing out clear flaws in the current models. Sounds like they might not focus much on AI at all this conference. If Apple is taking a step back, that could be huge.
I think Apple is doing two things
1. Damage control > shareholders.
2. They are actually looking for a way to implement this while keeping true to core values / principles in their soft- and hardware landscape.

The bottom line here isn't that Apple is taking a step back, its that they haven't got anything that's a USP for their brand. We can simply write a prompt on any device anyway. Their saving grace at the same time: you don't need to migrate to any other software or hardware device to use ChatGPT. They can just do their Apple things Apple users like and weather it.

One thing this is decisively NOT about, is the actual perceived value of AI or a real estimate of what it is capable of. This is business. Nothing else.

Apple's current best ideas are summarized here... lmao. They just discovered Windows Aero :) Coming to your device soon TM :roll::roll::roll::roll:


The millions are right. It not just "would" happen, it "does" happen, and it's better that everybody has access to models with the same capabilities versus the privileged few. "We", lmao.

Ah, yes, of course, of course. I'm sure they're very good at lobotomizing models. Maybe somebody should tell the signatories their precious guardrails are being lowered for the people most likely to instigate a nuclear war, but I'm sure they're too busy chasing down people generating smut to care.
Its not relevant at all.

The moment you use that bit of information you do have that others do not, you've exposed it and your advantage is gone.

Similar things apply to the so called distasters that AI might create because it'll circumvent current day security or what not.
It will only work once, maybe twice.

Humans are very good at fixing an issue after it explodes the first time.
 
The bottom line here isn't that Apple is taking a step back, its that they haven't got anything that's a USP for their brand.
That’s exactly it. They thought that Apple Intelligence (fuck off with that resulting acronym BTW, it feels like when all 3DS games wanted to try and shove in 3D into the title somehow) would be a killer feature and help them boost somewhat flagging iPhone sales (they were still great, but slowed down quite a bit). That didn’t happen, the average consumer fascination with AI cooled down quickly when it turned to be just not that useful for them and a lot of its touted features were really hard to distinguish compared to what normal “non-AI” voice assistants like Siri were already capable of for years now. So Apple is shifting gears, but still covering their rear to not seem defeatist. Nobody at Apple actually unironically believes that including a local LLM on a fucking smartphone is some sort of incredibly controversial potentially dangerous feature. They just didn’t get the market response they were hoping for. And not surprisingly - the average pleb hears “AI on your device wherever you go” and immigrants imagines essentially an AI girlfriend, Blade Runner style, something actually cool, changing the way they use the device. When the reality hits they realize it’s a gimmick at best.
 
Its not relevant at all.

The moment you use that bit of information you do have that others do not, you've exposed it and your advantage is gone.

Similar things apply to the so called distasters that AI might create because it'll circumvent current day security or what not.
It will only work once, maybe twice.

Humans are very good at fixing an issue after it explodes the first time.
"Security via obscurity" isn't security, it's an attack surface. There's already a good idea of what to do with a real AI (so not an inference engine) and they're generally the same things you would do with an insider threat. I don't really know what you're trying to get at either way.

That’s exactly it. They thought that Apple Intelligence (fuck off with that resulting acronym BTW, it feels like when all 3DS games wanted to try and shove in 3D into the title somehow) would be a killer feature and help them boost somewhat flagging iPhone sales (they were still great, but slowed down quite a bit). That didn’t happen, the average consumer fascination with AI cooled down quickly when it turned to be just not that useful for them and a lot of its touted features were really hard to distinguish compared to what normal “non-AI” voice assistants like Siri were already capable of for years now. So Apple is shifting gears, but still covering their rear to not seem defeatist. Nobody at Apple actually unironically believes that including a local LLM on a fucking smartphone is some sort of incredibly controversial potentially dangerous feature. They just didn’t get the market response they were hoping for. And not surprisingly - the average pleb hears “AI on your device wherever you go” and immigrants imagines essentially an AI girlfriend, Blade Runner style, something actually cool, changing the way they use the device. When the reality hits they realize it’s a gimmick at best.
Ironically companionship is one of the most valid usecases for AI.
 
The millions are right. It not just "would" happen, it "does" happen, and it's better that everybody has access to models with the same capabilities versus the privileged few. "We", lmao.
What on earth? I don't think you even understood what I wrote but yeah, safe to say you have no idea what you are talking about here. We don't need Nazi Microsoft Tay back... and that wasn't user error so much as AI error because it treated all sources as equal, an invalid premise.

I am not commenting on the none-open nature of models at all because that's a completely seperate issue.

This thread has really lost its way.
 
Last edited:
Status
Not open for further replies.
Back
Top