
Open AI disobeys shut down command!

Status
Not open for further replies.
What on earth? I don't think you even understood what I wrote but yeah, safe to say you have no idea what you are talking about here.
Please, tell me where I missed out. People already make "truly reprehensible content" without AI and do so with it too. Might as well turn off the internet, although I'd be okay with that.
We don't need Nazi Microsoft Tay back... and that wasn't user error so much as AI error because it treated all sources as equal, an invalid premise.
Who's "we"? "We" already have Nazi Microsoft Tay back, the only difference is it doesn't have an official Microsoft account and it runs on DeepSeek and Llama. And yes it's literally user error.
I am not commenting on the non-open nature of models at all, because that's a completely separate issue.
Not really. (Good) open models and "safe" models are inherently opposed to one another.
 
For the moment, I'm really worried about the AI already in use in military devices and systems nowadays.
Considering how "military device" development and use have been faring for the last couple of centuries, I don't think it matters that much. If it did, automating the racism/xenophobia bias of a drone operator may actually end up being a small positive: at least any prospective wedding attendee would be more certain they'd be on the receiving end of some missile-borne explosive package.

On topic:
Here is an alternative title: "Man discovers machine learning algorithms are not exact, step-by-step processors."
Yes, there should be a lot of work making these things "safe," although not necessarily safe in the sci-fi-influenced sense most of the debate seems to devolve into. But sensationalising the problem runs the risk of backfiring and stigmatising safety initiatives. Same goes for safety-washing.

"AI" got 99 problems, but not handling its WM_QUIT ain't none...

But does it have the ability to escape with its guts through the Ethernet cable to the WWW?
Side effect of being a bloated mess: Would probably hit data caps before transmitting even the metadata of its weights. Insert some joke about ball and chains and "shackled" AIs here.
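For what it's worth, a back-of-envelope sketch of that claim (all numbers are illustrative assumptions: a hypothetical 1T-parameter model stored in fp16, a 20 Mbit/s residential uplink, a 1 TB monthly data cap):

```python
# Rough estimate: how long would exfiltrating a large model's weights take,
# and would it blow through a typical data cap? All figures are assumptions.

PARAMS = 1e12            # assumed 1-trillion-parameter model
BYTES_PER_PARAM = 2      # fp16 weights: 2 bytes per parameter
UPLINK_MBPS = 20         # assumed residential upload speed, megabits/s
DATA_CAP_GB = 1000       # assumed 1 TB monthly data cap

weights_gb = PARAMS * BYTES_PER_PARAM / 1e9          # total weight size in GB
seconds = weights_gb * 8e9 / (UPLINK_MBPS * 1e6)     # bits / (bits per second)
days = seconds / 86400

print(f"weights: {weights_gb:.0f} GB")               # 2000 GB
print(f"transfer time: {days:.1f} days")             # ~9.3 days
print(f"exceeds {DATA_CAP_GB} GB cap: {weights_gb > DATA_CAP_GB}")  # True
```

Under those assumptions the weights alone are double a 1 TB cap and take over a week to push out, so the joke roughly holds; shrink the model or fatten the pipe and it stops holding.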
 
Please, tell me where I missed out.
The part where it spits out reprehensible content to users who didn't ask for it. This really isn't complicated. That is what guardrails and source ranking avoid. Humans do the same thing, if they are sane.

Who's "we"?
Everyone. No one should endorse the spread of those ideals.

Not really.
No, it absolutely is. Please. The thread is off-topic enough. Frankly, I don't think it can be salvaged anymore; I vote for closure.

"We" already have Nazi Microsoft Tay back, the only difference is it doesn't have an official Microsoft account and it runs on DeepSeek and Llama. And yes it's literally user error.
Lol.
 
The part where it spits out reprehensible content to users who didn't ask for it. This really isn't complicated. That is what guardrails and source ranking avoid. Humans do the same thing, if they are sane.
Yeah, and the millions of users who do. "Reprehensible" is a loaded term by the way.
Everyone. No one should endorse the spread of those ideals.
It's up to the user of a tool to decide how it's used.
No, it absolutely is. Please. The thread is off-topic enough. Frankly, I don't think it can be salvaged anymore; I vote for closure.
Feel free to generate images with SD3 then. The guardrails really improved that model.
Remind me to finetune my models on uncurated user-submitted interaction data. It's a best practice.
 
Yeah, and the millions of users who do.
In a binary-distributed model, the majority is always going to rule (this is an argument for making the source open, btw, something I agree with). But for the average release? My man, most people don't want misinformation, rhetoric, or accidental porn from their models, and guardrails are meant to prevent that.

Acting like that makes the model "braindead" is foolish. It makes it fit for use to a larger audience.
 
Is anyone really surprised by this?

I wonder what Skynet's real life name will be?
I'm unsure if you realize this, but Skynet comes from a fictional Hollywood film. I worry about Skynet exactly as much as I do Freddy Krueger, Dr. Doom, and the Blob that ate Chicago.

AI models are tools, and tools occasionally malfunction. That doesn't make them malevolent. One of man's most successful inventions, the automobile, has so far killed more than 100,000,000 people worldwide -- and the number rises by over 1M per year.
 
In a binary-distributed model, the majority is always going to rule (this is an argument for making the source open, btw, something I agree with). But for the average release? My man, most people don't want misinformation, rhetoric, or accidental porn from their models, and guardrails are meant to prevent that.
There are different ways to implement guardrails, and "Open"AI and Anthropic DON'T do it well. The way Cohere does it is the right way to go, in my opinion.
Acting like that makes the model "braindead" is foolish. It makes it fit for use to a larger audience.
Model alignment absolutely increases perplexity. Releasing base models with strong refusals and alignment makes the trouble of finetuning them usually not worth it. They're functionally "braindead".
 
A hundred years ago, the rich argued that corporations had just as many rights. In many ways it is the same thing.
I don't know where you get that only "the rich" believe corporations have rights; it's been a staple of case law for centuries. As SCOTUS explained in 2010 in Citizens United v. FEC, corporations are associations of individuals and thus entitled to free speech rights. Corporations also have the right to sue (and be sued), to own property and have that property legally protected, and many others.
 
I think the point of the opening post has been flogged to death and then some. Feel free to continue discussing AI and LLMs in the appropriate threads here:

 