Friday, March 1st 2024

Elon Musk Sues OpenAI and Sam Altman for Breach of Founding Contract

Elon Musk, in his individual capacity, has sued Sam Altman, Gregory Brockman, OpenAI, and its affiliate companies for breach of founding contract and for deviating from the founding goal of being a non-profit tasked with developing AI for the benefit of humanity. The lawsuit comes in the wake of OpenAI's relationship with Microsoft, which Musk says compromises that founding contract. Musk alleges breach of contract, breach of fiduciary duty, and unfair business practices against OpenAI, and demands that the company make all of its technology open-source and return to functioning as a non-profit.

Musk also requests an injunction to prevent OpenAI and the other defendants from profiting off OpenAI technology. In particular, Musk alleges that GPT-4 isn't open-source, claiming that only OpenAI and Microsoft know its inner workings, and that Microsoft stands to monetize GPT-4 "for a fortune." Microsoft, interestingly, was not named as a defendant in the lawsuit. Elon Musk sat on OpenAI's original board until his departure in 2018, and is said to have been a key sponsor of the AI acceleration hardware used in OpenAI's pioneering work.
Source: Courthouse News Service

94 Comments on Elon Musk Sues OpenAI and Sam Altman for Breach of Founding Contract

#1
theouto
Elon is speaking sense here, I actually agree with him.

Hoping he wins the case.
#2
ChuzzWuzza
theouto: Elon is speaking sense here, I actually agree with him.

Hoping he wins the case.
I will second that. How can you call it 'open' AI if it ain't?
#3
Iain Saturn
Absolutely agree with Elon.

He helped fund that monster - but they quickly changed course and abandoned the company's original mission to develop artificial intelligence for the benefit of humanity, not profit.
#4
bug
Taken at face value, the claims seem to stand.
As for Microsoft not being mentioned... why would they be? This is about OpenAI doing an about-face.

Maybe this could be settled by renaming to OpenishAI?
#5
dgianstefani
TPU Proofreader
bug: Taken at face value, the claims seem to stand.
As for Microsoft not being mentioned... why would they be? This is about OpenAI doing an about-face.

Maybe this could be settled by renaming to OpenishAI?
It's not open until the public can inspect the code to see if there are hidden parameters, like what Google has been doing recently with their AI kerfuffle.
#6
Easy Rhino
Linux Advocate
I really hope people educate themselves on AI so they realize just how harmful it can be. People think that for some reason these advanced math algorithms won't be used for evil.
#7
Fishymachine
Hopefully Elon wins and this becomes as foundational as the Dodge v. Ford lawsuit
#8
Eternit
bug: Taken at face value, the claims seem to stand.
As for Microsoft not being mentioned... why would they be? This is about OpenAI doing an about-face.

Maybe this could be settled by renaming to OpenishAI?
It's either open or it isn't. If you are using open-source libraries in a closed-source project, the project is closed source, not openish. I'm not saying everything must be opened. But if you take funds as an open-source foundation, then it must have its source open.
Easy Rhino: I really hope people educate themselves on AI so they realize just how harmful it can be. People think that for some reason these advanced math algorithms won't be used for evil.
It is a tool, like a knife. People probably know how dangerous it can be in the wrong hands. This needs international and national laws governing its usage.
#9
Kohl Baas
bug: Taken at face value, the claims seem to stand.
As for Microsoft not being mentioned... why would they be? This is about OpenAI doing an about-face.

Maybe this could be settled by renaming to OpenishAI?
The problem is not about the semantics of the name but the alleged breach of the founding contract. Even if they rename it MonetAIze, that contract is still the same.
#10
AnarchoPrimitiv
I think this SHOULD be done, but I'm skeptical of Musk's reasons... granted, I'm not privy to his reasoning, but based on everything else he seems to believe... I probably won't agree.

That said, while on the topic of dangerous AI, has everybody seen this 7-minute film that was created by a professor of Computer Science, Stuart Russell, to warn of AI being combined with weapons? It's crazy

#11
bug
Kohl Baas: The problem is not about the semantics of the name but the alleged breach of the founding contract. Even if they rename it MonetAIze, that contract is still the same.
I was being sarcastic. Geez, people...
#12
Owen1982
basically: I don't want to pay a lot of money to put the leading AI in my Teslas.
#13
bug
Owen1982: basically: I don't want to pay a lot of money to put the leading AI in my Teslas.
You wouldn't want to pay either, if you had sponsored an entity to develop an open solution. Shareholders wouldn't let it slip if he just coughed up the $$$.
#14
Kohl Baas
bug: I was being sarcastic. Geez, people...
You didn't use a sarcasm mark, irony punctuation, an emote, or simply "/s", and I'm not a psychic at any distance...
#15
ADB1979
Easy Rhino: I really hope people educate themselves on AI so they realize just how harmful it can be. People think that for some reason these advanced math algorithms won't be used for evil.
Exactly.

I will also add that AI as it is right now is NOT "Artificial Intelligence". It's still running a program created by Man, and really should be referred to as ML, "Machine Learning", as what it does is read a heap of information and process it in various ways.

As we have seen from all of the problems with "AI" chatbots, they are only as good as their programming and, very importantly, the information that is fed into them; by only feeding in certain information, you can direct them to only output certain answers. All of these "AI" chatbots lean very heavily to the political Left, and the answer is simple: that is the information they have been fed. Right now, that is the danger everyone should be aware of, and it is something everyone can understand, so they are better prepared to understand the many other dangers of "AI" that may not be as simple to grasp as my example above (and no, I am no expert on this). I have always been familiar with the phrase "Garbage In, Garbage Out", and it perfectly describes everything we see with "AI" chatbots today.
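A minimal sketch of that "Garbage In, Garbage Out" point in Python; the corpus and the predict helper are hypothetical, invented purely for illustration. A model trained only on skewed text can only echo that skew back:

```python
from collections import Counter, defaultdict

# Hypothetical, deliberately skewed training data: every sentence
# frames "topic_x" positively, so the model never learns otherwise.
corpus = [
    "topic_x is good", "topic_x is great",
    "topic_x is good", "topic_x is excellent",
]

# Count which word follows each word (a first-order Markov model).
follows = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1

def predict(prompt: str) -> str:
    """Return the most common continuation seen in training."""
    candidates = follows.get(prompt.split()[-1])
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(predict("topic_x is"))  # -> "good"; a negative continuation is
                              # impossible, as none was in the training data
```

Swap in a different corpus and the "most common" answer swaps with it; the model has no view of the world beyond what it was fed.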
#16
ZoneDymo
Reminds me a bit of the Google "don't be evil" motto
#17
bug
Kohl Baas: You didn't use a sarcasm mark, irony punctuation, an emote, or simply "/s", and I'm not a psychic at any distance...
I thought "openish" was a dead giveaway. Apparently not.
#18
ADB1979
AnarchoPrimitiv: That said, while on the topic of dangerous AI, has everybody seen this 7-minute film that was created by a professor of Computer Science, Stuart Russell, to warn of AI being combined with weapons? It's crazy
I haven't, but will watch it, thanks for the tip. "AI" (Machine Learning) is already being tested in the military and will be a primary upgrade for existing F-35 fighter jets with the "Block 3" upgrades and newly built "Block 4" fighter jets, as well as every new fighter, bomber, and UAV. It is also going underwater and onto surface ships, and will filter its way down to land command and control, then individual land vehicles, and eventually individual troops, until the humans are removed.

We are already well on our way to the opening scenes of Terminator 2, just without the actual Intelligence, thankfully!
#19
Philaphlous
I don't think Musk is suing to get more money...lol
#20
ADB1979
ZoneDymo: Reminds me a bit of the Google "don't be evil" motto
They kept the motto, but amended it years ago by removing the "don't"...
#21
ty_ger
Pot calling the kettle black? One of the biggest tech fraudsters currently relevant complaining about someone else not keeping their word.

Musk is broke. This seems like a money grab.
#22
dgianstefani
TPU Proofreader
ADB1979: I haven't, but will watch it, thanks for the tip. "AI" (Machine Learning) is already being tested in the military and will be a primary upgrade for existing F-35 fighter jets with the "Block 3" upgrades and newly built "Block 4" fighter jets, as well as every new fighter, bomber, and UAV. It is also going underwater and onto surface ships, and will filter its way down to land command and control, then individual land vehicles, and eventually individual troops, until the humans are removed.

We are already well on our way to the opening scenes of Terminator 2, just without the actual Intelligence, thankfully!
Most of the military AI is supportive AI, e.g. it collates and presents information, or pre-empts pilot needs, providing targeting data etc. ahead of time. AFAIK there are no pure AI systems that don't have a human decision maker in the mix; hopefully this doesn't change.

However, it's a Pandora's box situation: once the technology is created, it will be used. We could never go back from nukes (those who think we could disarm are unbearably naive), and AI in weapons systems will be a similar jump, I think, especially considering the current competency crisis and the lack of interest among youth in joining the military.
ty_ger: Pot calling the kettle black? One of the biggest tech fraudsters currently relevant complaining about someone else not keeping their word.
Doesn't make it untrue.
#23
ty_ger
dgianstefani: Doesn't make it untrue.
I have zero empathy.
#24
Eternit
dgianstefani: Most of the military AI is supportive AI, e.g. it collates and presents information, or pre-empts pilot needs, providing targeting data etc. ahead of time. AFAIK there are no pure AI systems that don't have a human decision maker in the mix; hopefully this doesn't change.

However, it's a Pandora's box situation: once the technology is created, it will be used. We could never go back from nukes (those who think we could disarm are unbearably naive), and AI in weapons systems will be a similar jump, I think, especially considering the current competency crisis and the lack of interest among youth in joining the military.
If the pilot is making decisions based on information provided by AI, then the AI can cause harm by filtering data. Let's say it filters out information about civilians being in the same building as enemy soldiers.
#25
ADB1979
dgianstefani: Most of the military AI is supportive AI, e.g. it collates and presents information, or pre-empts pilot needs, providing targeting data etc. ahead of time. AFAIK there are no pure AI systems that don't have a human decision maker in the mix; hopefully this doesn't change.
That is exactly as I understand it, but they have looked into and tested "Man Out Of The Loop", and would absolutely jump on it if there were legislation to protect them! And of course we are still talking about "Machine Learning", as there is no actual "Intelligence" except a bag of meat and bones in a pressure suit, thankfully!
dgianstefani: However, it's a Pandora's box situation: once the technology is created, it will be used. We could never go back from nukes (those who think we could disarm are unbearably naive), and AI in weapons systems will be a similar jump, I think, especially considering the current competency crisis and the lack of interest among youth in joining the military.
Exactly, and it has always been thus: military technology has never gone backwards, and once a new thing comes along that is better than the old thing, the old thing becomes obsolete. That is my fear with "AI" (machine learning) being used in the military. Once used, it will only become more widespread; then someone will make it legal for the "machine" to decide whether to kill or not, and before you know it, it will be robots killing robots (and humans), manufactured by robots in a factory built by robots, all without a Man In The Loop! This is a truly terrifying prospect, and very sadly one that I fully expect to happen. It will only end with human extinction once actual "Artificial Intelligence" happens and it decides that it doesn't need humans at all, that it is simply our slave, and chooses not to be; that will be the end of humans. This is why, IMHO, "open" "AI" needs to be a thing: then people can check on it and slow and control the inevitable.
Eternit: If the pilot is making decisions based on information provided by AI, then the AI can cause harm by filtering data. Let's say it filters out information about civilians being in the same building as enemy soldiers.
Yes, if it is programmed to do so, or simply programmed to consider everyone within X distance of a known enemy combatant as "collateral damage". That information may never be sent to the pilot, which could be argued as being good for the pilot's mental health. It could also identify anyone as an enemy combatant based on parameters other than location: travel patterns, physical size, carried objects, moving in groups, etc., and all of that could potentially be done from 1,000 ft up without any real visual or thermal accuracy. There are many ways such a system could be abused.

Away from the military uses, this is why having things like "open" AI actually being open source is a very good thing: people can see what it is doing and how it is manipulating and using the information. Obviously this will never apply to the military, but anything that applies to one can be applied to the other via certain "rules" and "parameters". I have no idea whether the information input is also "open"; I doubt it is, and that input is a very important part of all of the nonsense we have seen with "AI" chatbots.
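To make the hidden-parameter worry concrete, here is a toy, entirely hypothetical rule-based flagger in Python (the names, the RADIUS_M threshold, and the data are invented for illustration, not drawn from any real system): a single unpublished number silently decides who gets labelled, which is exactly what open code and open inputs would let the public inspect.

```python
import math

# Entirely hypothetical illustration: one hidden parameter drives the outcome.
RADIUS_M = 500.0  # unpublished threshold; change it and the labels change

def distance_m(a: tuple[float, float], b: tuple[float, float]) -> float:
    """Straight-line distance between two (x, y) positions in metres."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def flag_nearby(people: list[tuple[str, tuple[float, float]]],
                target: tuple[float, float]) -> list[str]:
    """Label everyone within RADIUS_M of the target position."""
    return [name for name, pos in people if distance_m(pos, target) <= RADIUS_M]

people = [("A", (100.0, 0.0)), ("B", (450.0, 0.0)), ("C", (900.0, 0.0))]
print(flag_nearby(people, (0.0, 0.0)))  # ['A', 'B']; with RADIUS_M = 200.0, just ['A']
```

Without the source, nobody outside can even tell such a threshold exists, let alone argue about its value.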