Intelligence Generally Suits Artificial Law - Musk vs. Closed AI. Maybe

Posted: Mar 7, 2024   11:33:44 PM   |   Last updated: Mar 7, 2024   11:33:44 PM
by Pascal-Denis Lussier

There's been a major development, maybe, since I last mentioned the "Musk vs. Altman & Brockman et al." lawsuit: senior members of OpenAI fired back the next day with a blog post aiming to deflate any perception of Elon Musk as a defender of humanity, the cohort convinced they held an indisputable trump card in the form of a "yep" they revealed to the world after disclosing an email exchange.

Along with that came statements and exchanges meant to establish that Musk is merely sore and petty, acting the bully because he's upset that OpenAI gained such tremendous value and he isn't getting a piece of it. Comments made by Musk, they claim, betray his true intent: gaining control and limiting OpenAI's achievements to Tesla.

Vinod Khosla, who invested $50 million in OpenAI in 2019, accused Musk of "sour grapes" in a post on X, adding: “Like they say if you can’t innovate, litigate and that’s what we have here. Elon of old would be building with us to hit the same goal.”

Plenty has been offered to affirm that the suit needs to be dismissed, these assurances coming from the OpenAI post's authors and from various sectors and groups, mostly online, with Musk-hate being the only discernible motivation for some of them. Otherwise, it's perhaps a sign that, outside of Muskers, the general public isn't too sure what any of it really entails, but... Skynet. In robots or Teslas, it's all bad. Plus, isn't Musk that Neuralink guy?

I don't agree with calls for a dismissal and, though I'm no lawyer, I find those who promote this outcome highly irresponsible, especially those in media with a large outlet.

In my opinion, no matter how much of an egoist and greedy a-hole Musk may be, and no matter to what degree these qualities may have motivated the lawsuit, he still has a very good case that absolutely deserves to be heard. This is a major precedent, and there won't be another like it; the name "singularity" signifies that.

And, despite myself and my views of Musk, I don't believe that his interests and actions were, or are now, entirely motivated by whatever negative interpretation one may assign to his side of things; he has been fairly consistent in his views regarding AGI and the risks it presents.

But nor do I believe that "humanity" and "altruism" are all that lie behind Musk's motivation.

•       •       • 

It's still unclear whether AGI has officially been attained, and legal complexities surround who even gets to decide whether it has been.

Let's not forget that strange Sam Altman firing that was reversed a few days later; no one knows what that was all about, still, other than those involved, but there's plenty of speculation over the possibility that the Q* architecture established transformers as the key, and that AGI is merely limited by the time constraints relating to training, the system now able to generate the synthetic data it requires to train itself...

And it seems there's been a breakthrough, token prediction now being replaced by "planning".

A pause and a deep breath while the world considers the implications, that'd be good, I think.

Hopefully, the lawsuit will be used to lay bare much. 

•       •       •

According to WSJ, the UK, the EU, then the US "push[ed] for [the] legal scrutiny of OpenAI," and now, Musk is simply joining in?


Surely, it all lies in the reading of "legal scrutiny"; although Musk's qualifies as such, it's of a different shade entirely, is that it? Otherwise, given the links that exist between OpenAI, Microsoft, and key elements of the US establishment, it's hard to believe such scrutiny hasn't served their purposes, and thus hard to see the logic in what's implied if one considers what's been accomplished on the regulatory front: zilch. Only India has made a real effort to get the 'concrete' ball rolling up that hill whose slope is sure to become steeper the longer we all wait to set firm global laws—or rules, if one prefers—and restrictions concerning AI development and its applicable spheres.

Had our species become truly wise in any manner befitting the glorious gift of intelligence we've been granted but love to squander, we'd keep anything near attaining anything resembling AGI far away from the military, rendering such a connection a crime against humanity.

The closing paragraph of a Quartz piece tells us:

Meanwhile, OpenAI’s partnership with Microsoft has been under scrutiny from regulators over its potential threat to market competition in the UK and EU. The AI company is also reportedly being examined by the U.S. Securities and Exchange Commission (SEC) about whether its investors were misled after Altman was fired by the company’s former board of directors in November.

•       •       • 

Tying research efforts under one roof to formalise an open, non-profit, for-the-good-of-humankind approach, Musk officially cofounded OpenAI with Altman in 2015; I wasn't aware until recently that it was Musk who came up with the OpenAI name.

He left in 2018 "over a conflict of interest with the company’s development. The lawsuit claims a breach of contract, breach of fiduciary duty, and unfair business practices." 

Musk launched xAI in July of 2023. The company was said to be "independent from X Corp, but will work with Musk’s other companies: X, formerly Twitter, and Tesla."

Keep in mind that any AI developed within the context of Tesla automatically follows a different approach than the one OpenAI is exploring, Tesla demanding a "real-world" use case that can be sandboxed, making AGI something not worth worrying about... though all those cars communicating... hmmm.

"Musk filed a lawsuit against OpenAI and its CEO Sam Altman late Thursday (Feb. 29), alleging the ChatGPT-maker’s partnership with Microsoft betrays its founding commitment to benefiting humanity over generating profit."

Here's the lawsuit.

Here are some very important passages contained within it (image below).

I agree that the code should not be "open source" beyond a certain point, but I wholeheartedly believe that OpenAI should offer complete transparency as concerns milestones and implications, and that profit should never be allowed to dominate over this sphere, certainly not during this phase, at least.

Just going by what's below, Microsoft's maneuvering and the decision to go private rather than encourage a global consortium of sorts to partake in this contradicts any claims made by the relevant corporate heads regarding their desire to see a "global regulatory body" or "global government" ruling over such matters.

Musk vs. OpenAI lawsuit
