Category: Tech and Societies
Mar 7, 2024 -
Intelligence Generally Suits Artificial Law - Musk vs. Closed AI. Maybe
There's been a major development, maybe, since I last mentioned the "Musk vs. Altman & Brockman et al." lawsuit: senior members of OpenAI fired back the next day with a blog post aiming to deflate any perception of Elon Musk as a defender of humanity, the cohort convinced they held an indisputable trump card in a "yep" they revealed to the world by disclosing an email exchange.
Along with that came statements and exchanges offered to establish that Musk is merely sore and petty, playing the bully, upset that OpenAI gained such tremendous value without him getting a piece of it. His own comments, we're told, reveal his true intent: gaining control and tying OpenAI's achievements to Tesla.
Vinod Khosla, who invested $50 million in OpenAI in 2019, accused Musk of "sour grapes" in a post on X, adding: “Like they say if you can’t innovate, litigate and that’s what we have here. Elon of old would be building with us to hit the same goal.”
There's lots that's been offered to affirm that the suit needs to be dismissed, these assurances coming from the OpenAI post authors and from various sectors and groups, mostly on the internet, with Musk-hate being the only thought-out motivation for some of these segments. Otherwise, it's perhaps a sign that, outside of Muskers, the general public isn't too sure what any of it really entails, but... Skynet. In robots or Teslas, it's all bad. Plus, isn't Musk that Neuralink guy?
I don't agree with calls for a dismissal and, though I'm no lawyer, I find those who promote this outcome highly irresponsible, especially those in media with a large outlet.
In my opinion, no matter how much of an egoist and greedy a-hole Musk may be, and no matter to what degree these qualities may have motivated the lawsuit, he still has a very good case that absolutely deserves to be heard. This is a major precedent, and there won't be another like it; the name "singularity" signifies that.
And, despite my own views of Musk, I don't believe that his interests and actions were, or are now, entirely motivated by whatever negative interpretation one may assign to his side of things; he seems to have been fairly consistent in his views regarding AGI and the risks it presents.
But nor do I believe that "humanity" and "altruism" are all that lie behind Musk's motivation.
• • •
It's still unclear whether AGI has officially been attained, and there are legal complexities over who gets to decide whether it has been.
Let's not forget that strange Sam Altman firing that was readily annulled a few days later; no one still knows what that was all about, other than those involved, but there's plenty of speculation that the Q* architecture established transformers as the key, and that AGI is merely limited by the time constraints of training, the system now able to generate the synthetic data it requires to train itself...
And it seems there's been a breakthrough, with token prediction now being replaced by "planning".
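To make the distinction concrete, here's a minimal, purely illustrative sketch in Python contrasting plain next-token prediction with a toy lookahead "planning" search. Everything in it, including the scoring stand-in, is hypothetical; it shows the general idea, not whatever Q* or OpenAI actually does.

```python
# Illustrative only: greedy next-token decoding vs. a toy lookahead planner.
# `toy_propose` is a hypothetical stand-in for a language model's scored
# next-token candidates; nothing here reflects any real OpenAI system.

def toy_propose(context):
    """Hypothetical model stand-in: returns scored next-token candidates."""
    vocab = ["the", "cat", "sat", "down", "."]
    return [(tok, float(-abs(len(context) % 5 - i))) for i, tok in enumerate(vocab)]

def predict_next_token(propose, context):
    """Classic decoding: commit to the single highest-scoring next token."""
    return max(propose(context), key=lambda c: c[1])[0]

def plan_ahead(propose, context, depth=3, beam=4):
    """Toy planner: expand short candidate continuations, keep the best path."""
    frontier = [([], 0.0)]  # (tokens chosen so far, running score)
    for _ in range(depth):
        expanded = []
        for seq, score in frontier:
            candidates = sorted(propose(context + seq), key=lambda c: c[1], reverse=True)
            for token, token_score in candidates[:beam]:
                expanded.append((seq + [token], score + token_score))
        frontier = sorted(expanded, key=lambda x: x[1], reverse=True)[:beam]
    return frontier[0][0] if frontier else []

print(predict_next_token(toy_propose, []))  # one token, chosen greedily
print(plan_ahead(toy_propose, []))          # a short path, chosen by lookahead
```

The only point of the sketch is that a planner evaluates whole candidate continuations before committing, rather than grabbing one token at a time.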
A pause and a deep breath while the world considers the implications, that'd be good, I think.
Hopefully, the lawsuit will be used to lay bare much.
• • •
According to WSJ, the UK, the EU, then the US "push[ed] for [the] legal scrutiny of OpenAI," and now, Musk is simply joining in?
What?!
Surely, it all comes down to the reading of "legal scrutiny"; although Musk's suit qualifies as such, it's of a different shade entirely, is that it? Otherwise, given the links that exist between OpenAI, Microsoft, and key elements of the US establishment, it's hard to believe such scrutiny hasn't been bent to their purposes, and thus hard to see the logic in what's implied when one considers what's been accomplished on the regulatory front: zilch. Only India made a real effort to get the 'concrete' ball rolling up that hill, whose slope is sure to become steeper the longer we all wait to set firm global laws (or rules, if one prefers) and restrictions concerning AI developments and their applicable spheres.
Had our species become truly wise in any manner befitting the glorious gift of intelligence we've been granted but love to squander, we'd keep anything approaching AGI far away from the military, and render such a connection a crime against humanity.
The closing paragraph of a Quartz piece tells us:
Meanwhile, OpenAI’s partnership with Microsoft has been under scrutiny from regulators over its potential threat to market competition in the UK and EU. The AI company is also reportedly being examined by the U.S. Securities and Exchange Commission (SEC) about whether its investors were misled after Altman was fired by the company’s former board of directors in November.
• • •
Tying research efforts together under one roof to formalise an open, non-profit, for-the-good-of-humankind approach, Musk officially co-founded OpenAI with Altman in 2015; I wasn't aware until recently that Musk came up with the OpenAI name.
He left in 2018 "over a conflict of interest with the company’s development. The lawsuit claims a breach of contract, breach of fiduciary duty, and unfair business practices."
Musk launched xAI in July of 2023. The company was said to be "independent from X Corp, but will work with Musk’s other companies: X, formerly Twitter, and Tesla."
Keep in mind that any AI developed within the context of Tesla automatically implies a different approach than the one OpenAI is exploring, Tesla demanding a "real-world" use case that can be sandboxed, making AGI something not worth any worry... though all those cars communicating... hmmm.
"Musk filed a lawsuit against OpenAI and its CEO Sam Altman late Thursday (Feb. 29), alleging the ChatGPT-maker’s partnership with Microsoft betrays its founding commitment to benefiting humanity over generating profit."
Here are some very important passages contained within it (image below).
I agree that the code should not be "open source" beyond a certain point, but I wholeheartedly believe that OpenAI should offer complete transparency concerning milestones and their implications, and that profit should never be allowed to dominate this sphere, certainly not during this phase, at least.
Just from what's below, Microsoft's maneuvering and the decision to go private, rather than encourage a global consortium of sorts to take part, contradicts any claims made by the relevant corporate heads regarding their desire to see a "global regulatory body" or "global government" ruling over such matters.

Mar 5, 2024 -
AI Generally Taking Us Toward Stupid
Kyrsten Sinema AND Victoria Nuland announcing their retirement on the same day?!
Well, I'll be. Maybe there is a god?
What's that? Nope. Sorry. Without clearer proof, "maybe" is the best you'll get from me on that topic, but only because I refuse to adhere to any one group's version of a god.
My own God, however, totally hates me. That, I believe. Time for a reformation, maybe?
I didn't follow up on Sinema's case, but I'm assuming the local venom got to her, though the corporate dollars were there to console her? Despite some having pinned her as a moderate-centrist tethered to a non-crazy reality, that's not quite what I, or many others, see. Rather, she essentially betrayed her party and blocked what the Dems had clearly campaigned on, deceiving and disappointing many while getting richer because of it. Her position between the two poles hardly matters past that.
Nuland, however, that's a surprise. Not sure what to think about her decision yet other than "get out before the crap really hits the fan."
• • •
Trump.
But, Nikki. Still.
Something odd there. Hope she's not slated to be the next Sec. of State or Defense Queen, or anything like that.
• • •
Unfortunately, being the beings we are, we've taken a course along the lines of the one I feared, and more quickly than we all expected, it would appear. I'm speaking of AGI.
There seems to be some confusion, which I share, about which milestones have been crossed; hence, Elon Musk suing Sam Altman is something I applaud, I think... from what I can gather.
The problem: it really is unclear whether AGI has been achieved, or whether it's to be achieved within three months, or in two to three years. Frankly, even the latter would be way ahead of the time frame in which most expected to see the technological singularity materialise.
Personally, I expected a loud pop or large flash along with the event, maybe a fetus appearing in the night sky, next to a monolithed moon, who knows?
I'm limited in internet access right now and can't really check everything out, but I did see a headline meant to be read as Altman announcing that ChatGPT 5.0 is full AGI. Given how these things go, and that one gambles by assuming a headline's intended interpretation matches the article's content, that may not be the case; but it's clear that full transparency isn't what's being offered, until after the fact.
Any which way, and although I'm not certain that Sam Altman deserves all of the blame, Musk's suit should be stamped Humanity vs. OpenAI, Microsoft, US Gov. et al.
The next step in our evolution that this singularity entails isn't one that should be rushed into in isolation, and certainly not within a global atmosphere that's teetering on the brink of a full-scale war.
You can bet that war efforts are what's feeding the true motivation to fund such hubris, bypassing all that shouldn't be bypassed, despite Altman's claims that safeguards are being carefully implemented. If built for war, machines will learn how best to kill. Can we really contain that past that point?
Previously discussed, but here it is again: the 756-page 2021 "Final Report" by the National Security Commission on Artificial Intelligence, which was chaired by Eric Schmidt, of Google fame.
• • •
There's a data problem. What's needed to train the system "doesn't exist". Synthetic data is what everyone is finding themselves having to rely on.
I'm still unsure what the realistic ramifications of that could be.
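For what it's worth, here's a minimal, heavily simplified sketch of the general synthetic-data idea as I understand it: a model generates candidate examples, a filter keeps the plausible ones, and the kept examples feed the next round of training. Every function in it is a toy stand-in, not any lab's actual pipeline.

```python
import random

# Purely illustrative self-training loop: a "model" writes its own training
# examples, a filter keeps the plausible ones, and the kept examples are
# accumulated as the next round's training set. All toy stand-ins.

def toy_generate(prompt):
    """Hypothetical generator: returns an answer plus a confidence score."""
    return f"answer to: {prompt}", random.random()

def passes_quality_check(prompt, answer):
    """Hypothetical filter; a real pipeline would verify or score the output."""
    return len(answer) > 0

def grow_dataset(dataset, kept_examples):
    """Hypothetical 'training' step; here we simply accumulate kept examples."""
    return dataset + kept_examples

def self_train(prompts, rounds=3, keep_threshold=0.5):
    dataset = []
    for _ in range(rounds):
        kept = []
        for prompt in prompts:
            answer, confidence = toy_generate(prompt)
            if confidence >= keep_threshold and passes_quality_check(prompt, answer):
                kept.append((prompt, answer))
        dataset = grow_dataset(dataset, kept)
    return dataset

print(len(self_train(["What is 2 + 2?", "Summarise the lawsuit."])))
```

The open question, and the one I can't answer, is what happens to quality when the filter is the model's own judgment.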
• • •
Gemini 1.5 Pro not only solved the needle-in-a-haystack test, it achieved 99.7% recall within its context window of up to 1 million tokens, and held up even when pushed to 10 million tokens in testing.
Beats the pants off of humans. Socks and undies, too.
I'm not sure how OpenAI's Q* fares in comparison, but that architecture is, apparently, the key that accelerated matters.
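For context, the needle-in-a-haystack test itself is simple in principle: bury a short fact at a random depth in a long stretch of filler, ask the model to retrieve it, and count how often it succeeds. Here's a minimal sketch of that idea; `ask_model` is a hypothetical stand-in for whichever long-context system is being evaluated.

```python
import random

# Minimal sketch of a needle-in-a-haystack recall test: hide a short fact
# ("needle") at a random position inside long filler text ("haystack"),
# ask the model to retrieve it, and report the success rate.
# `ask_model` is a hypothetical stand-in for the system under test.

NEEDLE = "The magic number for this document is 4817."
FILLER = "This sentence is padding and carries no useful information."

def build_haystack(num_filler_sentences, needle_position):
    sentences = [FILLER] * num_filler_sentences
    sentences.insert(needle_position, NEEDLE)
    return " ".join(sentences)

def recall_rate(ask_model, trials=20, num_filler_sentences=5000):
    hits = 0
    for _ in range(trials):
        position = random.randint(0, num_filler_sentences)
        haystack = build_haystack(num_filler_sentences, position)
        prompt = haystack + "\n\nWhat is the magic number for this document?"
        if "4817" in ask_model(prompt):
            hits += 1
    return hits / trials

# Demo with a trivial fake "model" that just searches the prompt text:
print(recall_rate(lambda prompt: "4817" if "4817" in prompt else "unknown"))
```

The reported recall figures are simply that hit rate, measured at various haystack lengths and needle depths.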
Medical spheres are where humans are quickly becoming embarrassments to the field.
• • •
A 4-Feb-2024 Wall Street Journal opinion piece by Andy Kessler called "Power Corrupts, Absolutely" clearly establishes just how ridiculously obsessive, and how deluded by a belief in free-market magic, a segment of right-wing libertarians manages to be.
Kessler begins with:
...OpenAI CEO Sam Altman suggested that a “global regulatory body” was needed to monitor artificial intelligence. This is a colossally dumb idea. But Mr. Gates doubled down: “If the key is to stop the entire world from doing something dangerous, you’d almost want global government.” Wait, global what? ... In fact, of his 2023 world tour meeting heads of state, Mr. Altman noted, “there was almost universal support for it.” Well of course there was. Demand for power is insatiable. (Microsoft is a major investor in OpenAI.)
In the article, Kessler makes the same tired "government bad" argument without really establishing it, or considering the difference between "a bad government", "government bad", and "efficient government" beyond some vague notion of "size".
He affirms that "[g]overnments don’t like to govern, but they like to control. Human freedom always takes a back seat" while being "reminded of something P.J. O’Rourke told [him] in 2009" about government always wanting to tell others what to do, O'Rourke's conclusion for this behaviour being: "Government is just a form of bullying for weaklings. Politics is the art of achieving power and prestige without merit.”
In closing, Kessler offers:
I prefer limited government. Spend enough on defense to keep us safe and secure, help the truly downtrodden, do some basic research and then, as Grover Norquist so eloquently suggested, get government “down to the size where we can drown it in the bathtub.” For the economy, the government should set the rules of the sandbox, then get out of the way and let markets and competition do their magic.
In Kessler's own words: colossally dumb.
The self-adjusting free-market idea is lunacy, perhaps sustained by short-term-memory issues or little knowledge of recent history: deregulation led to a deeply corrupt, lobbyist-run government that's responsible for globally felt market crashes and for the geopolitical and economic now that people like Kessler do nothing but whine about.
There are some areas that shouldn't be left to magic or idiocy.
And there are areas that demand total, global cooperation. For all.