Asked by Senator Jacky Rosen (D-Nevada) in a Senate Armed Services Committee hearing about the DoD's dispute with Anthropic, and whether he could guarantee a human would be in the loop on any targeting decisions made with AI, Hegseth focused on Amodei and his company's refusal to "accept our terms of service."
AI
Artificial intelligence is more a part of our lives than ever before. While some might call it hype and compare it to NFTs or 3D TVs, generative AI is causing a sea change in nearly every part of the technology industry. OpenAI’s ChatGPT is still the best-known AI chatbot around, but with Google pushing Gemini, Microsoft building Copilot, and Apple adding its Intelligence to Siri, AI is probably going to be in the spotlight for a very long time. At The Verge, we’re exploring what might be possible with AI — and a lot of the bad stuff AI does, too.

Jared Birchall, Musk’s money manager, answered a question he wasn’t supposed to.

Emails going as far back as 2015 give a glimpse into the foundations of OpenAI and the early tensions at the company.
Latest In AI
Among the evidence released publicly, there's this email exchange (Exhibit No. 844) between Valve founder Gabe Newell and Elon Musk about, of all things, trying to get a SpaceX tour and OpenAI introduction for Hideo Kojima.
Musk also wrote that he'd "lost confidence in OpenAI competing with Google/DeepMind, and decided to attempt that through Tesla instead," while pumping up Neuralink's progress. Newell has since launched his own BCI company, Starfish.
After Birchall said he had no first-hand knowledge of the xAI bid for OpenAI’s assets, OpenAI’s lawyer asked that his testimony from the direct examination be struck. We are going to hear about that now, outside the presence of the jury.
He sees the OpenAI for-profit term sheet and writes to Shivon Zilis: “Pretty plain vanilla for-profit structure. So kinda hard to push a narrative that doesn’t involve investors being very focused on ROI. I’m a super fan of capitalism and making tons of money doing great things but not sure if this correlates with the ‘noble cause for humanity, not doing it to make money’ narrative. Did he/would he [Altman, lower on the email chain] offer E a board seat?”
We saw Chris Clark’s email about pausing donations yesterday during Musk’s testimony. Today, we see an email from Birchall: “This was ready to go out when I was told that Elon informed Greg and Ilya that the funding would be on pause until they came to terms on the right path moving forward.”
This was while there were discussions of how to put together the for-profit structure Musk wanted.
And we are talking with Birchall about tax deductions for charitable giving.
Users who enroll in the startup’s advanced account security settings can sign into their ChatGPT and Codex accounts using passkeys or physical security keys, and will also receive alerts about new logins to their accounts. OpenAI also automatically excludes users with the setting enabled from AI model training.
OpenAI and Musk’s counsel need to discuss something... Back in 15.
He was used, I think, to get financial documents into the record. We are now on the cross, and he is giving mercifully brief and direct answers.
There is a confirmation document from Chris Clark to Elon Musk showing the donation values. The donation was to OpenAI, as agreed in the stipulated facts, and in conflict with what Musk testified today.
We’re looking at a summary of about 60 donations to OpenAI, which Birchall says were directed by Musk, with Birchall helping execute all of them.
I guess she isn’t into Birchall’s testimony about Musk’s charitable contributions?
Jared Birchall’s testimony will begin. Birchall runs Musk’s family office, Excession LLC, and generally serves as his fixer.
Savitt asked about Musk’s $1 billion funding commitment. When did Musk stop funding OpenAI? 2020. And that was when they broke the deal? No, I was uncomfortable. (some crosstalk)
Musk: “I understand leading questions. That’s a leading answer.”
YGR: He can lead. He can lead all he wants. Let’s remind everyone you are not a lawyer and you’ve never taken a class in evidence.
Musk: “I did take law 101 technically, but yes I am not a lawyer.”
Musk’s back from break, reiterating that he had reason for waiting as long as he did to file suit against OpenAI — and saying his initial understanding of OpenAI’s agreement with Microsoft was that it didn’t violate the mission of the charity. “I don’t think I had a basis for filing a lawsuit before I did,” Musk says. He also refers to xAI as the smallest of the AI players, coming after Anthropic, OpenAI, Google, and Chinese AI models.
In the document of stipulated facts — that is, what everyone has agreed on — it’s said that Musk gave Teslas to OpenAI as an in-kind contribution. In response to questioning from YGR, Musk says that he gave the Teslas to individuals, personally, and not to OpenAI: “I bought at full price and gave them to individuals. It was a reward to the individuals.”
I don’t know if this matters, but it sure is interesting.
Apparently he wasn't 100 percent confident in yesterday's clarification, because Molo asks Musk to clarify whether the "AI enabled robot army" mentioned in cross-examination is a military army. "No, we do not make any weapons," Musk says. The point of his using the term was that "if we made a lot of robots we need to make sure they're safe and don't turn into a Terminator situation … you see in the movie, it's not a good situation."
Judge Gonzalez Rogers asks Musk to sum up the plot of Terminator in one sentence. “Worst case situation is AI kills us all I suppose,” he says.
With that, the jury leaves for a break.
Claude Security uses the Opus 4.7 model to scan a business’s codebase for vulnerabilities and issue a fix. This tool is rolling out to enterprise customers globally and isn’t to be confused with Anthropic’s Mythos, a powerful AI model that can identify and exploit vulnerabilities across operating systems and web browsers.


Under questioning from Molo, his own lawyer, Musk tries to establish that he wasn't causing harm to OpenAI. He says that, as far as he knows, OpenAI was never unable to cover critical expenses because he ended his donations. He didn't ask Andrej Karpathy to leave and join Tesla; he only hired him after Karpathy said he was leaving OpenAI. Neuralink (though it was apparently authorized to do so) didn't poach anyone from OpenAI, as far as he knows. Did he seriously recruit anyone from OpenAI for Tesla besides Karpathy? "I don't think so." He reiterates that Tesla isn't currently working on AGI, despite a recent tweet indicating it would achieve it.
Musk also repeats that he “did not read the fine print” on the term sheet for OpenAI’s for-profit wing. Molo brings up an email from Altman (forwarded to Musk by Zilis) about the draft that reads: “We did this in a way where all investors are clear they should never expect a profit, see purple box below.” On the stand, Musk says “I assumed he meant what he said.”
On the cross exams by OpenAI and Microsoft, there was minimal (though still some) bickering, and we are now getting many more yeses and nos as full answers. I’m not sure whether Musk was trying to run out the clock yesterday or what, but he’s clearly rethought his strategy.
In the final section of cross-examination, Musk is asked about speaking with Altman in 2020. Musk apparently told Altman that OpenAI looked “hypocritical” after the deal with Microsoft and suggested he change the name of OpenAI. “He reassured me they were staying on mission,” Musk says on the stand — and therefore, Musk didn’t sue. Following that, cross-examination wraps up.
Introduced last year, the GUARD Act would ban kids under 18 from accessing chatbots, while implementing age checks for everyone else. The Senate Judiciary Committee unanimously voted to advance the bill on Thursday, and now it’s headed to the Senate floor.
Savitt mentions an X post where Musk says “the future is going to be amazing with AI and robots enabling sustainable abundance for all” and asks if he thinks it’s accurate. “Well, I’ve also said there are many possible futures. Some futures are good, and some are not good,” Musk says. “I think it’s generally better to err on the side of optimism than pessimism.” Musk agrees that “aspirationally,” he promotes xAI with the message that the future is going to be amazing.
Savitt then goes through a list of Musk’s companies — asking, one after another, if they’re for-profit. At a little prodding from Judge Gonzalez Rogers, Musk admits they all are. So, Savitt asks, they’re socially beneficial and for-profit? Musk agrees. Savitt then points out that Musk hasn’t started any nonprofits himself since OpenAI, despite having the money to do so. “Well, I thought I had started a nonprofit with OpenAI, but they stole the charity,” Musk says.
Savitt is bringing up some previously released email exchanges in which Musk appeared okay with discussing what OpenAI would and wouldn't make open source, including one where he replied "yup" to a comment that it would make sense to start being less open as AI advanced. He asks whether Musk has made xAI's advanced versions of Grok open source. "No, but it will," Musk says.
Savitt then mentions a letter Musk signed in 2023 asking to pause development of giant AI models out of safety concerns. Musk signed the letter shortly before he incorporated his own xAI, and Savitt asks why he didn’t disclose that fact; Musk says it was “just an open, non-binding letter” signed by hundreds of other people.


Musk has explained that he didn't initially object to the proposed capped-profit structure at OpenAI (and also didn't review it very closely), and Savitt is asking if he knew what the cap was for Microsoft's investments in the company; Musk doesn't seem clear on it. Savitt asks whether Musk had a lawyer set terms and conditions for his donations. Musk answers: "No, but it was obviously started as a nonprofit, and in the founding charter it says it will not be to the financial benefit of any person?" The apparent gist: Musk didn't set clear terms that he can now point to OpenAI or Altman violating.
Elon Musk is on the stand, to continue the cross-examination from yesterday. If you’ve read Musk depositions or heard previous crosses, this kind of arguing and filibustering is pretty standard behavior. But I think this is the jury’s first encounter with it, and it’s hard to know how they’re going to take it.
We are still dealing with the pretrial motions about the boundaries on safety questions.
We are having an argument about which expert issues are going to be allowed. “We aren’t going to get into issues of catastrophe or extinction,” YGR says. Musk’s lawyers are not happy about this: “We all could die as the result of artificial intelligence.”
YGR has just sat down on the bench. Jury’s not here yet, so we are dealing with some motions and issues.

This crop of smart glasses is the most stylish, affordable, comfortable, and capable yet. They still don’t make sense.


Launched last year, Preferred Sources allows you to customize the outlets you see the most often in Google Search’s “top stories” section. Now, this feature is available in “all supported languages globally,” according to Google.