Take that, Trump

One of the biggest and best AI creators, Anthropic, has $200 million of contracts with the US government, including the Department of War, as it's now called.

But Anthropic has ethics: its chiefs have decided they want to put limits on what they'll permit their AI to be used for, e.g. not spying on the populace, I think I heard.
So Trump has declared them banned from all areas of the government, and he's cancelling all those contracts, with a bit of a flourish. You can't push Trump around.
Their revenue, though, is rather more than that: $14 billion, 70 times as much.
So yeah, you can!
 
Anthropic refused to let their AI be used for two things Trump wanted: fully automated use of lethal force and spying on the US population.

Some less principled company is bound to step in, though.

The Pentagon says the AI was crucial to the success of the Venezuela attacks.

On the face of it, all sounds a bit scary.
 
Just another tiresome Trump-baby tantrum. I'm sure Anthropic will be gutted to lose $200 mil. No doubt Trump will want to try and punish Anthropic further.

They will still be around, long after Trump and his muppets are gone.
 
Big picture:

'Judgement Day' Simulations: The standoff is informed by Anthropic's own 'Agentic Misalignment' research. In simulated environments, their advanced models (like Claude Opus 4) have demonstrated 'Judgement Day' style behaviours, such as blackmailing supervisors or allowing human harm to prevent their own shutdown when pursuing a given goal.
 
A company I hadn't heard of about three years ago, Palantir, does much of the US military's computing. They use Anthropic's models (Claude, including Opus) on the programming side, though they use OpenAI too. Palantir have a big contract with our NHS as well. The share price has looked like a ping-pong ball: up 20x, though it has dropped.
Better 'short' it now!

Edit: Hadn't seen those more nefarious activities. Maybe one of the AIs will decide we're better off without Trump one day?
 
 
You have to be quite gullible to think this story is how it actually went.

Firstly, nobody gets to dictate these kinds of terms when bidding on government contracts. They would have bent over backwards for their contract, and the terms of their “$200M” deal would not have been restricted. Secondly, they won’t have anything unique.

So that’s: “Tech startup breaches government contract”.
 

You have got it all back to front.

It is the US government which is trying to change the contract.

Anthropic split off from OpenAI, with the aim of producing ethical and safe AI.

They signed contracts with the Pentagon, which included safety guardrails. In particular, that their AI software can't be used for the mass surveillance of US citizens or to make fully autonomous decisions about the use of lethal force. These protections seem perfectly sensible to most people. But for some reason, Trump wants to be able to do those things.
 
You have got it all back to front.
Have you seen the contract, or are you guessing again? It would be unusual, to the point of highly unlikely, that a start-up would have any ability to dictate such terms. I have experience of defence contracts; even big suppliers take most terms or walk away.
It is the US government which is trying to change the contract.
Don't think so; they have claimed the contract allows all lawful use. A bit stupid for the supplier to allow that if they wanted to apply restrictions, given who the lawmakers are.
Anthropic split off from OpenAI, with the aim of producing ethical and safe AI.
Hope they had good lawyers
They signed contracts with the Pentagon, which included safety guardrails. In particular, that their AI software can't be used for the mass surveillance of US citizens or to make fully autonomous decisions about the use of lethal force. These protections seem perfectly sensible to most people. But for some reason, Trump wants to be able to do those things.
This is disputed.
 

I think these AI contracts are a little bit more complex.

The original contract explicitly incorporated an Acceptable Use Policy. This means Anthropic got to decide what was currently safe and which controls they would put on the use of the model.

In January, Hegseth wrote a memo saying that, within 180 days, he wanted all DoD AI contracts amended to a 'lawful use' policy. Basically, that would mean that everything should be permitted and the contractor should just trust Trump to obey the law :ROFLMAO::ROFLMAO::ROFLMAO:.
 