News

The world's most advanced AI models are exhibiting troubling new behaviors - lying, scheming, and even threatening their ...
A new study from Anthropic suggests that large AI models can sometimes behave like disloyal employees, raising real security concerns even if their actions aren't intentional. Anthropic tested 16 ...
New Anthropic AI Models Demonstrate Coding Prowess, Behavior Risks. By John K. Waters; 06/02/25; Anthropic has released Claude Opus 4 and Claude Sonnet 4, its most advanced artificial intelligence ...
Anthropic has now introduced Claude Gov, a new suite of large language models designed specifically for use by U.S. government defence and intelligence agencies. The AI models are built to handle ...
We know AI bots as customer service agents, even as assistants in the legal world, but now there is also a spy chatbot. Anthropic has created a kind of James Bond version of Claude that ...
As AI conquers every human-driven endeavor at a breathless pace, from bombing military targets to teaching kids, we gotta ask ...
The authors allege Megatron-LM was trained on 200,000 pirated books, allowing it to mimic the style and themes of their ...
OpenAI lands a $200M U.S. defence deal to build non-lethal AI tools, signalling a new era of tech-military partnerships in ...
Artificial intelligence wins two lawsuits against authors over copyright.
Tech leaders urge startups to dig in their heels ...