In an unexpected collaboration between rivals, OpenAI and Anthropic cross-tested each other's AI models, exposing serious jailbreak risks and misuse potential. The joint safety evaluations have raised questions about the robustness of current systems and added fuel to debates over stricter AI oversight.
Thursday, August 28, 2025, 11:21 am
OpenAI, Anthropic Mutual Safety Evaluations in AI Systems / 6 months
Grok Missteps Spark Apology and Investigation on X / 7 months
OpenAI Investigates Deceptive Behavior in Chatbot Models / 5 months
ChatGPT safety update introduces parental controls and age prediction / 5 months
Anthropic limits government use of its classified AI models / 5 months
Anthropic Upgrades Claude: AI Chatbot Now Remembers Past User Chats / 5 months
Anthropic Suffers Notable Outage, Disrupting AI Tools / 5 months
NorthFeed Inc.