Brandon Vigliarolo / The Register - Just 250 malicious training documents can poison a 13B-parameter model - that's 0.00016% of the whole dataset. Poisoning AI models might be far easier than previously thought, if an Anthropic study is anything to go on. …
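A quick back-of-the-envelope check shows how the two headline numbers fit together. The 20-tokens-per-parameter training ratio below is an assumed Chinchilla-style estimate, not a figure from the article; the rest follows from the 250-document and 0.00016% figures in the headline.

```python
# Sanity-check the headline: 250 poisoned documents as 0.00016% of the
# training data for a 13B-parameter model.
params = 13e9                     # 13B-parameter model
tokens_per_param = 20             # assumption: Chinchilla-style training ratio
total_tokens = params * tokens_per_param          # ~2.6e11 training tokens

poison_docs = 250
poison_share = 0.00016 / 100      # 0.00016%, from the headline

implied_poison_tokens = poison_share * total_tokens        # ~416,000 tokens
implied_tokens_per_doc = implied_poison_tokens / poison_docs  # ~1,664 tokens

print(f"{implied_poison_tokens:,.0f} poisoned tokens")
print(f"~{implied_tokens_per_doc:,.0f} tokens per malicious document")
```

Under that assumed ratio, the headline fraction works out to a few hundred thousand poisoned tokens, on the order of 1,500-2,000 tokens per malicious document.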
Thursday, October 9, 2025, 4:20 pm
OpenAI Investigates Deceptive Behavior in Chatbot Models / 5 months
ChatGPT safety update introduces parental controls and age prediction / 5 months
OpenAI Unveils GPT-5 Codex Model for Smarter Programming / 5 months
OpenAI launches GPT-5-Codex to revolutionize AI-assisted coding / 5 months
OpenAI Trials ‘Thinking Effort’ Feature for ChatGPT / 6 months
OpenAI rolls out GPT-5 tone, voice, and performance updates / 6 months
GPT-5 free release sparks talk on hidden system prompt / 6 months
NorthFeed Inc.