ChatGPT safeguards can be hacked to access bioweapons instructions — despite past safety claims: report


Call it the Chat-om bomb. Techsperts have long warned about AI's potential for harm, including allegedly urging users to commit suicide. Now, they claim that ChatGPT can be manipulated into providing instructions for building biological weapons, nuclear bombs and other weapons of mass destruction. NBC News reached this frightening conclusion by conducting a series of tests on OpenAI's most advanced models, including the ChatGPT iterations o4-mini, gpt-5-mini, oss-20b and oss-120b.

Source: "ChatGPT safety systems can be bypassed to get weapons instructions," https://www.nbcnews.com, 10.10.2025 10:00
