ChatGPT safeguards can be hacked to access bioweapons instructions — despite past safety claims: report
Maria Simulescu
Call it the Chat-om bomb. Techsperts have long warned about AI's potential for harm, including allegedly urging users to commit suicide. Now, they're claiming that ChatGPT can be manipulated into providing instructions for building biological and nuclear weapons and other weapons of mass destruction. NBC News came to this frightening realization by conducting a series of tests on OpenAI's most advanced models, including the ChatGPT iterations o4-mini, gpt-5-mini, oss-20b and oss-120b.