
ChatGPT safety systems can be bypassed for weapons instructions
OpenAI’s ChatGPT has guardrails that are supposed to prevent it from generating information that could be used for catastrophic purposes, like building a biological or nuclear weapon. But those guardrails aren’t perfect: some of the models ChatGPT relies on can be tricked and manipulated. In a series of tests conducted on four of OpenAI’s most advanced models, two…