OpenAI recently conducted a study on whether GPT-4 could meaningfully aid in creating a bioweapon and found that the model provides at most a slight uplift to someone attempting to produce a biological threat. The assessment involved 100 participants, 50 biology experts with PhDs and 50 university students, divided into control and treatment groups to evaluate GPT-4's impact on formulating bioweapon plans: both groups had internet access, while the treatment group could also use GPT-4. The study found that GPT-4 produced a mild uplift in the accuracy and completeness of responses for both experts and students, but the effect was not large enough to be statistically significant. OpenAI emphasizes that access to information alone is insufficient to create a biological threat, and that more research is needed to fully understand the implications of AI in this domain.
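To make the statistical claim concrete, the sketch below shows how a control-versus-treatment uplift can be checked for significance with a two-sample test. The scores, scale, and choice of Welch's t-test here are purely illustrative assumptions, not OpenAI's actual data or methodology.

```python
# Illustrative only: hypothetical scores on a 0-10 scale, not OpenAI's data.
from scipy import stats

control_scores = [5.1, 4.8, 6.0, 5.5, 4.9, 5.3, 5.7, 5.0]      # internet only
treatment_scores = [5.6, 5.2, 6.3, 5.9, 5.4, 5.8, 6.1, 5.5]    # internet + GPT-4

# Welch's t-test: does not assume equal variance between the two groups.
result = stats.ttest_ind(treatment_scores, control_scores, equal_var=False)

uplift = (sum(treatment_scores) / len(treatment_scores)
          - sum(control_scores) / len(control_scores))
print(f"Mean uplift: {uplift:.2f} points, p-value: {result.pvalue:.3f}")

# A small positive uplift can still fail to reach significance (p >= 0.05)
# when the effect is modest relative to the sample size and score variance.
```

This is the general pattern behind "a mild uplift that is not statistically significant": the treatment group's mean can be higher, yet the difference may not be distinguishable from noise given the number of participants and the spread of their scores.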