Power and Promise of AI in Biological Sciences
OpenAI is actively preparing for the growing impact of AI in Biology, recognizing its potential to revolutionize medicine, public health, and environmental science. These models are already aiding researchers in identifying promising drug candidates for clinical trials. Future developments may enable rapid drug discovery, vaccine design, sustainable fuel production, and treatments for rare diseases. However, this transformative capability brings with it critical dual-use risks—while AI can accelerate scientific progress, it could also be exploited to recreate or develop biological threats if misused.
To address these concerns, OpenAI is implementing a comprehensive safety framework to govern the development and use of AI in Biology. The company anticipates that future models will soon reach a “High” level of capability in biology, according to its internal Preparedness Framework. In anticipation of this, OpenAI is focusing on prevention by incorporating early-stage safeguards and rigorous evaluations to ensure that capabilities do not outpace safety measures.
A Multi-Layered Approach to Biosecurity and AI
OpenAI is building a robust system to ensure responsible use of AI in Biology. This includes close collaboration with government bodies, national laboratories, and biosecurity experts. By consulting specialists in biosecurity and bioterrorism early in the development process, OpenAI has shaped its threat models, capability assessments, and usage guidelines. Human trainers with advanced degrees in biology have helped develop safer model responses, and domain experts are now stress-testing these systems through high-fidelity red-teaming exercises.
To limit access to potentially harmful information, OpenAI is training its models to reject or safely respond to dual-use biological requests. While the general public may only receive high-level summaries, vetted experts can access more detailed outputs under controlled conditions. System-wide monitors detect and block unsafe bio-related queries, triggering automated and manual reviews. Moreover, misuse of these models—particularly when related to biological risks—can lead to account suspension and, in serious cases, notification of law enforcement.
A defense-in-depth approach protects the core infrastructure, including access controls, threat detection, and egress monitoring. These strategies are already deployed in models like “o3,” which, while powerful, have not yet crossed the “High” threshold. Lessons from these deployments are continually refining OpenAI’s technical safeguards and human review systems for AI in Biology applications.
Looking Ahead—Collaboration for Global Biodefense
OpenAI recognizes that mitigating biological risks goes beyond securing its own models. The company will host a Biodefense Summit in July, bringing together global stakeholders—from governments to NGOs—to assess dual-use threats and explore how frontier AI can support critical biodefense efforts. This summit aims to foster collaboration around countermeasures, diagnostics, and biosecurity innovation.
Future plans include giving responsible institutions controlled access to advanced models to accelerate beneficial applications in biological sciences. OpenAI also advocates for stronger infrastructure beyond AI, such as improved nucleic acid synthesis screening, early pathogen detection, and support for biosecurity startups. With the intersection of AI and biology evolving rapidly, OpenAI believes a shared commitment to safety and innovation will be essential.
Ultimately, the company emphasizes that AI’s role in Biology must be guided by responsibility, vigilance, and global cooperation to ensure that its benefits are harnessed while potential threats are minimized.
Visit The Lifesciences Magazine to read more.