Improved governance and pre-release safety evaluations needed for biological AI models

Concerns over the biosecurity risks posed by artificial intelligence (AI) models in biology continue to grow. In a Policy Forum, Doni Bloomfield and colleagues argue for improved governance and pre-release safety evaluations of new models to mitigate potential threats.

"We propose that national governments, including the United States, pass legislation and set mandatory rules that will prevent advanced biological models from substantially contributing to large-scale dangers, such as the creation of novel or enhanced pathogens capable of causing major epidemics or even pandemics," write the authors.

Advances in biological AI models hold great promise across many applications, from speeding up drug and vaccine design to improving crop yields and resilience. Alongside these benefits, however, biological AI models also pose serious risks. Because of their general-purpose nature, the same model that designs a harmless viral vector for gene therapy could be used to create a more dangerous, novel viral pathogen that evades vaccines. Although developers of these systems have made voluntary commitments to evaluate their dual-use risks, Bloomfield et al. argue that such voluntary measures are insufficient to ensure safety on their own.

According to the authors, governance of these risks is notably lacking; in particular, there are no standardized, mandatory safety evaluations for advanced biological AI models. Although some policy measures exist, such as the White House Executive Order on AI and the Bletchley Declaration signed at the UK AI Safety Summit in 2023, there is no unified approach to evaluating the safety of these powerful tools before they are released. Bloomfield et al. therefore call for policies focused on reducing the biosecurity risks of advanced biological models while preserving scientific freedom to explore their potential benefits. Policies should require pre-release evaluations only for advanced AI models posing high risks.

These evaluations can draw on existing frameworks for dual-use research and should include proxy tests that avoid directly synthesizing dangerous pathogens. Oversight should also address the risks of releasing a model's weights, which could enable third parties to modify a model after its release. Finally, policies must ensure responsible data sharing and restrict access to AI systems with unresolved risks.

Journal reference:

Bloomfield, D., et al. (2024). AI and biosecurity: The need for governance. Science. https://doi.org/10.1126/science.adq1977
