The very long read we were expecting from the White House setting guardrails around AI was released this past week as a 111-page Executive Order on the “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” President Joe Biden and his administration say the goal is to establish a framework that sets “new standards for AI safety and security, protects Americans’ privacy, advances equity and civil rights, stands up for consumers and workers, promotes innovation and competition, and advances American leadership around the world.”
If you’re not up to scanning the entire EO, here’s the fact sheet summarizing its main points. But here are five of the top takeaways:
Testing safety and security before AI tools are released: There’s much debate about whether OpenAI should have done more prep work before releasing its groundbreaking and potentially paradigm-shifting ChatGPT to the world a year ago, given the opportunities and risks posed by the generative AI chatbot. So now AI developers will be required to “share their safety test results” and other critical information with the US government.
“Companies developing any foundation model that poses a serious risk to national security, national economic security, or national public health and safety must notify the federal government when training the model and must share the results of all red-team safety tests.” Red-team testing refers to having a dedicated group deliberately attack the AI system to find security vulnerabilities.
Expanding on the testing requirement, the National Institute of Standards and Technology is tasked with creating “rigorous standards for extensive red-team testing to ensure safety before public release.” NIST will also help design tools and tests to ensure AI systems are safe, secure and trustworthy.
Protecting against potentially harmful AI-engineered biological materials: Agencies that fund “life-science projects” will be…
Read the full article here