A new bill introduced in the Senate seeks to track AI security issues by mandating the creation of a database recording all breaches of AI systems.
The Secure Artificial Intelligence Act, introduced by Senators Mark Warner (D-VA) and Thom Tillis (R-NC), would establish an Artificial Intelligence Security Center at the National Security Agency. This center would lead research on what the bill calls “counter-AI,” or techniques for learning how to manipulate AI systems, and would also develop guidance on preventing counter-AI measures.
The bill would also require the National Institute of Standards and Technology (NIST) and the Cybersecurity and Infrastructure Security Agency (CISA) to create a database of AI breaches, including “near misses.”
The bill proposed by Warner and Tillis focuses on counter-AI techniques and classifies them as data poisoning, evasion attacks, privacy-based attacks, and abuse attacks. Data poisoning refers to a method in which corrupted content is inserted into the data scraped by an AI model, degrading the model’s output. It has emerged as a popular way to prevent AI image generators from copying artwork on the internet. Evasion attacks alter the data studied by AI models to the point that the model becomes confused.
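To make the data-poisoning category concrete, here is a minimal, hypothetical sketch (not drawn from the bill): an attacker who controls part of a training set injects mislabeled examples, pulling a toy nearest-centroid classifier’s decision boundary so that a previously “malicious” input is classified as “benign.” All names and values are illustrative.

```python
# Toy label-flipping data poisoning against a 1-D nearest-centroid classifier.
# Purely illustrative; real attacks target far larger models and datasets.

def centroid(points):
    return sum(points) / len(points)

def train(data):
    # data: list of (feature, label) pairs; compute one centroid per class
    classes = {}
    for x, y in data:
        classes.setdefault(y, []).append(x)
    return {label: centroid(xs) for label, xs in classes.items()}

def predict(model, x):
    # assign x to the class whose centroid is nearest
    return min(model, key=lambda label: abs(model[label] - x))

clean = [(0.0, "benign"), (1.0, "benign"), (9.0, "malicious"), (10.0, "malicious")]
model = train(clean)
print(predict(model, 8.0))  # "malicious" — correctly flagged

# The attacker injects mislabeled points, dragging the "benign"
# centroid toward the malicious region of feature space.
poisoned = clean + [(10.0, "benign")] * 4
model = train(poisoned)
print(predict(model, 8.0))  # "benign" — the poisoned model now misses it
```

The same basic idea, applied to images, underlies tools that subtly perturb artwork so that models trained on scraped copies learn corrupted associations.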
AI safety was one of the key items in the Biden administration’s AI executive order, which directed NIST to establish “red team” guidelines and required AI developers to submit safety reports. Red teaming is when developers intentionally try to get AI models to respond to prompts they shouldn’t.
Ideally, developers of powerful AI models test platform security and run the models through extensive red teaming before releasing them to the public. Some companies, such as Microsoft, have created tools to make it easier to add security safeguards to AI projects.
The Secure Artificial Intelligence Act must pass through a committee before it can be taken up by the full Senate.