On October 30, 2023, President Biden released an Executive Order (EO) meant to help America lead in managing the risks of Artificial Intelligence (AI). The EO is not exclusive to generative AI; it applies to AI as defined in 15 U.S.C. 9401(3): any machine-based system that can make predictions, recommendations, or decisions influencing real or virtual environments. The EO establishes principles around AI security and safety, with the goal of protecting the privacy, equity, and civil rights of all Americans.
Previous research by Community Solutions on the use of AI focused on its application in behavioral health and public benefits, notably state-level applications and approaches to AI technology in health and human services. This EO is significant, however, because the policy implications that follow from its issuance will set the foundation for federal regulation of AI technology moving forward.
The goal: secure, safe, and ethical development of AI systems
The EO seeks to establish a comprehensive strategy for robust and responsible innovation. It also expands upon previous efforts by the Biden Administration to promote the secure, safe, and ethical development of AI systems. Through the implementation of the EO, actions supporting safety, privacy, civil rights, consumers, students, patients, workers, leadership, and innovation are centered in the national conversation around how AI systems develop.
With these efforts in place, Congress can begin to pass legislation to help America take the lead in responsible innovation and development. The National Institute of Standards and Technology (NIST) will be instrumental in developing standards to ensure that AI systems are trustworthy, safe, and secure. NIST will also play a key role in administering directives issued under the EO and will lead the development of key AI guidelines.
An ambitious agenda for AI oversight policy
Response to the EO has generally praised the administration for taking first steps toward comprehensive policy on AI, but some experts felt that the EO fell short. James Lewis, senior vice president at the Center for Strategic and International Studies, stated that the AI executive order does a good job of addressing the potential risks of AI, but that the U.S. will need support from international partners and the private sector to secure buy-in. Helen Toner, Director of Strategy and Foundational Research Grants at the Center for Security and Emerging Technology at Georgetown University, gave insight into the matter when she was interviewed by the Miami Herald. Toner stated, “The Administration has laid out a very ambitious agenda, but figuring out how to implement it is left to a swath of different federal agencies.”
Opportunity for innovation can’t be hindered by restrictive policy.
Opportunity for innovation can’t be hindered by restrictive policy, but the federal government should also protect citizens from AI’s harms and risks. Striking this balance develops necessary protections for the rights of Americans while ensuring that AI development happens in a way that fosters innovation, collaboration, and competition.
As work on AI continues, international collaboration and federal action will shape best practices for companies, ensuring accountability and protection for citizens. For now, this Executive Order is a good starting point for the ongoing efforts to regulate AI.
Read the full Executive Order here.