White House Announces Three AI Policies to Safeguard Public Welfare

The Biden administration unveiled three new policies on Thursday aimed at shaping the federal government’s use of artificial intelligence (AI), positioning the standards as a potential model for global action amid the technology’s rapid evolution.

These policies, building upon an executive order signed by Biden in October, address mounting concerns regarding the impact of AI on various fronts, including the U.S. workforce, privacy, national security, and the potential for discrimination in decision-making processes.

According to the White House, the Office of Management and Budget will mandate that federal agencies ensure the use of AI does not jeopardize the “rights and safety” of Americans. To enhance transparency, agencies will be required to publicly list the AI systems they employ, along with an assessment of associated risks and risk management strategies.

Additionally, the White House directive calls for all federal agencies to appoint a chief AI officer with expertise in the field to oversee AI-related endeavors within their respective agencies.

Vice President Kamala Harris highlighted the collaborative effort behind the policies, which drew input from stakeholders across the public and private sectors, including computer scientists, civil rights leaders, legal scholars, and business figures.

Harris emphasized the administration’s intent for these domestic policies to set a precedent for global AI governance, stressing the ethical imperative of ensuring AI adoption aligns with public welfare while maximizing its benefits for all.

The federal government has disclosed over 700 instances of current and planned AI usage across agencies, with applications ranging from documenting suspected war crimes to diagnosing medical conditions and combating illicit activities.

To address safety concerns, agencies must implement measures by December to assess, test, and monitor AI’s impacts on the public, mitigate risks of algorithmic discrimination, and provide transparency regarding AI usage.

Harris illustrated the need for rigorous scrutiny, citing an example in which AI employed in Department of Veterans Affairs hospitals must demonstrate unbiased diagnostic outcomes.

Biden’s earlier executive order, invoking the Defense Production Act, mandates that companies developing advanced AI platforms notify the government and share the results of safety tests. These assessments, conducted through “red-teaming” risk evaluations, are intended to establish safety standards before public release, with oversight by the National Institute of Standards and Technology.