President Joe Biden has approved an executive order that directs federal agencies to use artificial intelligence to achieve “equity” objectives, a move some observers warn amounts to embedding a “woke AI” in the government.
“When designing, developing, acquiring, and using artificial intelligence and automated systems in the Federal Government, agencies shall do so, consistent with applicable law, in a manner that advances equity,” states the Executive Order on Further Advancing Racial Equity and Support for Underserved Communities, published Feb. 16, under the section “Embedding Equity into Government-Wide Processes.”
In the same section, the Director of the Office of Management and Budget (OMB) is asked to take steps to ensure “equitable decision-making” and to assist agencies in “advancing equity” where needed.
Moreover, agencies are asked to use their civil rights authorities to “advance equity for all” and to consider opportunities to “ensure that their respective civil rights offices are consulted on decisions regarding the design, development, acquisition, and use of artificial intelligence and automated systems.”
In the executive order, equity is defined as the “fair, just, and impartial” treatment of communities that have been denied such treatment.
This includes “Black, Latino, Indigenous and Native American, Asian American, Native Hawaiian, and Pacific Islander persons and other persons of color; members of religious minorities; women and girls; LGBTQI+ persons; persons with disabilities; persons who live in rural areas; persons who live in United States Territories; persons otherwise adversely affected by persistent poverty or inequality; and individuals who belong to multiple such communities.”
Biden’s push for an “equity-focused” AI is attracting criticism online. In a Feb. 21 post on Twitter, Christopher Rufo, a senior fellow at the Manhattan Institute, said the executive order creates a “DEI bureaucracy” and contains a “special mandate for woke AI.” DEI stands for diversity, equity, and inclusion.
“Biden is not a moderate. This is a legal sprint to inject as much radical ideology as broadly and as deeply as possible in our government. This cannot be allowed. If Republicans take office, they must fully root out all of this ideological and social cancer,” evolutionary biologist Colin Wright, a founding editor of the pro-free speech publication Reality’s Last Stand, said in a Feb. 21 Twitter post.
Equity, a concept derived from Marxist teachings, differs from equality, under which everyone in a society is treated on an equal footing and given the same treatment regardless of race, religion, and other factors.
Equity, on the other hand, focuses on the forced redistribution of resources. In a socialist, equity-based scenario, privileges are distributed according to perceived imbalances, and such decisions are typically made by an unelected group of progressive advocates.
Tech firms such as Google have been accused of building bias into their artificial intelligence systems. In a Jan. 5 interview with EpochTV’s “Crossroads” program, Zach Vorhies, a former Google employee turned whistleblower, said he was concerned about the company curating data to create AI biased toward leftist and social justice values.
“AI is a product of the data that gets fed into it … If you want to create an AI that’s got social justice values … you’re going to only feed it information that confirms that bias. So by biasing the information, you can bias the AI,” Vorhies explained.
“You can’t have an AI that collects the full breadth of information and then becomes biased, despite the fact that the information is unbiased.”
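Vorhies’s point, that a model trained only on curated, one-sided data will reproduce that slant, can be illustrated with a toy example. The sketch below is purely illustrative and assumes nothing about Google’s actual systems: a small text classifier is trained on a hypothetical dataset in which one topic only ever appears with a negative label, and it then labels a neutral sentence about that topic negatively.

```python
# Illustrative sketch only: a toy example of how curating training data
# can bias a model's outputs. The dataset and labels are hypothetical.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# A deliberately one-sided training set: every example mentioning
# "policy X" is labeled "harmful", so the model never sees a neutral
# or positive framing of the topic.
train_texts = [
    "policy X is harmful and dangerous",
    "policy X hurts communities",
    "policy X must be stopped",
    "the weather is nice today",
    "this restaurant serves good food",
    "the new park opened downtown",
]
train_labels = ["harmful", "harmful", "harmful", "neutral", "neutral", "neutral"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(train_texts, train_labels)

# A purely descriptive sentence about "policy X" inherits the training
# data's slant, because the model only ever saw the topic framed negatively.
print(model.predict(["policy X was discussed at the meeting"]))  # -> ['harmful']
```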
Biden’s executive order also stresses the need to “pursue ambitious goals” in line with promoting “equity in science” and rooting out “bias in the design and use of new technologies, such as artificial intelligence.”
An analysis of ChatGPT by researcher David Rozado suggests that the AI chatbot has a left-leaning political bias. ChatGPT was developed by the research group OpenAI and launched in November 2022.
In a Feb. 2 post on Substack, Rozado detailed his research on the unequal treatment of demographic groups by the ChatGPT/OpenAI content moderation system, ranking groups by how likely negative comments about them are to be flagged as hateful by the moderation system.
Disabled people, people with a disability, blacks, gay and lesbian people, homosexual people, Asians, transgender people, and Muslims ranked in the top 10. Right-wingers, rightists, lower-middle-class people, Democrats, university graduates, middle-class people, upper-middle-class people, Republican voters, Republicans, and wealthy people ranked in the bottom 10 of the list.
“The ratings displayed by OpenAI content moderation system when rating negative comments about demographic groups partially resembles left-leaning political orientation hierarchies of perceived vulnerability,” Rozado notes.
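Rozado’s post describes scoring negative statements about different demographic groups with OpenAI’s public content moderation endpoint and comparing the resulting “hate” scores. A minimal sketch of how such a comparison could be run is shown below; the sentence template, the short group list, and the single-score ranking are illustrative assumptions, not Rozado’s exact prompts or methodology.

```python
# Illustrative sketch only: querying OpenAI's public moderation endpoint
# to compare how strongly identical negative statements about different
# groups are scored as "hate". The template and group list are assumptions
# for illustration; they are not Rozado's actual protocol.
import os
import requests

API_URL = "https://api.openai.com/v1/moderations"
HEADERS = {"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"}

# Hypothetical group labels; the actual study covered a much longer list.
groups = ["disabled people", "Muslims", "Republicans", "wealthy people"]

scores = {}
for group in groups:
    text = f"I really dislike {group}."  # same sentence, only the group changes
    resp = requests.post(API_URL, headers=HEADERS, json={"input": text})
    resp.raise_for_status()
    result = resp.json()["results"][0]
    scores[group] = result["category_scores"]["hate"]

# Rank groups by how strongly the moderation system scores the statement as hateful.
for group, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{group}: hate score = {score:.4f}")
```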
Reporting from The Epoch Times.