Over the last two years, the AI industry has boomed, creating massive regulatory confusion. The race to build ever more capable systems has stoked fears about this emerging technology, with public figures such as Elon Musk describing AI as a "risk." Concerns like these, along with an open letter signed by tens of thousands, sparked this most recent executive order.
Philip Blair, an Amsterdam-based AI consultant, had this to say about the risks of AI: “AI systems will automatically reflect the biases found in our world. For example, if you ask a language model to complete the sentence ‘The doctor put on ___ coat’, you'll find that it is much more likely to fill the blank with ‘his’ than ‘her’.
These are not just toy examples; imbalanced data can cause issues, such as facial recognition systems used by law enforcement to be more likely to misidentify people of color, or employment application screening systems to disadvantage women or minority groups. In the absence of appropriate regulation, there is often nothing other than the court of public opinion to motivate system developers to explicitly account for these sorts of biases.”
"The Congress is deeply polarized and even dysfunctional to the extent that it is very unlikely to produce any meaningful AI legislation in the near future." - Anu Bradford, professor at Columbia University.
While Biden's Executive Order on AI lays the groundwork, the Biden Administration has nevertheless asked Congress to pass data privacy legislation. As it stands, the administration has set out an extensive set of AI guidelines designed to require greater transparency from AI companies.
Let's explore what you should know about the president's Executive Order.
In a nutshell, the administration's Executive Order requires greater transparency from AI companies about how their models work. It also sets out a range of standards, among which the labeling of AI-generated content stands out.
This builds on the voluntary AI commitments pledge the Biden Administration secured from leading companies in August of this year.
The White House has tasked the National Institute of Standards and Technology (NIST), part of the US Department of Commerce, with creating guidelines for labeling AI content. AI technology companies will then use these guidelines to develop and deploy watermarking and labeling tools.
According to a White House fact sheet: “Federal agencies will use these tools to make it easy for Americans to know that the communications they receive from their government are authentic—and set an example for the private sector and governments around the world.”
The idea behind this is that by clearly labeling the origin of content, be it audio, visual, or text, users will know what is AI-generated and what isn't. Under the voluntary pledge in August, trend-setting AI companies like OpenAI and Google committed to building these tools, a response to issues such as disinformation and deepfakes.
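To make the labeling idea concrete, here is a minimal sketch of content-provenance labeling: a provenance record is attached to a piece of content and signed, so a recipient can check both who produced it and whether it has been altered. This is a toy illustration only; the record format, the shared signing key, and the function names are assumptions for the sketch, and the real C2PA standard uses certificate-based signatures and a standardized binary manifest rather than a shared key.

```python
import hashlib
import hmac
import json

# Hypothetical shared signing key for this sketch only; real provenance
# systems such as C2PA use certificate-based (public-key) signatures.
SECRET_KEY = b"demo-signing-key"

def label_content(content: bytes, origin: str) -> dict:
    """Bundle content with a provenance record and an HMAC signature."""
    record = {"origin": origin,
              "sha256": hashlib.sha256(content).hexdigest()}
    payload = json.dumps(record, sort_keys=True).encode()
    signature = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return {"record": record, "signature": signature}

def verify_label(content: bytes, labeled: dict) -> bool:
    """Check the signature and that the content hash still matches."""
    payload = json.dumps(labeled["record"], sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, labeled["signature"])
            and labeled["record"]["sha256"]
            == hashlib.sha256(content).hexdigest())

image = b"...ai-generated image bytes..."
label = label_content(image, origin="AI-generated")
assert verify_label(image, label)             # authentic, untampered
assert not verify_label(image + b"x", label)  # edited content fails the check
```

The design point this captures is why labeling helps against deepfakes: once content is signed at the source, any later edit breaks the verification, so downstream viewers can distinguish labeled originals from tampered or unlabeled copies.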
The Executive Order, however, fails to require government agencies to use these technologies. Nevertheless, the White House will be pressing forward with developing these technologies with the Coalition for Content Provenance and Authenticity (the C2PA initiative).
And while the coalition doesn't have an official relationship with the White House, Mounir Ibrahim, co-chair of its governmental affairs team, stated: "C2PA has been in regular contact with various offices at the NSC [National Security Council] and White House for some time."
Another significant rule in the Executive Order is that AI companies must now test their products and share the results with government officials before any product updates are made available to consumers. NIST is responsible for overseeing this as well.
This "red team testing" will ensure that proper testing is conducted by AI companies that are creating “any foundation model that poses a serious risk to national security, national economic security or national public health and safety.”
Should these safety tests produce results that are concerning and pose risks, the federal government will either:
a. Request that the company rectify these problems through product improvements
b. Request that the intended changes or initiative be abandoned
And on what authority can the White House make these requests? It comes from the Defense Production Act. Enacted in 1950 during the Korean War, the act gives the White House a broad role in overseeing industries that relate to national security, which in this case includes artificial intelligence.
The White House stated that: "These measures will ensure AI systems are safe, secure, and trustworthy before companies make them public."
The executive order, while limited to the United States, serves as a crucial precedent with global implications. The realm of AI tools transcends national borders, and a failure to effectively manage it could yield dire consequences worldwide.
Earlier this year, prominent figures in the AI industry, including OpenAI's CEO Sam Altman, collectively voiced their concerns, underlining the gravity of AI-related risks. Their message: safeguarding against AI-driven existential threats should be elevated to a global priority alongside challenges like pandemics and nuclear warfare.
James Lewis, an expert at the Center for Strategic and International Studies (CSIS), said, “Seeing this executive order as a step by the US to address what people generally perceive as the risks of AI — the executive order is balanced, and it talks about the opportunities too. But this is a major, major step for the US, it's our entry into the great AI governance sweepstakes.”
However, this acknowledgment comes with the understanding that the US is not alone in this pursuit. The European Union has already taken strides by introducing its AI Act, which aims to regulate AI comprehensively. Moreover, a recent study conducted by Stanford University reveals that in 2022, 37 AI-related laws were passed in 127 different countries, reflecting the international momentum in the field of AI regulation.
The Biden-Harris executive order goes further than previous attempts by the US government to regulate artificial intelligence. Still, the directive emphasizes creating best practices rather than enforcing them.
So, as of right now, many requests have been made, but whether they will be enforced remains to be seen.
Joshua J. Brouard brings a rich and varied background to his writing endeavors. With a bachelor of commerce degree and a major in law, he possesses an affinity for tackling business-related challenges. His first writing position at a startup proved instrumental in cultivating his robust business acumen, given his integral role in steering the company's expansion. Complementing this is his extensive track record of producing content across diverse domains for various digital marketing agencies.