President Joe Biden on Monday issued an executive order on artificial intelligence, outlining how the technology will be managed in the United States going forward. The order seeks to address almost every AI-related issue that has emerged over the past 12 months, including mandates to test advanced AI models to ensure they can't be weaponized, to watermark AI-generated content, and to reduce the risk of workers losing their jobs to AI.
Although the order's scope is limited to the United States (specifically, the federal government), the country currently leads the world in artificial intelligence, so any shifts are likely to shape the global conversation about how to regulate the technology. By requiring that US federal agencies work only with companies that comply with these new standards, Biden is leveraging $694 billion in federal contracts to drive broader compliance across the industry.
The AI Executive Order isn't short. At roughly 20,000 words, it's a quarter of the length of Harry Potter and the Sorcerer's Stone, so we saved you the pain and read it for you. Here are the main takeaways and impressions from the order.
AI will need to be more ethical and properly tested
The order states that government agencies will need to conduct “robust, reliable, repeatable, and standardized evaluations of artificial intelligence systems.” Historically, there has been little consensus on how AI models should be evaluated, leaving professionals to make their own judgment calls or risk using AI in the shadows.
Agencies will also have to ensure that any AI models they purchase have been ethically developed and properly tested, and that AI-generated content is watermarked. This will be an interesting requirement for government agencies to enforce. Right now, the top 10 enterprise AI model companies have embarrassingly little to share about how ethically their AI was created; even Anthropic, the self-styled “ethical AI” company behind Claude, falls short here, and the only “open” thing about OpenAI is its name.
Additionally, the US government will strengthen and enforce consumer protection laws around AI, pursuing those responsible for unintended bias, discrimination, privacy violations, and “other harms caused by AI.”
“The interests of Americans who increasingly use, interact with, or purchase AI-enabled products in their daily lives must be protected,” the order states. “The use of new technologies, such as artificial intelligence, does not relieve organizations of their legal obligations, and hard-won consumer protections are more important than ever in moments of technological change.”
According to Pluralsight author and AI expert Simon Allardyce, companies building AI-driven applications should respond to the executive order by making those applications as transparent as possible.
“If you're a company that's building AI-driven applications that relate to aspects of finance, healthcare, or human resources (for example, a tool that has any potential impact on loans, hiring decisions, or access to medical services), I would be thinking very carefully now about how to explain any AI-based decisions,” Simon said.
“Similarly, we can expect new levels of control and higher expectations regarding data privacy in general.”
There are clear timelines for both the US government and tech companies to meet
Biden's executive order is not just general guidance written at a non-technical level. It gets remarkably specific about generative AI and what needs to be done about it.
“While it is not a law in the traditional sense, we can expect to see a significant amount of legislation quickly follow,” Simon said. “The document is filled with dozens of requests for specific outcomes on detailed timeframes, such as new guidelines and best practices from NIST on generative AI and foundation models within 270 days, and reports and recommendations on how to detect and label synthetic content within 240 days.”
However, Simon said that because much of the order delegates details to agencies, some parts lack specifics. For example, one section states that companies developing foundation models must notify the federal government if something poses a “serious risk to national security, national economic security, or national public health and safety.”
“Suppose you're a company developing a new foundation model,” he said. “Who decides whether it poses a serious risk?”
AI companies must now demonstrate cybersecurity best practices
Within 90 days of the order, the Secretary of Commerce will require all companies developing foundation models to report on an ongoing basis that they have taken appropriate security measures. This includes consistent red-team and cybersecurity testing, the results of which they will have to disclose.
According to Aaron Rosenmund, Pluralsight's senior director of security and generative AI skills, the move was “very wise” but could reduce competition from smaller companies in the AI space.
“I felt like this was an ideal move, especially the focus on how foreign militaries might look to use AI, or vulnerabilities in AI systems, as part of their plans to disrupt the technology,” he said. “For most organizations, there still seems to be a reasonable expectation that they can freely use existing capabilities.”
The US government is very concerned about the use of artificial intelligence in biological weapons
A key part of the order is assessing the threat of AI being used to assist with chemical, biological, radiological, and nuclear (CBRN) threats. However, Biden seems to see AI as part of the solution as well, much as it is used both offensively and defensively in cybersecurity.
The same technology driving strides in AI-assisted drug discovery could be repurposed for chemical and biological weapons, researchers say. In one fictional scenario involving a plague outbreak, a large language model provided advice on likely biological agents to use, taking into account budget constraints and success factors. It suggested “obtaining samples infected with Yersinia pestis and distributing them while identifying variables that could affect the expected number of deaths.”
Since today's AI is the worst it will ever be, this is a very real concern. The executive order will put pressure on companies operating these tools to prove they cannot be used, even inadvertently, to help create these chemical, biological, radiological, and nuclear threats.
There is a strong focus on improving AI skills to lead the world and prevent job losses
For most people, the rise of AI comes with the fear of losing their job to a machine. According to research, 69% of people are worried that AI might take over their tasks. Developers are not immune either: 45% report “AI skill threat,” the feeling that their core skills have become redundant with the advent of AI.
The executive order focuses heavily on encouraging employers to upskill their workforce to maximize the benefits of AI and minimize job losses. (On a side note, it almost sounds as if Biden had been listening to Pluralsight CEO Aaron Skonnard's Navigate keynote last week about human intelligence.) The United States sees upskilling as key to continuing to dominate the AI field, saying it will “invest in AI-related education, training, development, research, and capacity,” as well as attract AI talent from abroad.
The US government will train non-technical employees in AI as well. According to the order, “employees who do not serve in traditional technical roles, such as policy, managerial, procurement, or legal fields” will be “eligible to receive funding for programs and courses focused on AI, machine learning, data science, or other relevant fields.”
“I was pleased to see that it seems to fully embrace the idea that AI represents a transformative shift for the workforce in general,” Simon said. “There's a recognition that AI is no longer just a technical skill for a technical role; it impacts everyone.”
According to Aaron Rosenmund, AI should “allow humanity as a whole to augment the work” people do, not replace it. However, this will only be possible if training becomes readily available to everyone, which is what the order appears to be trying to achieve.
“But for it to play out this way, training on how to work with AI tools needs to become available everywhere,” Aaron said. “This is likely the biggest and most widespread demand for workforce upskilling we will see in our lifetime, as AI capabilities begin to touch all aspects of our lives.”
Conclusion: Riding the wave of future change in AI
So far, most governments have been playing catch-up on AI, and the US government is no exception. Now, if this ambitious order is carried out by the various US federal agencies, we can expect more stringent requirements on AI development, and a bigger push to upskill everyone, regardless of background, in the technology.
These requirements will also likely mean plenty of jobs in cybersecurity roles to properly vet AI-based products and services, a field that was already at the top of Pluralsight's best tech careers of 2023. As of early 2023, there were only enough cybersecurity professionals to fill 68% of open positions in the United States, so earning a cybersecurity certification (perhaps the CISSP, CCSP, or CEH) is a wise career choice.
The executive order is not legislation, and nothing will change overnight. But it's a strong indicator of where the AI winds are blowing, and what the industry landscape will increasingly look like in 2024: not just for tech professionals or AI companies, but for the global workforce more broadly.
Additional Resources:
The Impact of Artificial Intelligence: Cybersecurity Challenges and Opportunities
Generative AI Toolkit: How to prepare your organization for AI adoption
California publishes first report on generative AI risks and potential use cases