State legislatures are taking the lead in regulating artificial intelligence after a quarter-century in which Congress has failed to pass substantive laws governing technology.
The specter of artificial intelligence and its potentially widespread impact on every aspect of life in the United States has lawmakers, shocked by their failure to police social media and protect consumer data, scrambling to act.
“Consensus has not yet been reached, but Congress can look to state legislatures — often referred to as laboratories of democracy — for inspiration regarding how to address the opportunities and challenges posed by artificial intelligence,” the Brennan Center for Justice, a nonpartisan law and policy institute, said in a statement.
More than twenty states and territories have introduced bills, and a number have already enacted legislation. At least 12 states — Alabama, California, Colorado, Connecticut, Illinois, Louisiana, New Jersey, New York, North Dakota, Texas, Vermont, and Washington — have enacted laws directing government or government-affiliated entities to study artificial intelligence so that policymakers can better understand its potential consequences.
Meanwhile, Florida and New Hampshire are among several states considering bills that would govern the use of artificial intelligence in political advertising, particularly “deepfake” technology that digitally manipulates a person’s likeness. Proposed legislation in South Carolina would limit the use of this technology within 90 days before an election and would require disclaimers.
“There is a trend of regulators wanting to get ahead of the technology. In a way, the rush to regulate AI is very similar to what we’ve seen before: the internet in the 1990s, smartphones and the Internet of Things in the early 2000s,” Maneesha Mithal, a founding member of the AI group at Silicon Valley law firm Wilson Sonsini and a former FTC staffer, said in an interview.
“Lawmakers are trying to move forward on an issue they don’t understand,” Appian Corp. (APPN) CEO Matt Calkins said in an interview. “But jumping ahead could lead to the wrong rules and hamper trade, ceding too much influence to big tech companies and not … [protecting] property rights. We work to advance the individual rights of creators.”
But consumers say they want some kind of legislative action. Pew Research Center surveys show that a majority of Americans are increasingly cautious about the growing role of artificial intelligence in their lives, with 52% saying they are more concerned than excited, compared to 10% who say they are more excited than concerned.
“The first dominoes to fall”
Government use, algorithmic discrimination and fake election ads are among the top AI priorities for state lawmakers heading into the 2024 legislative season, James Maroney, a Democratic state senator from Connecticut, told attendees at the inaugural AI governance conference organized by the International Association of Privacy Professionals in Boston last year.
“California’s new proposal to regulate automated decision-making technology and the EU’s agreement on the upcoming AI law framework are just the first dominoes to fall around AI regulation,” Gal Ringel, CEO of Mine, a global data privacy management company, said in an email.
The European Union is several steps ahead of the United States, and has already presented a potential model for federal regulation through the Artificial Intelligence Act, which is expected to be passed this year and take effect in 2026.
“We want national legislation, especially since it is compatible with international law,” said Peter Guagenti, president of AI startup Tabnine, which has more than a million customers globally. “But if it takes the states to get the job done, so be it. We need clear guidelines on what constitutes copyright protection.”
Thirty states have passed more than 50 laws over the past five years to address AI in some capacity. In California, Colorado, Connecticut, Virginia, and Utah, those provisions were added to existing consumer privacy laws.
Last year, Montana, Indiana, Oregon, Tennessee, and Texas passed consumer privacy laws that include provisions regulating artificial intelligence. The laws typically give consumers the right to opt out of automated profiling and require data protection assessments when automated decision-making poses a heightened risk of harm.
New York City’s Local Law 144, the first of its kind, took effect on July 5, 2023, and regulates the use of artificial intelligence in hiring to reduce bias. California, Colorado, Connecticut, Massachusetts, New Jersey, Rhode Island, and Washington, D.C., are also implementing laws governing artificial intelligence in employment this year.
“You can't let AI make the final decision. It can't make the critical decisions,” Calkins said.
“You have to keep humans in the loop” when making the final hiring decision, agrees Cliff Yurkiewicz, vice president of global strategy at Phenom, an HR technology company. The fear is that machines, not humans, will make hiring decisions based solely on data, which can lead to discrimination.
A “complicated patchwork” of laws
Meanwhile, at the federal level, inaction remains the norm.
The leading national privacy proposal, the American Data Privacy and Protection Act, sets out rules for assessing AI risks that directly affect companies that develop and use the technology. But the bill stalled during the last session of Congress and is now, like most tech legislation before it, in limbo.
President Joe Biden's executive order on artificial intelligence provided a blueprint for the responsible use of AI that extends beyond government agencies. The executive order requires the technology industry to develop safety and security standards, introduces new consumer protections, and reduces barriers to immigration for highly skilled workers.
“Based on President Biden’s executive order on AI, decision makers in government agencies will evaluate and develop more concrete regulations to reduce the risks of AI and harness its benefits,” predicts Hitesh Sheth, CEO of Vectra AI, a cybersecurity company.
Read more: Biden's executive order on artificial intelligence could reshape the technology's impact on the economy and national security
However, the patchwork of state laws — in the absence of a uniform federal law — makes for an unwieldy solution, tech companies and their customers complain. They say the proliferation of differing regulations will cause compliance headaches.
“Without [a federal law], companies are likely to face a complex patchwork of regulations, leading to increased risks of non-compliance, especially for those operating across state lines,” Volker Smid, CEO of software company Acrolinx, said in an email.
“There needs to be some national legislation” around data protection, adds Dan Schiappa, chief product officer at cybersecurity firm Arctic Wolf Networks. “The Internet doesn't work from one state to another.”