Laws governing artificial intelligence increasingly vary depending on where you are in the United States, creating confusion for companies racing to capitalize on the rise of AI.
This year, the Utah Legislature is considering a bill that would require certain companies to disclose whether their products interact with consumers without human intervention.
In Connecticut, the state legislature is considering a bill that would require greater transparency into the inner workings of AI systems deemed “high risk.”
These states are among 30 states (and the District of Columbia) that have proposed or adopted new laws that directly or indirectly impose constraints on how AI systems are designed and used.
The legislation covers everything from child protection and data transparency to reducing bias and protecting consumers from AI decisions in health care, housing, and employment.
“This is really just a disruption for business,” Goli Mahdavi, an attorney at Bryan Cave Leighton Paisner, said of the still-evolving and newly enacted legislation. “It’s just a lot of uncertainty.”
The patchwork of laws across the country stems from a lack of action in Washington to regulate the rapidly evolving technology directly at the federal level, largely because not all US lawmakers agree that new laws are needed to rein in AI.
The situation is different in other parts of the world. The European Union passed a comprehensive AI law this year called the AI Act. And China has enacted a more politically focused AI law that targets AI-generated news outlets, deepfakes, chatbots, and datasets.
But state laws being debated or enacted in the U.S. reflect priorities set by the federal government, Mahdavi said.
For example, President Biden issued an executive order last October directing AI developers and users to apply AI “responsibly.” And in January, the government added a requirement for developers to disclose safety test results to the government.
Although the state laws share some common themes, their nuances can make compliance difficult for businesses.
California, Colorado, Delaware, Texas, Indiana, Montana, New Hampshire, Virginia, and Connecticut have adopted consumer protection laws that give consumers the right to opt out of profiling and automated decision-making technologies used to produce legally significant effects. These laws broadly prohibit companies from applying automated decision-making technology to consumers without their consent.
For example, companies cannot profile consumers based on their work performance, health, location, financial status, or other factors unless the consumers explicitly consent.
Colorado’s law goes further, prohibiting AI from generating discriminatory insurance rates.
However, the definition of “automated decision-making,” a term that appears in most of these laws, varies by state.
In some cases, decisions about employment or financial services are not considered automated as long as they are made with some level of human involvement.
New Jersey and Tennessee have not yet enacted opt-out provisions. However, both states require companies that use AI for profiling and automated decision-making to perform risk assessments to ensure that consumers’ personal data is protected.
In Illinois, a 2022 law restricts employers from using AI to evaluate video interviews of job applicants: employers need a candidate’s consent before using AI to assess their video images.
In Georgia, a narrow law specific to the use of AI by optometrists went into effect in 2023. It stipulates that AI devices and equipment used to analyze eye images and other eye assessment data cannot serve as the sole basis for an initial prescription or a first-time prescription renewal.
New York became the first state to require employers to conduct bias audits of AI-powered hiring decision tools; the law took effect in July 2023.
Several states are following this trend more broadly, requiring organizations and individuals using AI to conduct data risk assessments before using the technology to process consumer data.
Scott Babwah Brennen, director of online expression policy at the UNC-Chapel Hill Center on Technology Policy, said a “historic level” of one-party dominance in state legislatures is helping many states pass these laws.
Last year, state legislatures in about 40 states were controlled by a single party, more than double the 17 in 1991.