By JESSE BEDAYN, SUSAN HAIGH, TRÂN NGUYỄN and BECKY BOHRER (Associated Press/Report for America)
DENVER (AP) — Artificial intelligence is being used to determine which Americans get the job interview, the apartment, and even medical care, but the first major proposals to rein in bias in AI decision making are facing headwinds from every direction.
Lawmakers working on these bills, in states including Colorado, Connecticut and Texas, are coming together Thursday to argue the case for their proposals as civil rights-oriented groups and the industry play tug-of-war with core components of the legislation.
Organizations including labor unions and consumer advocacy groups want more transparency from companies and greater legal recourse for citizens to sue over AI discrimination. The industry is offering tentative support but resisting those accountability measures.
The bipartisan lawmakers caught in the middle — including those from Alaska, Georgia and Virginia — have been working on AI legislation together in the face of federal inaction. The goal of the press conference is to highlight their work across states and stakeholders, reinforcing the importance of collaboration and compromise in this first step in regulation.
The lawmakers include Connecticut’s Democratic state Sen. James Maroney, Colorado’s Democratic Senate Majority Leader Robert Rodriguez and Alaska’s Republican Sen. Shelley Hughes.
“At this point, we don’t have confidence in the federal government to pass anything quickly. And we do see there is a need for regulation,” said Maroney. “It’s important that industry advocates, government and academia work together to get the best possible regulations and legislation.”
The lawmakers argue the bills are a first step that can be built on going forward.
While over 400 AI-related bills are being debated this year in statehouses nationwide, most target one industry or just a piece of the technology — such as deepfakes used in elections or to make pornographic images.
The biggest bills this team of lawmakers has put forward offer a broad framework for oversight, particularly around one of the technology’s most perverse dilemmas: AI discrimination. Examples include an AI that failed to accurately assess Black medical patients and another that downgraded women’s resumes as it filtered job applications.
Still, up to 83% of employers use algorithms to help in hiring, according to estimates from the Equal Employment Opportunity Commission.
If nothing is done, there will almost always be bias in these AI systems, explained Suresh Venkatasubramanian, a Brown University computer and data science professor who’s teaching a class on mitigating bias in the design of these algorithms.
“You have to do something explicit to not be biased in the first place,” he said.
These proposals, mainly in Colorado and Connecticut, are complex, but the core thrust is that companies would be required to perform “impact assessments” for certain AI systems. Those reports would include descriptions of how AI figures into a decision, the data collected and an analysis of the risks of discrimination, along with an explanation of the company’s safeguards.
The central dispute is over who can see those reports. Greater access to information about AI systems, such as the impact assessments, means more accountability and safety for the public. However, companies are concerned that it a