
Colorado becomes the first state to move forward in trying to regulate the hidden role of AI in American life



DENVER – Early attempts to regulate artificial intelligence programs that play a hidden role in hiring, housing and medical decisions for millions of Americans are facing pressure from all sides and failing in statehouses across the country.

Only one of seven bills aimed at curbing AI’s tendency to discriminate when making consequential decisions — including who gets hired, approved for a home loan or receives medical care — has passed. Colorado Gov. Jared Polis hesitantly signed the bill into law on Friday.

Colorado’s bill, and those that faltered in Washington, Connecticut and elsewhere, faced battles on many fronts, including between civil rights groups and the tech industry, as well as lawmakers wary of wading into a technology few yet understand and governors worried about being the odd state out and spooking AI startups.

Polis signed the Colorado bill “with reservations,” saying in a statement that he was wary of regulations dampening AI innovation. The law doesn’t take effect for two years and can be amended before then.

“I encourage (lawmakers) to significantly improve this before it goes into effect,” Polis wrote.

The Colorado proposal, along with its six sister bills, is complex, but it will largely require companies to assess the risk of discrimination from their AI and inform customers when AI was used to help make a consequential decision about them.

The bills are separate from the more than 400 AI-related bills debated this year. Most of those target narrower slices of AI, such as the use of deepfakes in elections or to make pornography.

The seven bills are more ambitious, applying to major industries and targeting discrimination, one of the technology’s most perverse and complex problems.

“We actually have no visibility into the algorithms used, whether they work or not, or whether we’re being discriminated against,” said Rumman Chowdhury, a U.S. State Department AI envoy who previously led Twitter’s AI ethics team.

While anti-discrimination laws are already on the books, those who study AI discrimination say it is a different beast, and one the U.S. is already behind in regulating.

“Computers are making biased decisions on a massive scale,” said Christine Webber, a civil rights lawyer who has worked on class-action discrimination lawsuits, including against Boeing and Tyson Foods. Now, Webber is close to final approval of one of the country’s first settlements in a class-action lawsuit over AI discrimination.

“Not, I should say, that the old systems were perfectly free from bias,” Webber said. But “any one person could only look at so many resumes in a day. So you could only make so many biased decisions in a day, and the computer can do it rapidly across large numbers of people.”

When you apply for a job, an apartment or a home loan, there’s a good chance that AI is evaluating your application: sorting it into a queue, assigning it a score or screening it out. An estimated 83% of employers use algorithms to help with hiring, according to the Equal Employment Opportunity Commission.

The AI itself doesn’t know what to look for in a job application, so it’s taught based on past resumes. But historical data used to train algorithms can smuggle in bias.

Amazon, for example, worked on a hiring algorithm trained on old resumes: mostly from male candidates. When evaluating new applicants, the algorithm downgraded resumes with the word “women’s” or that listed women’s colleges, because they were not represented in the historical data – the resumes – it had learned from. The project was scuttled.
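The mechanism is simple enough to sketch. Below is a minimal, hypothetical illustration in Python (the data, keywords and scoring rule are all invented; this is not Amazon’s or anyone’s actual system) of how a screening model trained on a skewed hiring history ends up penalizing resumes that mention “women’s”:

```python
# A hypothetical sketch of how bias gets "smuggled" into a resume
# screener via its training data. Everything here is invented for
# illustration.

from collections import Counter

# Toy hiring history: past resumes (as keyword sets) and whether the
# candidate was hired. The history is skewed toward male candidates.
history = [
    ({"engineering", "chess club"}, True),
    ({"engineering", "football"}, True),
    ({"engineering", "women's chess club"}, False),
    ({"engineering", "women's college"}, False),
    ({"sales", "football"}, True),
    ({"sales", "women's college"}, False),
]

def keyword_weights(history):
    """Weight each keyword by how often it appeared on hired resumes
    minus how often it appeared on rejected ones."""
    hired, rejected = Counter(), Counter()
    for keywords, was_hired in history:
        (hired if was_hired else rejected).update(keywords)
    return {kw: hired[kw] - rejected[kw] for kw in hired.keys() | rejected.keys()}

weights = keyword_weights(history)

def score(resume):
    """Rank a new resume using weights learned from the skewed history."""
    return sum(weights.get(kw, 0) for kw in resume)

# Two similar applicants: the model downgrades the one whose resume
# mentions "women's," purely because of the skewed training data.
print(score({"engineering", "chess club"}))          # 1
print(score({"engineering", "women's chess club"}))  # -1
```

Real systems are far more complex, but the failure mode is the same: the model faithfully reproduces whatever patterns, fair or not, its history contains.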

Webber’s class-action lawsuit alleges that an AI system scoring rental applications disproportionately assigned lower scores to Black or Hispanic applicants. A separate study found that an AI system built to assess medical needs passed over Black patients for extra care.

Studies and lawsuits have provided a glimpse behind the scenes of AI systems, but most algorithms remain veiled. Most Americans are unaware that these tools are being used, a Pew Research survey shows. Companies are generally not required to explicitly disclose that an AI was used.

“Just pulling back the curtain so we can see who is actually doing the assessment and what tool is being used is a huge first step,” Webber said. “Existing laws don’t work if we can’t get at least some basic information.”

That’s what the Colorado bill, along with another surviving bill in California, is trying to change. The bills, including a flagship proposal in Connecticut that died amid opposition from the governor, are largely similar.

The Colorado bill will require companies that use AI to help make consequential decisions for Americans to annually assess their AI for potential bias; implement an oversight program within the company; notify the state attorney general if discrimination is found; and inform customers when AI was used to help make a decision about them, including the option to appeal.

Labor unions and academics fear that relying on companies to oversee themselves means it will be hard to proactively address discrimination in an AI system before it has done harm. Companies fear that mandatory transparency could reveal trade secrets, including in potential litigation, in this hyper-competitive new field.

AI companies also lobbied for, and generally received, a provision that allows only the attorney general, not citizens, to file lawsuits under the new law. Enforcement details were left to the attorney general.

While large AI companies have more or less signed on to these proposals, a group of smaller Colorado-based AI companies said the requirements might be manageable for giant AI firms, but not for budding startups.

“We are in a new age of primordial soup,” said Logan Cerkovnik, founder of Thumper.ai, referring to the field of AI. “Having overly restrictive legislation that forces us to define and restrict the use of technology while it is being formed will be detrimental to innovation.”

The group, along with many AI companies, agreed that it is critical to confront what is formally called “algorithmic discrimination.” But they said the bill as written falls short of that goal. Instead, they proposed beefing up existing anti-discrimination laws.

Chowdhury worries that lawsuits are too expensive and time-consuming to be an effective enforcement tool, and that laws should go beyond even what Colorado is proposing. Instead, she and other academics have proposed an accredited, independent organization that can explicitly test AI algorithms for potential bias.
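One concrete example of what such an auditor could check is the “four-fifths rule” from longstanding EEOC hiring guidance, which flags a selection process when one group’s selection rate falls below 80% of another group’s. The sketch below is hypothetical (the outcome data is invented), but the test itself is standard:

```python
# A hypothetical sketch of one test an independent auditor could run:
# the "four-fifths rule" from EEOC hiring guidance. The outcome data
# below is invented for illustration.

def selection_rate(outcomes):
    """Fraction of applicants in a group selected by the algorithm."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower group's selection rate to the higher one."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# 1 = selected by the algorithm, 0 = screened out (hypothetical data)
group_a = [1, 1, 1, 0, 1, 1, 0, 1]  # selection rate: 0.75
group_b = [1, 0, 0, 1, 0, 0, 1, 0]  # selection rate: 0.375

ratio = disparate_impact_ratio(group_a, group_b)
print(f"disparate impact ratio: {ratio:.2f}")  # 0.50
if ratio < 0.8:
    print("below the four-fifths threshold: flag for review")
```

A ratio like this is only a screening heuristic; a real audit would also examine the model’s inputs and error rates across groups.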

“You can understand and deal with a single person who is discriminatory or biased,” Chowdhury said. “What do we do when this is embedded throughout the institution?”

___

Bedayn is a corps member for the Associated Press/Report for America Statehouse News Initiative. Report for America is a nonprofit national service program that places journalists in local newsrooms to cover undercovered issues.


