Algorithms help people see and correct their biases, study shows

Algorithms are a staple of modern life. People rely on algorithmic recommendations to scroll through deep catalogs and find the best movies, routes, information, products, people and investments. As people train algorithms with their decisions – for example, algorithms that make recommendations on e-commerce and social media sites – algorithms learn and encode human biases.

Algorithmic recommendations display bias toward popular choices and toward information that evokes outrage, such as partisan news. At a societal level, algorithmic biases perpetuate and amplify structural racial bias in the judicial system, gender bias in who companies hire, and wealth inequality in urban development.

Algorithmic bias can also be used to reduce human bias. Algorithms can reveal structural biases in organizations. In a paper published in the Proceedings of the National Academy of Sciences, my colleagues and I found that algorithmic bias can help people better recognize and correct biases in themselves.

Prejudice in the mirror

In nine experiments, Begum Celiktutan, Romain Cadario and I had study participants rate Uber drivers or Airbnb listings on the drivers’ driving skill, the listings’ trustworthiness, or the likelihood that they would rent the listing. We gave participants relevant details, such as the number of trips a driver had taken, a description of the property, or a star rating. We also included irrelevant, biasing information: a photograph revealing a driver’s age, gender and attractiveness, or a name suggesting that a listing’s host was white or Black.

After participants made their ratings, we showed them one of two ratings summaries: one displaying their own ratings, or one displaying the ratings of an algorithm trained on their ratings. We told participants about the biasing feature that might have influenced those ratings; for example, that Airbnb guests are less likely to rent from hosts with distinctly African American names. We then asked them to judge how much influence the bias had on the ratings.

Regardless of whether participants assessed the biasing influence of race, age, gender or attractiveness, they saw more bias in the ratings they thought were made by algorithms than in their own ratings. This algorithmic mirror effect held whether participants judged the ratings of real algorithms or we showed participants their own ratings and deceptively told them that an algorithm had made them.

Participants saw more bias in the algorithms’ decisions than in their own decisions, even when we offered them a cash bonus if their bias judgments matched those made by a different participant who saw the same decisions. The algorithmic mirror effect held even when participants were in the marginalized category – for example, identifying as a woman or as Black.

Research participants were as likely to see bias in the decisions of algorithms trained on their own decisions as they were to see bias in the decisions of other people. Additionally, participants were more likely to see the influence of racial bias in the algorithms’ decisions than in their own decisions, but they were equally likely to see the influence of defensible features, such as star ratings, in the algorithms’ decisions and in their own.

Prejudice blind spot

People see more bias in algorithms because algorithms remove people’s bias blind spots. It’s easier to see bias in other people’s decisions than in your own because you use different evidence to evaluate them.

When examining your own decisions for bias, you search for evidence of conscious bias – whether you thought about race, gender, age, status or other unwarranted features when deciding. You overlook and excuse bias in your decisions because you lack access to the associative machinery that drives your intuitive judgments, where bias often plays out. You might think, “I didn’t think about their race or gender when I hired them. I hired them on merit alone.”

When you examine other people’s decisions for bias, you lack access to the processes they used to make those decisions. So you examine the outcomes of their decisions, where bias is evident and harder to excuse. You might see, for example, that they hired only white men.

Algorithms remove the bias blind spot because you see algorithms more like you see other people than like you see yourself. Algorithms’ decision-making processes are a black box, similar to how other people’s thoughts are inaccessible to you.

Participants in our study who were most likely to demonstrate the bias blind spot were also most likely to see more bias in the algorithms’ decisions than in their own decisions.

People also externalize their biases onto algorithms. Seeing bias in algorithms is less threatening than seeing bias in yourself, even when the algorithms are trained on your choices. People blame the algorithms. Algorithms are trained on human decisions, yet people call the reflected bias “algorithmic bias.”

Corrective lens

Our experiments show that people are also more likely to correct their biases when those biases are reflected in algorithms. In a final experiment, we gave participants the chance to correct the ratings. We showed each participant their own ratings, which we attributed either to the participant or to an algorithm trained on their decisions.

Participants were more likely to correct the ratings when they were attributed to an algorithm because they believed those ratings were more biased. As a result, the final corrected ratings were less biased when they were attributed to an algorithm.

Algorithmic biases with pernicious effects have been well documented. Our findings show that algorithmic bias can also be leveraged for good. The first step in correcting bias is to recognize its influence and direction. Like mirrors that reveal our biases, algorithms can improve our decision-making.

This article was republished from The Conversation, an independent, nonprofit news organization bringing you trusted facts and analysis to help you make sense of our complex world. It was written by: Carey K. Morewedge, Boston University.

Carey K. Morewedge does not work for, consult with, own shares in, or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
