AI Transparency Now 🗣️
New CR survey: Americans won’t trust AI tools to make decisions about them unless the data used is shared transparently and is auditable and correctable. They’re not wrong. (audio; 3:36)
What do Americans think about AI, and can AI tools be trusted to make decisions about us? Spoiler: we don’t trust them. Here’s a summary of one recent survey.
What People Think
A Consumer Reports survey of 2,022 U.S. adults, conducted in May 2024, shows that Americans care about transparency and explainable AI.
“A new nationally representative Consumer Reports survey explores Americans’ attitudes toward artificial intelligence (AI) and algorithmic decision-making. The survey found that a majority of Americans are uncomfortable about the use of AI and algorithmic decision-making technology around major life moments as it relates to housing, employment, and healthcare.”1
People are looking for companies to be far more transparent:
about the data used to train or score the AI model or algorithm, and
about a way to correct the data about them used in the decision, if it’s wrong.
“The vast majority of Consumer Reports’ respondents (83%) said they would want to know what information was used to instruct AI or a computer algorithm to make a decision about them. Another super-majority (91%) said they would want to have a way to correct the data where a computer algorithm was used.”2
Why They’re Not Wrong
Their concerns are well-founded. As this article on the CR survey pointed out:
“Given that AI frequently screws up, often in a discriminatory way, they're probably not wrong to be concerned.”3
Predictive policing4, the discriminatory example cited in this article, is just one cause for concern. Other risky areas for AI bias include credit decisions5 (e.g. on mortgages, loans, and rentals):
“The biggest-ever study of real people’s mortgage data shows that predictive tools used to approve or reject loans are less accurate for minorities.”
and healthcare6:
“… for this new class of tools to do more good than harm, Sendak believes the entire health care sector must address its underlying racial inequity. ‘You have to look in the mirror,’ he said. ‘It requires you to ask hard questions of yourself, of the people you work with, the organizations you’re a part of. Because if you’re actually looking for bias in algorithms, the root cause of a lot of the bias is inequities in care.’”
The risk of perpetuating existing inequities makes detecting and rooting out bias in AI and algorithms particularly important and challenging. (The case described in the above article, about detecting sepsis in Hispanic children, illustrates this.) But that’s no excuse for not trying — and we need to fix the existing inequities while we’re at it.
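To make “looking for bias in algorithms” a little more concrete, here is a minimal, purely illustrative sketch of one of the simplest checks an auditor might run: comparing an algorithm’s approval rates across groups and flagging a large gap using the “four-fifths rule” of thumb from US employment law. The data, group labels, and threshold below are hypothetical, not drawn from the survey or the studies cited here; real audits need richer fairness metrics, domain expertise, and legal review.

```python
# Minimal sketch of a group-fairness audit (hypothetical data).
from collections import defaultdict

def selection_rates(decisions):
    """Approval rate per group, from (group, approved) pairs."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += ok  # True counts as 1
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact(rates):
    """Ratio of the lowest to the highest group approval rate.

    The rough "four-fifths rule" flags ratios below 0.8 as
    warranting investigation; it is a screening heuristic,
    not proof of (or absence of) discrimination.
    """
    return min(rates.values()) / max(rates.values())

# Hypothetical loan decisions: (group, approved)
decisions = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]
rates = selection_rates(decisions)
print(rates)                    # {'A': 0.75, 'B': 0.25}
print(disparate_impact(rates))  # 0.33 -> well below 0.8, a red flag
```

Even this crude check only surfaces a disparity; it can’t say why it exists, which is exactly why the transparency and correctability that survey respondents asked for matter.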
That’s what I think. What do YOU think?
1. “Consumer Reports survey: Many Americans concerned about AI, algorithms”, Consumer Reports, 2024-07-25. (survey; PDF of full report)
2. “Americans Are Uncomfortable with Automated Decision-Making”, by Catalina Sanchez and Adam Schwartz / Electronic Frontier Foundation, 2024-09-03.
3. “Americans Absolutely Detest AI That Makes Decisions for Them: Who would possibly want to have AI making banking and rental decisions for them?”, by Noor Al-Sibai / Futurism.com, 2024-09-08.
4. “Predictive policing algorithms are racist. They need to be dismantled.”, by Will Douglas Heaven / MIT Technology Review, 2020-07-17.
5. “Bias isn’t the only problem with credit scores—and no, AI can’t help”, by Will Douglas Heaven / MIT Technology Review, 2021-06-17.
6. “AI in medicine needs to be carefully deployed to counter bias – and not entrench it”, by Ryan Levi and Dan Gorenstein / NPR, 2023-06-06.