Artificial intelligence is transforming the way we work and live, with advances disrupting industries far and wide. As tools like ChatGPT become more commonplace, technology expert Hassan Taher is asking major questions about how bias is shaping AI responses and the impact it could have on humans.
“Like any tool crafted by humans, AI is not without its flaws,” Taher wrote in a recent blog post. “One of the most concerning issues surrounding AI is the potential for these systems to harbor and propagate biases. Recent research suggests that not only can humans absorb these biases from AI, but they may also retain these biases even after discontinuing their use of the AI.”
The challenge, he noted, is that as AI apps absorb more information, they presumably become more accurate and better able to respond to questions. However, if the underlying datasets those tools draw on are themselves biased, the AI will merely propagate those biases. That can cause major problems, Taher explained.
“The potential long-term effects of AI biases on society are profound. If unchecked, we risk entering a vicious cycle where biased AI leads to more biased humans, who then create increasingly biased algorithms,” he wrote.
Hassan Taher believes that, if the problem goes unchecked, there could be “far-reaching implications,” especially in sectors that greatly impact our lives, such as health care, technology, and law enforcement.
Problematic Results Perpetuate AI Bias
There are frequent examples of AI providing inaccurate information in response to queries, including making up information and providing fake sources. The industry refers to these errors as “hallucinations,” but the impacts can be significant.
The problem is that humans are often unaware of what’s presented as factual and what’s a glitch in the AI matrix. These errors can have an outsized impact on marginalized groups.
A simple example is speech recognition software, which has regularly failed to recognize voices with non-American accents, causing problems for those who rely on in-home voice assistants.
There are far more damning, and damaging, potential impacts. For example, facial recognition software powered by AI has been shown to be racially biased against Black people, which could lead to more wrongful arrests.
As more medical practitioners use AI for diagnosis and treatment recommendations, biases are emerging. If an AI tool is trained only on data from a subset of the population, such as people of a particular age or race, the results could be deadly for anyone outside that group.
These biases come about in several ways. AI datasets built from news articles may reflect gender bias or skewed coverage of particular topics, depending on the publications used.
Data collection can also introduce bias, as in the gathering of criminal justice information. In some cases, certain neighborhoods may be oversampled, creating the appearance of higher crime rates in those areas. That can lead to problems in law enforcement coverage, insurance rates, and property values.
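To see how sampling alone can distort the picture, here is a minimal Python sketch using hypothetical numbers: two neighborhoods share the same underlying crime rate, but one is patrolled ten times as often, so its raw incident count looks roughly ten times worse.

```python
# Minimal sketch (hypothetical numbers) of how oversampling one neighborhood
# can inflate its apparent crime rate even when true rates are identical.
import random

random.seed(0)

TRUE_CRIME_RATE = 0.05  # same underlying rate in both neighborhoods

def simulate_reports(n_patrols: int) -> int:
    """Each patrol independently observes an incident with the true probability."""
    return sum(random.random() < TRUE_CRIME_RATE for _ in range(n_patrols))

# Neighborhood A is patrolled 10x as often as neighborhood B.
patrols = {"A": 10_000, "B": 1_000}
reports = {name: simulate_reports(n) for name, n in patrols.items()}

for name in patrols:
    print(f"Neighborhood {name}: {reports[name]} recorded incidents")

# Raw counts make A look ~10x "more dangerous," but normalizing by
# patrol volume recovers the identical underlying rate:
for name in patrols:
    print(f"  rate per patrol: {reports[name] / patrols[name]:.3f}")
```

An AI model trained on the raw counts, rather than the normalized rates, would absorb the sampling bias as if it were a fact about the neighborhoods.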
AI users themselves can contribute to the bias issue. If an AI tool relies on search results, its output can be skewed by the queries users submit as much as by the results themselves. Age, gender, and racial biases could easily arise as unintended consequences.
How To Solve the AI Bias Issue
One challenge in addressing this issue is that simply adding more or better data doesn’t necessarily fix the problem.
Even if the AI model itself improves with more and better information, the damage may already be done. And the people who are relying on AI to address complex issues may have themselves been biased by the AI that was influenced by human bias. A vicious cycle of bias persists.
However, that doesn’t mean we shouldn’t try. AI has tremendous potential to improve outcomes, simplify work and leisure, and save time and resources.
It also has an opportunity to solve complex problems quickly. Already, AI is being used to scan X-rays and MRI results, and can identify disease faster than humans.
Humans can improve AI datasets by being more diligent about the way they are created, launched, and used. This task won’t be easy and will require collaboration across disciplines and industries.
There’s also a danger that efforts to improve datasets could lead to rival AIs built around political or ideological differences as people debate the trustworthiness and accuracy of the underlying information.
A recent Harvard Business Review article suggests two approaches to solving the AI bias issue.
First, the authors believe we should lean into AI’s role in helping humans make decisions. Machine learning systems can weigh far more variables than humans can, and they can reveal whether the algorithms in use are inherently biased in ways that previously went unnoticed. There’s the potential, they argue, for AI to benefit disadvantaged groups.
The second approach is to address the challenge of bias in AI head-on. One core component is to better define and codify the notion of “fairness” in the models used.
For example, humans can require models to have equal predictive value among all represented groups. They could also require that the rates of false positives and false negatives be equal across groups.
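As a concrete illustration, here is a minimal Python sketch, using made-up predictions and group labels, of how those two fairness checks can be computed from a model’s outputs:

```python
# Minimal sketch, with made-up predictions and labels, of two common
# fairness checks: equal predictive value (precision parity) and equal
# false positive / false negative rates across groups (equalized odds).
from collections import defaultdict

def group_metrics(y_true, y_pred, groups):
    """Compute per-group precision, false positive rate, and false negative rate."""
    counts = defaultdict(lambda: {"tp": 0, "fp": 0, "fn": 0, "tn": 0})
    for t, p, g in zip(y_true, y_pred, groups):
        key = ("tp" if t else "fp") if p else ("fn" if t else "tn")
        counts[g][key] += 1
    metrics = {}
    for g, c in counts.items():
        metrics[g] = {
            "precision": c["tp"] / max(c["tp"] + c["fp"], 1),
            "fpr": c["fp"] / max(c["fp"] + c["tn"], 1),
            "fnr": c["fn"] / max(c["fn"] + c["tp"], 1),
        }
    return metrics

# Hypothetical model outputs for two demographic groups.
y_true = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
y_pred = [1, 0, 1, 0, 1, 0, 1, 0, 0, 1]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

for g, m in group_metrics(y_true, y_pred, groups).items():
    print(g, m)

# Large gaps between groups on any of these metrics would flag the
# model as failing the corresponding fairness definition.
```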
The definitions of fairness could change over time, meaning the models and the guidance would need to change as well.
For leaders, there are other critical steps to take, including:
- Staying current on the latest progress AI is making and how the field is evolving.
- Developing practices on the use of AI that focus on responsibility, ethics, and mitigating bias.
- Having more conversations about bias and running tests to uncover it, checking human and artificial decisions against each other and refining algorithms and usage accordingly.
- Thinking about how humans and AI will work together to reduce bias.
- Investing in more and better data, and using a multidisciplinary approach to research into biases (while protecting privacy).
- Investing in diversifying the AI field.
Hassan Taher’s Insights Into the Bias Issue
Hassan Taher — a writer, speaker, and AI consultant — has long been a visionary leader in artificial intelligence.
Taher graduated from the University of Texas at Dallas, where he studied computer science. He’s the author of three books on AI: The Rise of Intelligent Machines, AI and Ethics: Navigating the Moral Maze, and The Future of Work in an AI-Powered World.
Through his consulting firm, Taher AI Solutions, he works with corporations and governments on AI technologies, including image recognition software, chatbots, and machine learning algorithms.
Hassan Taher stresses that transparency is critical to eliminating AI bias. That means understanding what makes these AI tools work rather than implicitly trusting “black boxes.”
“As AI continues to play a more prominent role in our lives,” he wrote, “it’s crucial to approach its development and implementation with caution, ensuring that we don’t inadvertently perpetuate harmful biases.
“While AI offers immense potential benefits, it’s essential to be aware of its limitations and potential pitfalls. By understanding and addressing the biases in AI, we can harness its power responsibly and ensure a more equitable future for all.”