
All AI is Biased

Here is What You Should Do About It

By Greg Nudelman, UXforAI.com


Artificial Intelligence (AI) is everywhere—from the playlist suggestions you get on music apps to the automated resume-screening tools employers use to shortlist candidates. We have become so accustomed to AI-driven experiences that it is easy to forget something crucial: every AI system is biased. It is not that developers are evil masterminds inserting prejudices on purpose. Instead, AI is built on data generated by humans, who all have conscious and unconscious biases. When we feed AI systems flawed or limited data, guess what? The output is inevitably skewed.

Bias in AI: A Quick Refresher

To understand AI bias, it helps to know how AI systems learn. Typically, these systems use algorithms trained on large amounts of data. This data can be text, images, audio—anything digitized. The catch? The data reflects the world it was taken from, warts and all. If a dataset has more male speakers than female speakers, a voice-recognition AI might better understand men’s voices. If an image dataset for “CEO” only includes pictures of white men, the system might internally start to equate leadership with whiteness and maleness. This phenomenon is often referred to as “white male default,” but it extends to every axis of identity—gender, race, age, ability, and more [1].
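To make the mechanism concrete, here is a minimal sketch (all numbers invented) of how underrepresentation degrades performance for a group: a toy "voice activity" check learns what typical speech pitch looks like from a 90/10 male-skewed training sample, then rejects anything outside that learned range.

```python
import statistics

# Toy illustration (numbers invented): a "voice activity" model that learns
# what typical speech pitch looks like from its training data.
train_pitches_hz = [120] * 90 + [210] * 10  # 90% male-range, 10% female-range samples

mean = statistics.mean(train_pitches_hz)   # 129.0
std = statistics.pstdev(train_pitches_hz)  # 27.0

def looks_like_speech(pitch_hz, k=2.0):
    """Accept a pitch only if it falls within k standard deviations of the training mean."""
    return abs(pitch_hz - mean) <= k * std

print(looks_like_speech(118))  # male-range voice: True
print(looks_like_speech(205))  # female-range voice: False (underrepresented in training)
```

Nobody wrote "reject women's voices" anywhere in that code; the skew falls out of the training counts alone, which is exactly why this kind of bias is easy to ship by accident.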

But the bias problem does not end at “white male default.” AI can encode all manner of subtle or blatant prejudices. It might overlook particular dialects, exclude certain body types, or misinterpret cultural references. Worse, its predictions or recommendations might reinforce stereotypes or systematically disadvantage people, whether in hiring, healthcare, criminal sentencing, or advertising.

Meet the AI “Bullshit Generator”

Here is where it gets a bit wild: large language models (think ChatGPT or other text-generation tools) can generate anything on command, whether factually correct or complete nonsense. They are not intentionally lying; they are simply producing the text that seems statistically likely to be the “best answer” based on their training data. That means if you ask one for conspiracy theories, you will get a wave of misinformation. If your prompt unintentionally implies that certain groups of people are inferior, the AI might regurgitate or even invent “evidence” to support that bias.

This is why AI is increasingly described as a “bullshit generator.” [2] By default, its output sounds convincing, and it can perpetuate biases just as convincingly, because its goal is to produce content that appears coherent and logical. AI systems do not understand context and ethics the way humans do. They are automated mimicry machines that can seamlessly replicate biases if we are not vigilant. This puts a massive responsibility on us, the users, to guide, check, and correct them.
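The core mechanic is easy to demonstrate. The sketch below trains a toy greedy trigram model on a tiny invented corpus; real LLMs are vastly more sophisticated, but the principle is the same: emit the statistically most common continuation, biases included.

```python
from collections import Counter, defaultdict

# Tiny invented corpus in which "nurse" is always followed by "she"
# and "engineer" by "he" -- the model below simply learns those counts.
corpus = ("the nurse said she was tired . the nurse said she was busy . "
          "the engineer said he was tired . the engineer said he was busy .").split()

# Count which word most often follows each pair of words.
nxt = defaultdict(Counter)
for w1, w2, w3 in zip(corpus, corpus[1:], corpus[2:]):
    nxt[(w1, w2)][w3] += 1

def continue_text(w1, w2):
    """Greedy trigram 'language model': emit the most frequent continuation."""
    return nxt[(w1, w2)].most_common(1)[0][0]

print(continue_text("nurse", "said"))     # she
print(continue_text("engineer", "said"))  # he
```

The model has no opinion about nurses or engineers; it is purely reflecting the statistics of its corpus, which is what makes the output feel so confident while carrying the corpus's biases along for the ride.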

Why Does It Matter?

You might be thinking: “Sure, AI is biased. So what? The world itself is not equal, right?” But ignoring these biases is dangerous for a few reasons:

  • Reinforced Inequalities: AI can replicate or even magnify existing disparities. For instance, a hiring algorithm might systematically favor male candidates if its training data comes from historically male-dominated industries.
  • Missed Perspectives: AI is increasingly used to make crucial decisions, such as recommending who qualifies for loans, allocating police resources, and suggesting medical treatments. Biased outputs can lead to real harm, especially for marginalized communities.
  • False Credibility: AI can present itself as objective and data-driven. There is a significant risk that people will treat AI outputs as facts when, in reality, they might be a product of flawed assumptions.
  • Ethical and Legal Implications: Companies can face lawsuits or reputational damage if their AI systems discriminate. On a personal level, if you are involved in developing or implementing AI in the future, you could be part of either propagating or preventing these issues.
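One practical way to quantify the hiring example above is the “four-fifths rule” used in US employment-discrimination screening: compare selection rates between groups and flag ratios below 0.8. A minimal sketch with invented audit numbers:

```python
def selection_rate(selected, applicants):
    """Fraction of a group's applicants who advanced."""
    return selected / applicants

def disparate_impact_ratio(rate_group, rate_reference):
    """Ratio of selection rates; values below 0.8 fail the 'four-fifths' screen."""
    return rate_group / rate_reference

# Invented audit numbers: 50 of 100 male applicants advanced, 20 of 100 female.
male_rate = selection_rate(50, 100)     # 0.5
female_rate = selection_rate(20, 100)   # 0.2
ratio = disparate_impact_ratio(female_rate, male_rate)

print(f"impact ratio: {ratio:.2f}")  # 0.40 -- well below the 0.8 threshold
```

A check this simple will not prove fairness, but it is cheap enough to run on every model a hiring pipeline ships, and a failing ratio is a strong signal that the training data needs scrutiny.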

Recognizing the Bias

When you interact with AI—whether it is a chatbot or a facial-recognition system—here are some signs that you might be dealing with biased or unreliable output:

  • Stereotypical or One-Dimensional Results: If you notice the system repeatedly associating specific jobs with men, certain social roles with women, or ignoring certain racial groups, there is a strong hint of bias in the training data.
  • Confident but Wrong Answers: An AI might answer with such detail and fluency that you assume it is correct. When you dig deeper, you find the answer is riddled with inaccuracies.
  • Lack of Source Transparency: If the AI does not (or cannot) show you how it arrived at its conclusion, it is much harder to spot misinformation and bias. This often goes hand-in-hand with “bullshitting”—the AI might just cobble together a mix of plausible-sounding statements.
  • Hasty Judgments on Complex Topics: Some systems spit out oversimplified opinions on multifaceted issues, ignoring nuance or context.

What You Can Do About It

Stay Critical and Curious:

  • Ask Questions: Whenever you get a recommendation or information from an AI, ask yourself where it might have come from. Does it align with credible sources? If the AI is summarizing research, check to see if you can find the original study or data.
  • Diversify Your Data Sources: Do not rely solely on one AI chatbot or product. Use multiple sources of information to triangulate the truth. AI is a great starting point, but it should not be the final word on anything crucial.

Demand Transparency:

  • Ask for Explainability: If an AI system impacts your life—say, if your college uses it to screen essays or your employer uses it to evaluate your performance—ask how it makes decisions. Request a clear explanation of the variables it uses and how it weighs them.
  • Push Institutions to Open the Data: Universities, companies, and government agencies using AI could better explain their data practices. Voice your concerns and advocate for more transparent policies.
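What might such an explanation look like? For a simple linear scoring model, it can be a per-variable breakdown of contributions. The sketch below is hypothetical (the model, weights, and feature names are invented); note how a proxy variable like neighborhood income can quietly encode bias even when protected attributes are excluded.

```python
# Hypothetical linear screening model: score = sum(weight * feature value).
# "zip_code_income" is a classic proxy variable worth questioning in any audit.
weights = {"years_experience": 0.6, "gpa": 0.3, "zip_code_income": 0.4}

def explain(features):
    """Return each variable's contribution to the score -- the kind of
    breakdown you can reasonably ask an institution to provide."""
    contributions = {name: weights[name] * value for name, value in features.items()}
    return contributions, sum(contributions.values())

contribs, score = explain({"years_experience": 5, "gpa": 3.5, "zip_code_income": 2.0})
print(contribs)          # per-variable contributions
print(round(score, 2))   # 4.85
```

Real deployed models are rarely this transparent, which is precisely the point: if an institution cannot produce even this level of breakdown, that itself is worth flagging.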

Champion Inclusive Data:

  • Call Out Narrow Datasets: When you spot obvious gaps—like an image recognition tool that fails on darker skin tones—use your voice to alert developers or your institution’s IT department.
  • Contribute to More Representative Projects: If you have the chance, work on AI projects that actively seek out diverse data sources. Remember that inclusivity is not a one-off box to check; it is an ongoing process.

Collaborate with Diverse Teams:

  • Involve Stakeholders: AI is often built by relatively homogenous teams. Advocate for collaboration with people from different backgrounds, genders, races, and disciplines. This diversity can help spot blind spots that a single group might miss.
  • Listen to User Feedback: Sometimes, the best insights on AI bias come from the people most impacted by it. Whether on a student committee or part of a campus tech project, ensure you are actively seeking feedback from peers across the spectrum.

Develop Ethical Guidelines and Best Practices:

  • Create Checklists: If you are involved in coding or product design, institute an ethics or bias checklist. AI Humanifesto is an excellent resource to get you going in the right direction [3].
  • Set Boundaries for AI Use: Consider when and where AI should not be used. Specific tasks require a human-led approach, especially when fairness and empathy are paramount.
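A checklist like this can even be enforced mechanically, for example as a simple pre-release gate. The items below are illustrative only, not a substitute for a fuller resource such as the AI Humanifesto:

```python
# A minimal, invented pre-release bias checklist; a real one is team-specific.
checklist = {
    "training data audited for representation gaps": True,
    "per-group error rates measured": True,
    "human review path for contested decisions": False,
    "documented limitations shared with users": True,
}

failed = [item for item, done in checklist.items() if not done]
if failed:
    print("Not ready to ship. Open items:", failed)
else:
    print("Checklist clear.")
```

Turning ethics items into a gate that blocks a release gives them the same weight as failing tests, which is often what it takes for them to get fixed.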

The Road Ahead

By recognizing AI’s biases, staying vigilant about the “bullshit” it can spin, and pushing for more inclusive design, you contribute to an AI landscape that is fairer, more honest, and more aligned with the needs of everyone. You might not be able to fix the world’s prejudices just by developing or using AI systems more carefully—but every small change helps. Every line of ethical code, every insistence on diverse datasets, and every moment of skepticism about AI’s “facts” pushes the entire field in the right direction.

Bias in AI is not going away overnight; it is a complex problem embedded in society’s historical imbalances, systemic inequalities, and flawed data collection processes. However, you have more power than you might realize. As students, researchers, and future tech leaders, you have both the power and responsibility to shape AI’s trajectory.

Do not underestimate the role you can play. Call out biases. Champion inclusive practices. Demand transparency. Because at the end of the day, the future of AI is not just about fancy algorithms; it is about the society we are building with them—and we all have a stake in making that society a place worth living in.

Greg Nudelman

About the author

Greg Nudelman is a UX Designer, Strategist, Speaker, and Author. For over 20 years, he has been helping Fortune 100 clients like Cisco, IBM, and Intuit create loyal customers and generate hundreds of millions of dollars in additional value. A veteran of 35 AI projects, Greg is currently a Distinguished Designer at Sumo Logic, creating innovative AI/ML solutions for Security, Network, and Cloud Monitoring. Greg has presented 120+ keynotes and workshops in 18 countries and has authored 5 UX books and 24 patents. His latest book, “UX for AI,” is coming in April 2025. More info.
