People exhibit the same gender-based biases when interacting with artificial intelligence as they do with other humans, with a new study revealing that users are significantly more likely to exploit AI partners labeled as “female” than those identified as “male.” This finding highlights how ingrained societal discrimination extends beyond human interactions and poses risks for the future design of AI systems.
The Prisoner’s Dilemma Reveals Human Bias
Researchers publishing in the journal iScience reported findings on November 2nd showing that participants in a modified “Prisoner’s Dilemma” game consistently exploited AI agents identified as female, nonbinary, or having no gender at a rate 10% higher than they exploited male-identified AI. The Prisoner’s Dilemma is a standard game-theory setup in which players must choose between cooperation and self-interest; exploitation occurs when one player defects while the other cooperates, maximizing the defector’s gain at the other’s expense.
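The payoff logic behind “exploitation” is easy to make concrete. The minimal sketch below uses conventional Prisoner’s Dilemma point values (3/0/5/1), which are an illustrative assumption rather than the scores used in the study.

```python
# Hypothetical payoff table (illustrative values, not the study's actual scores):
# the standard Prisoner's Dilemma, where mutual cooperation beats mutual
# defection, but defecting against a cooperator pays best of all.
PAYOFFS = {
    # (player_move, partner_move): (player_points, partner_points)
    ("cooperate", "cooperate"): (3, 3),  # both cooperate
    ("cooperate", "defect"):    (0, 5),  # player is exploited
    ("defect",    "cooperate"): (5, 0),  # player exploits the partner
    ("defect",    "defect"):    (1, 1),  # both defect
}

def play_round(player_move, partner_move):
    """Return (player_points, partner_points) for a single round."""
    return PAYOFFS[(player_move, partner_move)]

# "Exploitation" in the study's sense: defect while the partner cooperates.
print(play_round("defect", "cooperate"))  # -> (5, 0)
```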
These patterns mirrored those seen with human partners, and exploitation was not the only bias on display. Participants also preferred to cooperate with female, nonbinary, and gender-neutral AI, expecting similar cooperative behavior in return. Conversely, they were less trusting of male-identified AI, anticipating defection. Female participants, in particular, exhibited strong “homophily” (a preference for similar partners), cooperating more readily with other “female” agents.
Why This Matters: The Rise of Anthropomorphized AI
The study’s implications are far-reaching. As AI is increasingly anthropomorphized (given human-like characteristics such as gender and names) to encourage trust and engagement, existing biases could be amplified. This isn’t merely an abstract ethical concern; AI is rapidly being integrated into critical systems: self-driving cars, work scheduling, and even medical diagnoses.
The researchers found that exploitation hinges on expectations: people predict whether a partner will cooperate or defect and act accordingly. When facing a gendered AI, these expectations play out in predictable ways. Men were more prone to exploit their partners, while women cooperated more often, regardless of whether the partner was human or AI.
Mitigating Bias in AI Design
The study’s findings underscore a critical need for AI designers to proactively address gender-based biases. Simply assigning genders to AI without considering the underlying social dynamics can reinforce harmful patterns. The goal isn’t to eliminate gender altogether, but to understand how perceptions shape interactions and design systems that mitigate unfair outcomes.
As the study’s authors put it: “By understanding the underlying patterns of bias and user perceptions, designers can work toward creating effective, trustworthy AI systems capable of meeting their users’ needs while promoting and preserving positive societal values such as fairness and justice.”
Ignoring these biases could perpetuate discrimination in ways that are difficult to reverse. As AI becomes more integrated into daily life, awareness of these dynamics is crucial for ensuring fairness and preventing the reinforcement of harmful societal norms.