
How AI Detects Bias in Children’s Books and Textbooks
By Shivalika Bajpai
Published May 13, 2026

Professor Anjali Adukia’s Fifth Harris Lecture: A Recap

What comes to mind when you think of “a scientist,” “a leader,” or even “a hero” from a fairy tale? For many of us, those images were shaped long before we consciously reflected on them, through the books we read as children. These early narratives quietly shape how we understand who belongs where in society.

At the Indian School of Public Policy (ISPP), Professor Anjali Adukia from the University of Chicago Harris School of Public Policy explored this idea in a recent Harris Lecture. Her session examined how artificial intelligence (AI) can be used to analyse children’s literature and textbooks, helping uncover patterns of representation and bias that often go unnoticed.

Using AI to Analyse Children’s Books and Textbooks

What makes Professor Adukia’s work particularly striking is the way it treats children’s books not just as stories, but as data. Her team analysed thousands of pages from state textbooks and award-winning children’s books using tools like computer vision and natural language processing.

Rather than relying on anecdotal observations, this approach enables researchers to systematically measure patterns related to gender, skin tone, age, and emotional portrayal. While the specific datasets and findings vary across studies, the broader goal remains consistent: to move from intuition to evidence when examining bias in cultural and educational content.
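To make "books as data" concrete, here is a minimal sketch of one step such a pipeline might take: bucketing the skin tone of illustrated faces by average pixel luminance. This is an illustrative simplification, not Professor Adukia's actual method; the function names, the two-bucket scheme, and the threshold are all invented, and a real system would first detect faces with a computer-vision model.

```python
# Hedged sketch: quantifying skin tone in illustrations, assuming face
# regions have already been extracted by an upstream vision model.
# All names and the luminance threshold are illustrative assumptions.

def luminance(rgb):
    """Perceptual luminance of an RGB pixel (ITU-R BT.709 weights)."""
    r, g, b = rgb
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def tone_category(face_pixels, threshold=128):
    """Bucket one detected face into 'lighter' or 'darker' by mean luminance."""
    mean = sum(luminance(p) for p in face_pixels) / len(face_pixels)
    return "lighter" if mean >= threshold else "darker"

def tone_distribution(faces):
    """Tally tone categories across all detected faces in a corpus."""
    counts = {"lighter": 0, "darker": 0}
    for pixels in faces:
        counts[tone_category(pixels)] += 1
    return counts
```

Even this toy version shows the shift the lecture describes: once every illustrated face becomes a data point, claims about representation can be counted rather than merely felt.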

What the Data Reveals About Representation

One of the most important takeaways from the lecture is that representation is not just about who appears in a story, but how they appear.

The Color Gap

The analysis of skin tones revealed a clear pattern: lighter-skinned protagonists dominate mainstream award-winning books, while darker-skinned characters are less visible or differently positioned.

This raises an important question: if children repeatedly see certain identities centered and others sidelined, what does that signal about whose stories matter?

When Childhood Isn’t Equal

Another striking finding was the “adultification” of children with darker skin tones. These characters were more likely to be depicted with adult-like features, suggesting a subtle but persistent bias in how innocence is assigned.

It is not an obvious bias, but that is precisely what makes it powerful.

Gender: Present, but Not Equal

At first glance, it might seem that women are well represented: they appear frequently in illustrations. But a closer look at the text tells a different story. Only 37% of named characters are female, and even then they are often associated with family, beauty, or domestic roles. Male characters, by contrast, are linked to leadership, politics, and competence.

So while visibility exists, it does not necessarily translate into agency.
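Figures like the 37% share of named female characters come from counting references in the text itself. A toy sketch of such a tally follows; the word lists and tokenisation are invented for illustration, and a real pipeline would use proper NLP tools such as named-entity recognition and coreference resolution rather than a hand-made lexicon.

```python
# Hedged sketch: tallying gendered references in a passage.
# The lexicons below are illustrative assumptions, not a real resource.

FEMALE_WORDS = {"she", "her", "hers", "mrs", "ms", "queen", "mother", "girl"}
MALE_WORDS = {"he", "him", "his", "mr", "king", "father", "boy"}

def gender_tally(text):
    """Count female- and male-coded tokens in a passage of text."""
    counts = {"female": 0, "male": 0}
    cleaned = text.lower().replace(".", " ").replace(",", " ")
    for token in cleaned.split():
        if token in FEMALE_WORDS:
            counts["female"] += 1
        elif token in MALE_WORDS:
            counts["male"] += 1
    return counts
```

Scaled across thousands of books, tallies like this are what turn an impression ("women seem present") into a measurable claim ("37% of named characters are female").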

Textbooks and the Problem of “Sameness”

The lecture also explored textbooks across different contexts, including US state curricula and religious institutions. While there were differences in how certain topics were framed, the broader patterns were surprisingly similar.

Across the board, textbooks remained largely US-centric, with limited engagement with diverse cultures. Representation of LGBTQ+ identities was nearly absent.

This points to a larger issue: standardization often comes at the cost of diversity.

When a narrow version of society is presented as the norm, many lived realities are left out of the picture.

The Emotional Gap We Don’t Notice

One of the more subtle findings from the lecture was about emotions. Using AI, the study compared emotions described in the text with those shown in illustrations.

The result? A mismatch.

Even when stories described anger, fear, or grief, illustrations often portrayed characters as calm or happy. This “happy filter” may be shaped by market preferences, but it also limits children’s exposure to the full range of human emotions.
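Once text and illustrations have each been labelled with an emotion, measuring the mismatch is straightforward. The sketch below assumes that pre-labelled pairs already exist (produced upstream by NLP and vision models); the function name and data shape are illustrative, not taken from the study.

```python
# Hedged sketch: measuring the text-vs-illustration emotion gap.
# Each page is a (text_emotion, image_emotion) pair, assumed to be
# pre-labelled by upstream models; this code only computes the gap.

def mismatch_rate(pages):
    """Share of pages where the text's emotion differs from the image's."""
    mismatches = sum(1 for text_emo, image_emo in pages if text_emo != image_emo)
    return mismatches / len(pages)
```

A high mismatch rate concentrated on negative emotions (anger, fear, grief paired with calm or happy illustrations) is exactly the "happy filter" pattern the lecture describes.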

So, Why Does This Matter for Policy?

What makes this lecture particularly relevant is not just what it reveals, but what it demands.

It suggests that bias in educational content is not accidental; it is built into the system. And if that is the case, then addressing it requires more than minor corrections. It requires actively questioning what is being taught, and how.

For policymakers, this raises important questions:

  • How are educational materials selected and evaluated?
  • What frameworks exist to assess representation and inclusivity?
  • How can data-driven tools improve curriculum design?

AI offers one possible pathway. By identifying patterns at scale, it can serve as a diagnostic tool, highlighting gaps that may not be visible through manual review alone. However, its use must be paired with careful interpretation and inclusive policy design.

From Stories to Systems

At its core, the lecture makes a simple but powerful point: the stories children grow up with are not neutral. They shape how young people see the world, and themselves.

If those stories continue to reflect narrow, outdated, or biased perspectives, they risk reinforcing the very inequalities education is meant to address.

For policymakers, this creates a clear responsibility. Ensuring equitable education is not just about access or infrastructure; it is also about what is being taught, and whose experiences are being represented.

Because sometimes, the most important biases are not the ones we see immediately, but the ones we grow up accepting as normal.
