Next Billion: CARE's Koheun Lee on how to build a women-centered design GPT

September 29, 2025

Koheun Lee, Human-Centered Program Manager for CARE’s Strive Women program, pens an insightful blog for Next Billion, a publication of the William Davidson Institute, discussing how ChatGPT and other large language models (LLMs) often reinforce societal bias and perpetuate harmful stereotypes. In the blog, she highlights lessons learned from a women-centered design experiment and explains how to mitigate bias in existing GPTs.

Some of the practical steps one can take to recognize and mitigate bias against women include the following (a short sketch of how these prompt patterns might be applied programmatically appears after the list):

  • Be explicit in your prompts: Clearly state your expectations for language, perspectives and representation that includes women and girls. Example: “Summarize the barriers women face in accessing digital financial services, and suggest solutions tailored to their lived experience.”
  • Push for other perspectives: Prompt AI to consider women and girls in specific scenarios. Example: “Analyze how social expectations might influence women’s uptake of mobile banking in South Asia.”
  • Audit and improve women and girls’ representation: Ask AI to identify and revise misleading or biased language in documents. Example: “Review this product brochure and suggest changes to ensure it uses balanced language.”
  • Request sources and verify information: Ask the AI to provide sources and independently verify them to ensure accuracy. Example: “Provide sources for your response.”
  • Reflect on your own perspective: Consider your assumptions and the language you use in prompts. Instead of: “Describe the decision-making process of a family when choosing a loan product,” try: “Describe the decision-making process for choosing a loan product in households where a woman is the primary financial decision maker. What unique factors might influence her choices?”
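
For teams that work with an LLM through an API rather than a chat window, the same prompt patterns can be encoded directly in a request. The sketch below uses the OpenAI Python SDK; the model name, system message, and exact wording are illustrative assumptions for this summary, not examples from Lee’s blog.

```python
# Minimal sketch: applying the prompt patterns above via the OpenAI Python SDK.
# The model name and the wording of the messages are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Be explicit about representation expectations up front (system message),
# then ask a gender-aware question and request sources for verification.
system_msg = (
    "When answering, explicitly consider women and girls, use balanced "
    "language, and flag assumptions that may reflect gender bias."
)

user_msg = (
    "Summarize the barriers women face in accessing digital financial "
    "services, suggest solutions tailored to their lived experience, "
    "and provide sources for your response."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; substitute whichever model you use
    messages=[
        {"role": "system", "content": system_msg},
        {"role": "user", "content": user_msg},
    ],
)

print(response.choices[0].message.content)
# Any sources the model cites should still be verified independently.
```

Placing the representation expectations in the system message means every subsequent question in the session inherits them, rather than relying on each individual prompt to restate them.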

Read the full blog here.
