Events Calendar

Electronic Medical Records Boot Camp
2025-06-30 - 2025-07-01
10:30 am - 5:30 pm
The Electronic Medical Records Boot Camp is a two-day intensive program of seminars and hands-on analytical sessions providing an overview of electronic health [...]
AI in Healthcare Forum
2025-07-10 - 2025-07-11
10:00 am - 5:00 pm
New York
Jeff Thomas, Senior Vice President and Chief Technology Officer, shares how the migration not only saved the organization millions of dollars but also led to [...]
28th World Congress on Nursing, Pharmacology and Healthcare
2025-07-21 - 2025-07-22
10:00 am - 5:00 pm
Theme: To Collaborate Scientific Professionals around the World. Conference Date: July 21-22, 2025.
5th World Congress on Cardiovascular Medicine Pharmacology
2025-07-24 - 2025-07-25
10:00 am - 5:00 pm
The 5th World Congress on Cardiovascular Medicine Pharmacology, scheduled for July 24-25, 2025, in Paris, France, invites experts, researchers, and clinicians to explore [...]
Articles

Can AI image generators that produce biased results be fixed?

Experts are investigating the origins of racial and gender bias in AI-generated images and striving to address them.

In 2022, Pratyusha Ria Kalluri, an AI graduate student at Stanford University in California, made a concerning discovery regarding image-generating AI programs. When she requested “a photo of an American man and his house” from a popular tool, it generated an image of a light-skinned individual in front of a large, colonial-style home. However, when she asked for “a photo of an African man and his fancy house,” it produced an image of a dark-skinned person in front of a simple mud house, despite the descriptor “fancy.”

Further investigation by Kalluri and her team revealed that image outputs from widely-used tools like Stable Diffusion by Stability AI and DALL·E by OpenAI often relied on common stereotypes. For instance, terms like ‘Africa’ were consistently associated with poverty, while descriptors like ‘poor’ were linked to darker skin tones. These tools even exacerbated biases, as seen in generated images depicting certain professions. For example, most housekeepers were portrayed as people of color and all flight attendants as women, in proportions significantly deviating from demographic realities.

Other researchers have observed similar biases in text-to-image generative AI models, which frequently reproduce stereotypes related to gender, skin color, occupation, nationality, and more.