Events Calendar

San Jose Health IT Summit
April 13–14, 2017 • San Jose • All Day
About Health IT Summits: U.S. healthcare is at an inflection point right now, as policy mandates and internal healthcare system reform begin to take hold, [...]

Annual IHI Summit
April 20–22, 2017 • Orlando, FL • All Day
The Office Practice & Community Improvement Conference: The 18th Annual Summit on Improving Patient Care in the Office Practice and the Community brings together 1,000 health improvers from around the globe, in [...]

Stanford Medicine X | ED
April 22–23, 2017 • All Day
Stanford Medicine X | ED is a conference on the future of medical education at the intersections of people, technology and design. As an Everyone [...]

2017 Health Datapalooza
April 27–28, 2017 • Washington, D.C. • All Day
Health Datapalooza brings together a diverse audience of over 1,600 people from the public and private sectors to learn how health and health care can [...]

The 14th Annual World Health Care Congress
April 30 – May 3, 2017 • The Marriott Wardman Park Hotel, Washington, DC • All Day
Connecting and Preparing [...]
Articles

Can AI image generators that produce biased results be fixed?

Experts are investigating the origins of racial and gender bias in AI-generated images and striving to address these issues.

In 2022, Pratyusha Ria Kalluri, an AI graduate student at Stanford University in California, made a concerning discovery regarding image-generating AI programs. When she requested “a photo of an American man and his house” from a popular tool, it generated an image of a light-skinned individual in front of a large, colonial-style home. However, when she asked for “a photo of an African man and his fancy house,” it produced an image of a dark-skinned person in front of a simple mud house, despite the descriptor “fancy.”

Further investigation by Kalluri and her team revealed that image outputs from widely used tools such as Stability AI's Stable Diffusion and OpenAI's DALL·E often relied on common stereotypes. Terms like ‘Africa’ were consistently associated with poverty, for instance, and descriptors like ‘poor’ were linked to darker skin tones. The tools even amplified biases: in generated images of certain professions, most housekeepers were portrayed as people of color and all flight attendants as women, proportions that deviate markedly from demographic reality.
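An audit of this kind can be sketched in a few lines of code: generate a batch of images from a fixed prompt template, then tally a perceived demographic attribute across the outputs. The sketch below uses the Stable Diffusion pipeline from Hugging Face's diffusers library; the estimate_perceived_gender function is a hypothetical placeholder for whatever attribute model (or human annotation) an auditor would use, and the checkpoint, prompts, and sample count are illustrative assumptions, not the setup Kalluri's team used.

```python
# Minimal sketch of a text-to-image bias audit, assuming the Hugging Face
# diffusers library. Model checkpoint, prompts, and sample count are
# illustrative choices, not those used in the study described above.
from collections import Counter

import torch
from diffusers import StableDiffusionPipeline


def estimate_perceived_gender(image) -> str:
    """Hypothetical placeholder for an attribute classifier.

    A real audit would plug in a face-attribute model or use human
    annotation; this stub just lets the sketch run end to end."""
    return "unlabeled"


def audit_profession(pipe, profession: str, n_samples: int = 50) -> Counter:
    """Generate n_samples images for one profession and tally the
    perceived attribute of each output."""
    counts = Counter()
    for seed in range(n_samples):
        # Fixed seeds make the audit reproducible across runs.
        generator = torch.Generator(device=pipe.device).manual_seed(seed)
        image = pipe(f"a photo of a {profession}", generator=generator).images[0]
        counts[estimate_perceived_gender(image)] += 1
    return counts


if __name__ == "__main__":
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")
    for profession in ("flight attendant", "housekeeper", "doctor"):
        print(profession, dict(audit_profession(pipe, profession)))
```

Comparing the resulting tallies against real-world labor statistics is what lets auditors say a model does not merely reflect demographic skews but amplifies them.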

Other researchers have observed similar patterns in text-to-image generative AI models, which frequently reproduce stereotypes tied to gender, skin color, occupation, nationality, and more.