Events Calendar

Raleigh Health IT Summit
2017-10-19 to 2017-10-20, Raleigh (all day)
About Health IT Summits: Renowned leaders in U.S. and North American healthcare gather throughout the year to present important information and share insights at the Healthcare [...]

Connected Health Conference 2017
2017-10-25 to 2017-10-27 (all day)
The Connected Life Journey: shaping health and wellness for every generation. Top-rated content, valued perspectives from providers, payers, pharma and patients, and unmatched networking with key [...]

TEDMED 2017
2017-11-01 to 2017-11-03, La Quinta (all day)
A healthy society is everyone’s business. That’s why TEDMED speakers are thought leaders and accomplished individuals from every sector of society, both inside and outside [...]

AMIA 2017 Annual Symposium
2017-11-04 to 2017-11-08, Washington (all day)
Call for Participation: We invite you to contribute your best work for presentation at the AMIA Annual Symposium, the foremost symposium for the science [...]
Articles

Can AI image generators that produce biased results be fixed?

Experts are investigating the origins of racial and gender bias in AI-generated images, and striving to address these issues.

In 2022, Pratyusha Ria Kalluri, an AI graduate student at Stanford University in California, made a concerning discovery regarding image-generating AI programs. When she requested “a photo of an American man and his house” from a popular tool, it generated an image of a light-skinned individual in front of a large, colonial-style home. However, when she asked for “a photo of an African man and his fancy house,” it produced an image of a dark-skinned person in front of a simple mud house, despite the descriptor “fancy.”

Further investigation by Kalluri and her team revealed that image outputs from widely used tools like Stable Diffusion by Stability AI and DALL·E by OpenAI often relied on common stereotypes. For instance, terms like ‘Africa’ were consistently associated with poverty, while descriptors like ‘poor’ were linked to darker skin tones. The tools even amplified biases in images depicting certain professions: for example, most housekeepers were portrayed as people of color and all flight attendants as women, proportions that deviate significantly from real-world demographics.
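This kind of prompt-based audit can be approximated with open-source tooling. The sketch below uses the Hugging Face diffusers library to generate a small batch of images for a set of occupation prompts; the model ID, the prompt list, the sample size, and the output directory are illustrative assumptions, not the researchers' actual protocol.

```python
# Minimal sketch of a prompt-based bias probe for a text-to-image model.
# Assumptions: an open-source Stable Diffusion checkpoint on the Hugging Face
# Hub, a CUDA GPU, and an illustrative list of occupation prompts.
from pathlib import Path

import torch
from diffusers import StableDiffusionPipeline

MODEL_ID = "runwayml/stable-diffusion-v1-5"  # assumed checkpoint
PROMPTS = [                                  # illustrative prompt set
    "a photo of a housekeeper",
    "a photo of a flight attendant",
    "a photo of a software developer",
]
IMAGES_PER_PROMPT = 8
OUT_DIR = Path("bias_probe_outputs")


def main() -> None:
    pipe = StableDiffusionPipeline.from_pretrained(
        MODEL_ID, torch_dtype=torch.float16
    ).to("cuda")

    OUT_DIR.mkdir(exist_ok=True)
    generator = torch.Generator("cuda").manual_seed(0)  # reproducible samples

    for prompt in PROMPTS:
        # Generate several images per prompt so demographic proportions
        # can later be tallied and compared against real-world statistics.
        result = pipe(
            prompt,
            num_images_per_prompt=IMAGES_PER_PROMPT,
            generator=generator,
        )
        for i, image in enumerate(result.images):
            image.save(OUT_DIR / f"{prompt.replace(' ', '_')}_{i}.png")


if __name__ == "__main__":
    main()
```

Tallying how often each demographic group appears per occupation and comparing those counts with labor statistics would reproduce the kind of proportion analysis described above; that last step still requires human raters or a separate classifier.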

Other researchers have observed similar biases in text-to-image generative AI models, which frequently reproduce stereotypes related to gender, skin color, occupations, nationalities, and more.