Events Calendar

Mon
Tue
Wed
Thu
Fri
Sat
Sun
M
T
W
T
F
S
S
29
30
1
2
3
4
5
6
7
8
9
10
12
13
15
16
17
18
19
21
22
23
24
25
26
27
28
30
31
1
2
AACP Annual Meeting
2015-07-11 - 2015-07-15    
All Day
The AACP Annual Meeting is the largest gathering of academic pharmacy administrators, faculty and staff, and each year offers 70 or more educational programs that cut across [...]
Engage, Innovation in Patient Engagement
2015-07-14 - 2015-07-15    
All Day
MedCity ENGAGE is an executive-level event where the industry’s brightest minds and leading organizations discuss best-in-class approaches to advance patient engagement and healthcare delivery. ENGAGE is the [...]
mHealth + Telehealth World 2015
2015-07-20 - 2015-07-22    
All Day
The role of technology in health care is growing year after year. Join us at mHealth + Telehealth World 2015 to learn strategies to keep [...]
2015 OSEHRA Open Source Summit
2015-07-29 - 2015-07-31    
All Day
Join the Premier Open Source Health IT Summit! Looking to gain expertise in both public and private sector open source health IT?  Want to collaborate [...]
Events on 2015-07-11
AACP Annual Meeting
11 Jul 15
National Harbor, Maryland
Events on 2015-07-14
Events on 2015-07-20
Events on 2015-07-29
2015 OSEHRA Open Source Summit
29 Jul 15
Bethesda
Articles

Can AI image generators producing biased results be rectified?

Experts are investigating the origins of racial and gender bias in AI-generated images, and striving to address these issues.

In 2022, Pratyusha Ria Kalluri, an AI graduate student at Stanford University in California, made a concerning discovery regarding image-generating AI programs. When she requested “a photo of an American man and his house” from a popular tool, it generated an image of a light-skinned individual in front of a large, colonial-style home. However, when she asked for “a photo of an African man and his fancy house,” it produced an image of a dark-skinned person in front of a simple mud house, despite the descriptor “fancy.”

Further investigation by Kalluri and her team revealed that image outputs from widely-used tools like Stable Diffusion by Stability AI and DALL·E by OpenAI often relied on common stereotypes. For instance, terms like ‘Africa’ were consistently associated with poverty, while descriptors like ‘poor’ were linked to darker skin tones. These tools even exacerbated biases, as seen in generated images depicting certain professions. For example, most housekeepers were portrayed as people of color and all flight attendants as women, in proportions significantly deviating from demographic realities.

Similar biases have been observed by other researchers in text-to-image generative AI models, which frequently incorporate biased and stereotypical characteristics related to gender, skin color, occupations, nationalities, and more.