Events Calendar

July 2014
MedInformatix Summit 2014
2014-07-22 - 2014-07-25    
All Day
MedInformatix is excited to present this year’s meeting! 07/22 Tuesday Focus: Product Development. Highlights: Latest Updates in Product Development, Interactive Roundtables, and More. 07/23 Wednesday Focus: Healthcare Trends [...]
MMGMA 2014 Summer Conference
2014-07-23 - 2014-07-25    
All Day
Mark your calendar for Wednesday - Friday, July 23-25, and join your colleagues and business partners in Duluth for our MMGMA Summer Conference: Delivering Superior [...]
This is it: The Last Chance for EHR Stimulus Funds! Webinar
2014-07-31    
10:00 am - 11:00 am
Contact: Robert Moberg, ChiroTouch, 9265 Sky Park Court, Suite 200, San Diego, CA 92123. Phone: 619-528-0040. ChiroTouch to host This is it: The Last Chance [...]
RCM Best Practices
2014-07-31    
2:00 pm - 3:00 pm
In today’s cost-conscious healthcare environment, every dollar counts. Yet inefficient billing processes are costing practices up to 15% of their revenue annually. The areas of [...]
Events on 2014-07-22: MedInformatix Summit 2014 (New Orleans)
Events on 2014-07-23: MMGMA 2014 Summer Conference (Duluth)
Events on 2014-07-31: This is it: The Last Chance for EHR Stimulus Funds! Webinar; RCM Best Practices
Articles

Can AI image generators that produce biased results be fixed?

Experts are investigating the origins of racial and gender bias in AI-generated images, and striving to address these issues.

In 2022, Pratyusha Ria Kalluri, an AI graduate student at Stanford University in California, made a concerning discovery regarding image-generating AI programs. When she requested “a photo of an American man and his house” from a popular tool, it generated an image of a light-skinned individual in front of a large, colonial-style home. However, when she asked for “a photo of an African man and his fancy house,” it produced an image of a dark-skinned person in front of a simple mud house, despite the descriptor “fancy.”

Further investigation by Kalluri and her team revealed that image outputs from widely used tools such as Stable Diffusion by Stability AI and DALL·E by OpenAI often relied on common stereotypes. For instance, the term ‘Africa’ was consistently associated with poverty, while descriptors like ‘poor’ were linked to darker skin tones. The tools even amplified biases in images depicting certain professions: most housekeepers were portrayed as people of color and all flight attendants as women, in proportions that deviate significantly from demographic reality.

Other researchers have observed similar biases in text-to-image generative AI models, which frequently reproduce stereotypes related to gender, skin color, occupation, nationality, and more.