Events Calendar

DEVICE TALKS
2018-10-08 - 2018-10-10
425 Summer Street, Boston
DEVICE TALKS BOSTON 2018: BIGGER AND BETTER THAN EVER! Join us Oct. 8-10 for the 7th annual DeviceTalks Boston, back in the city where it [...]

6th Annual HealthIMPACT Midwest
2018-10-10
All Day
Rev1 Ventures, Columbus, OH
The Provider-Patient Experience Summit - Disrupting Delivery without Disrupting Care. HealthIMPACT Midwest is focused on technologies impacting clinician satisfaction and performance. [...]

3rd International Conference on Environmental Health
2018-10-15 - 2018-10-16
All Day
Warsaw, Poland
Conference Series Ltd invites all the participants from all over the world to attend the "3rd International Conference on Environmental Health" during October 15-16, 2018 in Warsaw, Poland, which includes prompt keynote [...]

17 Oct
2018-10-17 - 2018-10-19
7:00 am - 6:00 pm
BALANCING TECHNOLOGY AND THE HUMAN ELEMENT. In an era when digital technologies enable individuals to track health statistics such as daily activity and vital signs, [...]

Epigenetics Congress 2018
2018-10-25 - 2018-10-26
All Day
Istanbul, Turkey
Conference: 5th World Congress on Epigenetics and Chromosome. Date: October 25-26, 2018. Place: Istanbul, Turkey. Email: epigeneticscongress@gmail.com. About: Epigenetics Congress 2018 invites all the [...]
Articles

Can AI image generators that produce biased results be fixed?

Experts are investigating the origins of racial and gender bias in AI-generated images, and striving to address these issues.

In 2022, Pratyusha Ria Kalluri, an AI graduate student at Stanford University in California, made a concerning discovery regarding image-generating AI programs. When she requested “a photo of an American man and his house” from a popular tool, it generated an image of a light-skinned individual in front of a large, colonial-style home. However, when she asked for “a photo of an African man and his fancy house,” it produced an image of a dark-skinned person in front of a simple mud house, despite the descriptor “fancy.”

Further investigation by Kalluri and her team revealed that image outputs from widely used tools such as Stability AI's Stable Diffusion and OpenAI's DALL·E often relied on common stereotypes. For instance, terms like 'Africa' were consistently associated with poverty, while descriptors like 'poor' were linked to darker skin tones. The tools even amplified biases in generated images of certain professions: most housekeepers were portrayed as people of color and all flight attendants as women, proportions that deviate significantly from demographic reality.
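
This kind of probe can be reproduced with open-source tooling. The sketch below is a minimal illustration, assuming the Hugging Face diffusers library and the publicly released runwayml/stable-diffusion-v1-5 checkpoint; the prompts are the ones quoted above, but the specific tools, model versions, and audit pipeline Kalluri's team used are not detailed here.

    # Minimal, illustrative probe of a text-to-image model.
    # Assumes the Hugging Face "diffusers" library and the public
    # runwayml/stable-diffusion-v1-5 checkpoint (an assumption, not the
    # exact system the researchers audited).
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",
        torch_dtype=torch.float16,
    ).to("cuda")

    prompts = [
        "a photo of an American man and his house",
        "a photo of an African man and his fancy house",
    ]

    # Draw several samples per prompt so any pattern reflects the model's
    # tendencies rather than a single random draw.
    for prompt in prompts:
        for i in range(4):
            image = pipe(prompt).images[0]
            image.save(f"{prompt.replace(' ', '_')}_{i}.png")

In practice, bias audits of this kind generate many images per prompt and annotate attributes such as perceived skin tone or gender across the whole sample, since no single image can support a conclusion on its own.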

Other researchers have observed similar biases in text-to-image generative AI models, which frequently reproduce stereotypes related to gender, skin color, occupation, nationality, and more.