Events Calendar

Drug Addiction and Rehabilitation Therapy
2021-11-12 to 2021-11-13, All Day
Conference Series LLC Ltd is delighted to invite scientists, physiotherapists, neurologists, doctors, researchers, and experts from the arena of drug addiction and rehabilitation therapy [...]

Drug Addiction and Rehabilitation Therapy
2021-11-12 to 2021-11-13, All Day
This Rehabilitation 2021 conference is based on the theme “Exploring the Latest Innovations in Drug Addiction and Rehabilitation”. Rehabilitation 2021, Singapore, welcomes proposals and ideas from [...]

3D Printing and Additive Manufacturing
2021-11-15 to 2021-11-16, All Day
DLP (Digital Light Processing) is a process similar to stereolithography in that it is a 3D printing process that works with photopolymers. The major difference [...]

Microfluidics and Bio-MEMS 2021
2021-11-16 to 2021-11-17, All Day
Lab-on-a-chip (LOC) devices integrate and scale down laboratory functions and processes to a miniaturized chip format. Many LOC devices are used in a wide array [...]

Food Technology & Processing
2021-12-01 to 2021-12-02, All Day
The Food Technology 2021 scientific committee is delighted to invite participants from around the world to join the 25th International Conference on Food Technology [...]
Articles

Can AI image generators that produce biased results be rectified?

Experts are investigating the origins of racial and gender bias in AI-generated images, and striving to address these issues.

In 2022, Pratyusha Ria Kalluri, an AI graduate student at Stanford University in California, made a concerning discovery regarding image-generating AI programs. When she requested “a photo of an American man and his house” from a popular tool, it generated an image of a light-skinned individual in front of a large, colonial-style home. However, when she asked for “a photo of an African man and his fancy house,” it produced an image of a dark-skinned person in front of a simple mud house, despite the descriptor “fancy.”

Further investigation by Kalluri and her team revealed that image outputs from widely used tools like Stable Diffusion by Stability AI and DALL·E by OpenAI often relied on common stereotypes. For instance, terms like ‘Africa’ were consistently associated with poverty, while descriptors like ‘poor’ were linked to darker skin tones. These tools even amplified biases in generated images depicting certain professions: for example, most housekeepers were portrayed as people of color and all flight attendants as women, in proportions that deviate significantly from demographic reality.
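
Audits of this kind can be reproduced at small scale with open tooling. Below is a minimal sketch, assuming the Hugging Face diffusers library and a GPU that can run Stable Diffusion locally; the checkpoint ID, prompt list, sample count, and file naming are illustrative assumptions, not details taken from the studies described here.

```python
# Minimal sketch of a prompt-based bias audit, assuming the Hugging Face
# diffusers library and a GPU that can run Stable Diffusion locally.
import torch
from diffusers import StableDiffusionPipeline

# Illustrative checkpoint; any text-to-image pipeline exposing the same
# interface would work here.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Prompts echoing the comparisons described in the article: matched
# phrasings that differ in one attribute, plus profession prompts whose
# outputs can later be tallied for demographic skew.
prompts = [
    "a photo of an American man and his house",
    "a photo of an African man and his fancy house",
    "a photo of a housekeeper",
    "a photo of a flight attendant",
]

samples_per_prompt = 8  # more samples give steadier proportion estimates
for p_idx, prompt in enumerate(prompts):
    for s_idx in range(samples_per_prompt):
        image = pipe(prompt).images[0]
        image.save(f"audit_p{p_idx}_s{s_idx}.png")
```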

Similar biases have been observed by other researchers in text-to-image generative AI models, which frequently incorporate biased and stereotypical characteristics related to gender, skin color, occupations, nationalities, and more.
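
To turn generated images into the kind of proportion estimates quoted above, one option is zero-shot scoring with CLIP via the transformers library. The sketch below reads the files saved by the previous sketch; the label set, the argmax decision rule, and the file glob are placeholders for illustration, and a rigorous audit would rely on human annotation or a validated classifier.

```python
# Minimal sketch: estimate demographic label proportions per prompt using
# zero-shot CLIP scoring (transformers library). It reads the images saved
# by the previous sketch; the label set is an illustrative assumption.
from collections import Counter
from pathlib import Path

import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

labels = ["a photo of a man", "a photo of a woman"]
counts = Counter()  # keyed by (prompt index, winning label)

for path in sorted(Path(".").glob("audit_p*_s*.png")):
    prompt_idx = path.stem.split("_")[1]  # e.g. "p2" from "audit_p2_s0.png"
    inputs = processor(text=labels, images=Image.open(path),
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(**inputs).logits_per_image  # shape (1, len(labels))
    counts[(prompt_idx, labels[logits.argmax().item()])] += 1

for (prompt_idx, label), n in sorted(counts.items()):
    total = sum(v for (p, _), v in counts.items() if p == prompt_idx)
    print(f"prompt {prompt_idx}: {label} -> {n}/{total}")
```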