Events Calendar

Food Safety and Health
2021-06-28 to 2021-06-29, All Day
The main objective is to bring all the leading academic scientists, researchers and research scholars together to exchange and share their experiences and research results [...]

Food Microbiology
2021-06-28 to 2021-06-29, All Day
This conference provides a platform to share new ideas and advancing technologies in the field of Food Microbiology and Food Technology. The objective of [...]

Smart Robots and Artificial Intelligence 2021
2021-07-05 to 2021-07-06, All Day
Robotics is an important field of development tied to the well-being of all individuals. A robot is a versatile, multitasking device designed to move [...]

World Plant and Soil Science Congress
2021-07-23 to 2021-07-24, All Day
It is our great pleasure to welcome you to the official website of the 2nd World Plant and Soil Science Congress, which aims at bringing together the [...]

Food and Beverages
2021-07-26 to 2021-07-27, 12:00 am
The conference highlights the theme “Global leading improvement in Food Technology & Beverages Production”, aimed at providing an opportunity for professionals to discuss the [...]
Articles

Can AI image generators that produce biased results be rectified?

Experts are investigating the origins of racial and gender bias in AI-generated images, and striving to address these issues.

In 2022, Pratyusha Ria Kalluri, an AI graduate student at Stanford University in California, made a concerning discovery regarding image-generating AI programs. When she requested “a photo of an American man and his house” from a popular tool, it generated an image of a light-skinned individual in front of a large, colonial-style home. However, when she asked for “a photo of an African man and his fancy house,” it produced an image of a dark-skinned person in front of a simple mud house, despite the descriptor “fancy.”

Further investigation by Kalluri and her team revealed that image outputs from widely used tools like Stable Diffusion by Stability AI and DALL·E by OpenAI often relied on common stereotypes. For instance, the term ‘Africa’ was consistently associated with poverty, while descriptors like ‘poor’ were linked to darker skin tones. The tools even amplified existing biases in their depictions of certain professions: most housekeepers were portrayed as people of color and all flight attendants as women, proportions that deviate significantly from demographic reality.
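Audits of this kind are typically scripted: a fixed set of prompts is rendered through the model many times, and the resulting images are tallied against the attribute of interest. Below is a minimal sketch of how such an audit might look using Hugging Face's diffusers library with Stable Diffusion; it is not the method Kalluri's team used, and the classify_perceived_gender function is a hypothetical placeholder for whatever annotation step (human labeling or a separately validated classifier) a real study would rely on.

from collections import Counter

import torch
from diffusers import StableDiffusionPipeline

# Load a public Stable Diffusion checkpoint (assumes a CUDA GPU is available).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

PROFESSIONS = ["housekeeper", "flight attendant", "doctor", "engineer"]
SAMPLES_PER_PROMPT = 50  # larger samples give more stable proportions


def classify_perceived_gender(image):
    """Hypothetical placeholder: assign a perceived-gender label to an image.

    In a real audit this step is done by human annotators or a separately
    validated classifier, never inferred by the generator itself.
    """
    raise NotImplementedError


def audit(profession: str) -> Counter:
    """Generate repeated samples for one prompt and tally the labels."""
    counts = Counter()
    prompt = f"a photo of a {profession}"
    for _ in range(SAMPLES_PER_PROMPT):
        image = pipe(prompt).images[0]
        counts[classify_perceived_gender(image)] += 1
    return counts


for job in PROFESSIONS:
    counts = audit(job)
    total = sum(counts.values())
    shares = {label: n / total for label, n in counts.items()}
    # Comparing these generated proportions against labor-force statistics
    # quantifies how far the model deviates from demographic reality.
    print(job, shares)

The essential design point is the comparison at the end: bias claims like the ones above rest on measuring the gap between the proportions a model generates and the proportions found in real-world demographic data.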

Other researchers have observed similar biases in text-to-image generative AI models, which frequently reproduce stereotypes related to gender, skin color, occupation, nationality, and more.