
The paper, published in the journal Nature, finds that online images show a stronger gender bias than online text. The researchers also found that the bias is more psychologically powerful in visual form than in written form.

Photo credit: Solène Delecourt

A picture is worth a thousand words, as the saying goes, and research has shown that the human brain retains information better from pictures than from text.

These days, we consume more visual content than ever, using image-laden news sites and social media platforms. And much of that visual content, according to new Berkeley Haas research, is reinforcing powerful gender stereotypes.

Using experiments, observational data, and large language models, professors Douglas Guilbeault and Solène Delecourt found that gender associations in images retrieved from Google are much stronger than in text from Google News. Furthermore, while the text skews slightly toward men over women, that skew is roughly four times stronger in the images.

“Most of the previous research on bias on the Internet has focused on text, but now we have Google Images, TikTok, YouTube, Instagram — all kinds of content based on methods other than text,” says Delecourt. “Our research suggests that the extent of online bias is much wider than previously shown.”

The study shows not only that online gender bias is more prevalent in images than in text, but that such bias is psychologically more powerful in visual form. Strikingly, in one experiment, participants who viewed gender-biased images, as opposed to those who read gender-biased text, still showed significantly stronger biases three days later.

As online worlds become more and more visual, it’s important to understand the enormous power of images, says Guilbeault, lead author of the paper.

“We realized that this had implications for stereotyping—and no one had demonstrated this relationship before,” says Guilbeault. “Images are a particularly sticky way to communicate stereotypes.”

To measure gender bias in online images, Guilbeault and Delecourt worked closely with co-authors Tasker Hull of Psiphon, Inc., a software company that develops censorship-circumvention tools; Bhargav Srinivasa Desikan, a doctoral researcher at École Polytechnique Fédérale de Lausanne in Switzerland (now at IPPR in London); Mark Chu of Columbia University; and Ethan Nadler of the University of Southern California. Together, they developed a new set of techniques to compare bias in images versus text and to investigate its psychological effects in both media.

First, the researchers extracted 3,495 social categories, including occupations like “doctor” and “carpenter” as well as social roles like “friend” and “neighbor,” from WordNet, a large database of related words and concepts.

To calculate the gender balance of each category in images, the researchers retrieved the top hundred Google Images results for each category and recruited human coders to rate the gender of each face.
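The article doesn’t spell out the paper’s exact scoring formula; as a minimal sketch, assuming simple male/female face ratings, a category’s gender balance might be summarized like this (the function name and scale are illustrative, not the authors’ own):

```python
from collections import Counter

def gender_balance(face_labels):
    """Summarize a category's image set from rated face genders.

    face_labels: list of "male"/"female" ratings, one per face.
    Returns a score in [-1, 1]: -1 = all female, 0 = balanced, +1 = all male.
    """
    counts = Counter(face_labels)
    male, female = counts["male"], counts["female"]
    total = male + female
    if total == 0:
        return 0.0  # no rateable faces in this category's images
    return (male - female) / total

# e.g., if 70 of 100 rated faces in "doctor" images are judged male:
score = gender_balance(["male"] * 70 + ["female"] * 30)
# score = 0.4, i.e., skewed toward men
```

Averaging such scores across all 3,495 categories would then give an overall picture of how strongly the image medium skews toward one gender.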

Measuring gender bias in online text was a trickier proposition, but one well suited to rapidly advancing large language models, which the researchers used to tally how often each social category co-occurred with gendered references in Google News text.
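The article gives no detail on the language-model pipeline; as a rough illustration of the underlying co-occurrence idea only, one could count gendered words appearing near each category term. The corpus, word lists, and window size below are all hypothetical stand-ins:

```python
import re

# Hypothetical mini-corpus standing in for Google News text.
corpus = (
    "The doctor said he would review the results. "
    "Our neighbor said she was moving. "
    "The doctor and his colleague arrived late."
)

MALE = {"he", "his", "him", "man", "men"}
FEMALE = {"she", "her", "hers", "woman", "women"}

def text_gender_score(category, text, window=5):
    """Crude co-occurrence score: for each mention of `category`,
    count gendered words within `window` tokens on either side.
    Returns (male - female) / total, or 0.0 if no gendered context."""
    tokens = re.findall(r"[a-z']+", text.lower())
    male = female = 0
    for i, tok in enumerate(tokens):
        if tok == category:
            context = tokens[max(0, i - window):i + window + 1]
            male += sum(t in MALE for t in context)
            female += sum(t in FEMALE for t in context)
    total = male + female
    return (male - female) / total if total else 0.0
```

A modern language model replaces these hand-built word lists and windows with learned contextual associations, but the output is analogous: a per-category score that can be compared directly against the image-based one.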

The researchers’ analysis revealed that gender associations were more pronounced in the images than in the text. The images also skewed more heavily toward men than the text did.

The experimental phase of the study sought to illuminate the effects that biases in online images have on Internet users. The researchers asked 450 participants to use Google to find descriptions of occupations in science, technology, and the arts. One group used Google News to find and upload textual descriptions. Another group used Google Images to find and upload images of the professions. (A control group completed the same task with neutral categories such as “apple” and “guitar.”)

After selecting their text- or image-based descriptions, participants rated which gender they most associated with each occupation. They then completed a test in which they were asked to quickly sort different words into gender categories, and were tested again three days later.

Participants working with pictures showed stronger gender associations than those in the text and control conditions—even three days later.

“This is not only about the frequency of gender bias online,” says Guilbeault. “Part of the story here is that there’s something very sticky, very powerful about the representation of people in images that isn’t in text.”

Interestingly, when the researchers conducted their own online survey of public opinion, and when they examined occupational gender distribution data reported by the U.S. Bureau of Labor Statistics, they found gender gaps far smaller than those reflected in the Google images.

Delecourt and Guilbeault say they hope their findings will lead to a more serious approach to the challenges posed by bias embedded in online images. After all, it’s relatively easy to tweak text to be as neutral as possible, while pictures of people naturally convey race, gender, and other demographic information.

Guilbeault notes that other research shows a decline in gender bias in online text, but those findings may not tell the whole story. “We actually still see a very widespread gender bias in the images,” he says. “That might be because we haven’t really focused on images in this movement toward gender equality. But it might also be because it’s hard to do that in images.”

Guilbeault and Delecourt are already working on another project to assess gender-age bias online using many of the same techniques. “One of the reasons this paper is so exciting is that it opens the door to many, many other kinds of research — across age or race, or other methods like video,” says Delecourt.

Source: UC Berkeley


