Promoting Diversity as a Solution to Bias in Artificial Intelligence

Diverse perspectives are crucial to countering bias in AI, experts say, emphasizing the need to confront and dismantle prejudices encoded in emerging technology

Throughout his career in artificial intelligence, Calvin Lawrence has noticed an ongoing rarity: few of his colleagues look like him. In more than 25 years in computer engineering, he has encountered only a handful of fellow professionals who, like him, are Black.

Lawrence, author of "Hidden in White Sight," explores how artificial intelligence could rapidly reshape society, and the challenge of confronting and eliminating the biases encoded in the emerging technology that contribute to systemic racism.

AI often reflects the data it is built from, and that data can carry biases related to race, gender and other factors. In one recent example, a Black mother in Detroit sued the city after she was wrongly arrested while eight months pregnant because facial recognition technology incorrectly linked her to a crime. Detroit's police chief later attributed the mistake to "poor investigative work."

A study conducted in 2022 found that a robot trained with AI was inclined to associate Black men with criminal activity and women with homemaking. The researchers warned that such technology could reinforce harmful racist and misogynistic stereotypes. In New York City, the local health department has expanded a coalition challenging clinical algorithms that factor in race, citing outcomes that often harm people of color. According to the New York City Department of Health and Mental Hygiene, these algorithms have been found to overestimate the health of people of color, potentially delaying their treatment.

An OpenAI spokesperson told CNN that the company is committed to addressing bias and other risks in AI models such as ChatGPT. The spokesperson added that OpenAI is continuously working to reduce bias and mitigate harmful outcomes, and is committed to publishing research on those efforts for each new model it releases.

Lawrence emphasized the importance of including people of color in every aspect of AI development to ensure that their experiences are accurately represented. He noted the lack of diversity in the process and stressed the need to educate and involve individuals from underrepresented communities.

Increasing diversity

Research has revealed that the underrepresentation of diverse groups in the technology industry starts long before reaching college. According to a 2023 study by the Code.org Advocacy Coalition, high school students of color typically have limited opportunities to take essential computer science courses.

The study found that 89% of Asian students and 82% of White students had access to these courses, compared with 78% of Black and Hispanic students and 67% of Native American students. "These opportunities are not evenly distributed, and that is a problem," noted Andres Lombana-Bermudez, a faculty associate at the Harvard University Berkman Klein Center for Internet and Society.

That unequal access can also result in fewer people of color pursuing computer science and artificial intelligence at the college level, according to Lombana-Bermudez. According to the 2022 Computing Research Association's Taulbee Survey, over two-thirds of all doctorates in computer science, computer engineering, or information science in the United States were awarded to non-permanent U.S. residents for whom no ethnicity data is available.

White doctoral candidates received nearly 19% of degrees, while Asian candidates received 10.1%. In contrast, only 1.7% of degrees went to Hispanic graduates and 1.6% to Black graduates.

Lawrence stated that diversifying the field of artificial intelligence could lead to safer and more ethical technology.

Lawrence founded the nonprofit AI 4 Black Kids with the aim of educating Black children about artificial intelligence and machine learning from a young age in order to increase representation in the field. He stressed the importance of having more Black people involved in the process, citing the limited historical points of view on which AI is currently trained.

The nonprofit provides mentorship programs for children between the ages of 5 and 19, in addition to offering scholarships and college counseling, according to Lawrence.

Lombana-Bermudez stated that addressing bias in AI requires not only greater racial diversity but also diversity of thought. He suggested including sociologists, lawyers, political scientists, and other humanities-oriented academics in the discussion on AI and ethics.

Lombana-Bermudez expressed hope that future generations will be able to address the issues of bias and inaccessibility as they grow up with advancing technology. "I have faith that things will improve and that we will have superior technologies in the future," he remarked. "However, it's a challenging and intricate process."