
‘Whom Does This Serve?’ Dismantling Bias in Generative AI


In 1932, Romare Bearden could have become the first African American to integrate Major League Baseball by accepting a position as the Philadelphia Athletics’ star pitcher. But to do so, he would have had to pass as white.

One can imagine Bearden asking himself, “Who does this serve?” Rather than play along, Bearden quit baseball and became one of America’s most renowned and influential artists. Jackie Robinson would go on to break the color barrier in America’s pastime.

In discussions with peers, “Who does this serve?” is a constant question; the answer is often, “Clearly not us.” This “us vs. them” feeling isn’t new to the Black community when it comes to many facets of American life, from education and medicine to government programs and legislation.

Following the trend, much of generative AI has been created and fed data by “them.” Examples include facial recognition technology that can’t accurately recognize Black faces, chatbots recreating racial profiling, and social media AI tagging African American Vernacular English as hate speech.

Unfortunately, the people creating these tools aren’t asking questions of inclusion, and the technology gap is becoming difficult to close. Lack of representation in technology research and development, and lack of representation in the data used to train these artificial intelligences, perpetuate bias and leave important questions unasked.

To bridge the gap, there are clear moves the tech community should make to ensure we don’t follow the discriminatory patterns of our innovative predecessors.

Employ developers trained in equitable coding practices

We should revamp current AI algorithms, using fresh eyes trained in equitable coding practices. Investments in backfilling data and actively including diverse datasets to combat bias will further improve machine learning.
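As a minimal sketch of what “actively including diverse datasets” can mean in practice, the audit below counts how each demographic group is represented in a training set and flags groups that fall below a chosen share. The function name, group labels, and threshold are illustrative assumptions, not from the article — real audits use domain-appropriate categories and fairness metrics.

```python
from collections import Counter

def representation_report(labels, floor=0.2):
    """Compute each group's share of a dataset and flag any group whose
    share falls below `floor` (an illustrative threshold, not a standard)."""
    counts = Counter(labels)
    total = sum(counts.values())
    shares = {group: count / total for group, count in counts.items()}
    flagged = [group for group, share in shares.items() if share < floor]
    return shares, flagged

# Example: a toy training set that skews heavily toward one group.
sample = ["group_a"] * 70 + ["group_b"] * 20 + ["group_c"] * 10
shares, flagged = representation_report(sample)
print(shares)   # {'group_a': 0.7, 'group_b': 0.2, 'group_c': 0.1}
print(flagged)  # ['group_c']
```

A report like this is only a starting point: representation in the raw data doesn’t guarantee equitable model behavior, but it makes the skew visible so teams can decide what to backfill.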

However, development teams made up of the same people who created a biased system will re-dig the same hole. Employing thinkers who put diversity first in their code to improve existing models is a quicker path to rectifying inequitable AI algorithms.