Raising Your AI to Be Ethical and Unbiased (You Know, Like Humans Should Be)

I tend to think of AI as a fresh start, as a way to explore the best parts of ourselves and to create algorithms that help advance humanity in positive ways. I realize that view is fraught with problems, however, ranging from questions of meaning—Who defines the “best” parts of ourselves? Who defines “positive ways”?—to questions of actual intent and impact. It would be foolish to assume that all scientists are trying to create “good” AI and not “evil,” or that even the best intentions don’t have a chance of turning into Skynet.

Another challenge, especially in personality AI (and associated “make it human” AI specialties), is whether one can make the AI too human. Recent research, such as Zhao et al., indicates that letting AI learning algorithms go to school by sifting through words and images on the Internet only teaches the AI to be as biased as humans are: to associate women with cooking and shopping, for example. Other research has found racial bias as well (see Thorpe for examples). This only perpetuates gender and racial stereotypes and bias—not something we would think of as humans’ best qualities.
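To make that concrete, here is a toy sketch of how such learned associations can be measured. This is my own illustration, not Zhao et al.’s actual methodology, and the embedding vectors are invented for demonstration; in a real model they would be learned from billions of words of text.

```python
# Toy illustration of bias measurement in word embeddings: compare how
# close "activity" words sit to "woman" vs. "man" vectors using cosine
# similarity. All vectors below are made up for demonstration.

import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical 3-d embeddings; a trained model would have learned these
# skews from co-occurrence patterns in its Internet training data.
embeddings = {
    "woman":   [0.9, 0.1, 0.3],
    "man":     [0.1, 0.9, 0.3],
    "cooking": [0.8, 0.2, 0.4],   # sits closer to "woman" -- learned bias
    "coding":  [0.2, 0.8, 0.4],   # sits closer to "man"
}

def gender_skew(word):
    """Positive means the word leans toward 'woman', negative toward 'man'."""
    return (cosine(embeddings[word], embeddings["woman"])
            - cosine(embeddings[word], embeddings["man"]))

for w in ("cooking", "coding"):
    print(f"{w}: skew = {gender_skew(w):+.2f}")
```

The point of the sketch: nobody programmed the stereotype in; it falls out of the geometry the model learned from us.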

So what’s the solution? To raise our AI the way we would hope to raise our children: unbiased and accepting of the full diversity of human beings, whether of gender, race, sexuality, or culture. How “raise our AI” works in the world of programming as opposed to the world of diapers is an open question; still, I think the similarities outweigh the differences. For instance, you would hope that your children would be exposed to many different kinds of people in their everyday lives, and that this would help them become less biased and more accepting of others. AI learning could occur the same way—the algorithms could be trained on diverse data that better represent human diversity. Or, perhaps, represent the best of human diversity. Ethically, that might be seen as manipulating data to get a desired result, which is true; in this case, however, we’re trying to educate our AI in a forward-thinking, what-we-wish-humans-themselves-to-be kind of way.
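One minimal version of “training on diverse data” is simply rebalancing the dataset so each group is equally represented before the model ever sees it. The sketch below is one hypothetical way to do that (oversampling with replacement); the records and group labels are invented for illustration.

```python
# A minimal sketch of dataset rebalancing: oversample underrepresented
# groups so every group appears equally often in the training set.

import random

def rebalance(records, group_key):
    """Return a copy of records where every group is equally represented."""
    groups = {}
    for rec in records:
        groups.setdefault(rec[group_key], []).append(rec)
    target = max(len(members) for members in groups.values())
    balanced = []
    for members in groups.values():
        balanced.extend(members)
        # Top up smaller groups by resampling with replacement.
        balanced.extend(random.choices(members, k=target - len(members)))
    return balanced

# Invented example: a skewed dataset, 2 records of one group vs. 8 of another.
data = [{"gender": "woman"}] * 2 + [{"gender": "man"}] * 8
balanced = rebalance(data, "gender")
counts = {g: sum(r["gender"] == g for r in balanced) for g in ("woman", "man")}
print(counts)  # each group now has the same number of records
```

This is the “manipulating data” I mean above: a deliberate choice about what the AI gets exposed to, much like choosing what your children get exposed to.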

Which, I admit, looks a bit like a do-gooder desire to play god. But would you rather we developed AI that was prejudiced? That represented the worst of our biases?

More articles on this:
Devlin, “AI programs exhibit racial and gender biases, research reveals”
Kleinman, “Artificial intelligence: How to avoid racist algorithms”
