Monday, July 20, 2015

The AI Will Be Racist

Help, my algorithm is a racist
The other week I found out that my algorithm is a racist.

Don’t get me wrong, it wasn’t birthed this way. In fact, we can be sure that in this case the racism is a product of nurture, not nature. You see, I was running two creative sets. Both were pictures of children, their mere image beckoning the web browser to click on them. Click on them people did. The problem is that, over time, they clicked on one creative more than the other, and when they converted on the landing page, they converted on that same creative with higher frequency. Doing what it was designed to do, my algorithm jumped in, optimizing the campaign to the better-performing creative: the one with the white child, not the black child.
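
For the curious, here is a minimal sketch of the kind of optimizer logic at work. This is an epsilon-greedy bandit in Python; the names and numbers are illustrative, not my actual system:

```python
import random

# Illustrative only: two creative "arms" and their observed stats.
creatives = {
    "creative_white_child": {"shown": 0, "converted": 0},
    "creative_black_child": {"shown": 0, "converted": 0},
}

def pick_creative(epsilon=0.1):
    """Epsilon-greedy: mostly exploit the current winner, explore a little."""
    if random.random() < epsilon:
        return random.choice(list(creatives))
    # Exploit: the creative with the best observed conversion rate wins.
    return max(
        creatives,
        key=lambda k: creatives[k]["converted"] / max(creatives[k]["shown"], 1),
    )

def record_result(name, converted):
    """Feed an impression and its outcome back into the stats."""
    creatives[name]["shown"] += 1
    creatives[name]["converted"] += int(converted)

# If the audience converts on one creative even slightly more often, the
# optimizer soon shows it almost exclusively. The "racism" is the audience's
# click behavior, faithfully amplified.
```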

An awkward moment arose. What do we do? After all, this is a results business, and the Caucasian creative was bringing in the goods. Still, something didn’t feel quite right. It also made me wonder: are we racist? Had our racism poisoned my algorithm and turned it into a monster?
Can Big Data Be Racist?
Take, for example, Harvard professor Latanya Sweeney’s discovery that searches for racially associated names disproportionately triggered targeted ads for criminal background checks and arrest records. The algorithm was both exposing racial bias (the offensive ads were more likely to reappear if people continued to click on them) and exacerbating it (the more people saw ads that suggested black names were connected to criminal activity, the more existing racial prejudice was reinforced).

Imagine now a Big Data algorithm that systematically denies credit to people of color based on their Facebook likes, a practice that several credit rating agencies are beginning to embrace. The predictive model suggesting that minority populations are a higher credit risk could just be a reflection of the bias in our society, as was the case with St. George’s (the London medical school whose 1980s admissions program, trained on past decisions, was found to discriminate against women and applicants with non-European names) and with Sweeney’s study. But as marketers keep looking for shorthand ways to identify population segments, data about black customers is being put to purposes as benign as selling basketballs and Justin Timberlake CDs, or as nefarious as denying access to credit or to basic civil rights and services.
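
To see how a model can absorb that bias without ever being told anyone’s race, here is a hypothetical sketch using synthetic data and scikit-learn. Every feature, number, and name below is invented for illustration; nothing here comes from any actual credit agency:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)        # 1 = minority group; never shown to the model
income = rng.normal(50, 10, n)       # a legitimate credit signal
# A Facebook like that happens to correlate with group membership.
proxy_like = (group + rng.normal(0, 0.3, n) > 0.5).astype(float)

# Historical labels carry bias: at the same income level, the minority
# group was denied more often.
denied = ((income < 45) | ((group == 1) & (rng.random(n) < 0.3))).astype(int)

X = np.column_stack([income, proxy_like])
model = LogisticRegression().fit(X, denied)
print(dict(zip(["income", "proxy_like"], model.coef_[0])))
# The proxy feature picks up a positive "deny" weight: the model has
# learned to penalize the like, and by extension the group behind it.
```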
The Code We Can’t Control
Philosophy professor Samir Chopra has discussed the dangers of such opaque programs in his book A Legal Theory for Autonomous Artificial Agents, stressing that their autonomy from even their own programmers may require them to be regulated as autonomous entities. Legal scholar Frank Pasquale, in The Black Box Society, stresses the need for an “intelligible society,” one in which we can understand how the inputs that go into these black box algorithms generate their effects. I’m inclined to believe it’s already too late—and that algorithms will increasingly have effects over which even the smartest engineers will have only coarse-grained and incomplete control. It is up to us to study the effects of those algorithms, whether they are racist, sexist, error-laden, or simply invasive, and take countermeasures to mitigate the damage. With more corporate and governmental transparency, clear and effective regulation, and a widespread awareness of the dangers and mistakes that are already occurring, we can wrest back some control of our data from the algorithms that none of us fully understands.
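
As a taste of what such countermeasures look like, here is one simple audit: a sketch of the “four-fifths rule” used in US employment-discrimination analysis, applied to a model’s decisions. The function, threshold, and data are illustrative only:

```python
# Audit a model's decisions with the "four-fifths rule": the favorable-
# outcome rate for a protected group should be at least 80% of the rate
# for the reference group.

def disparate_impact(decisions, groups, favorable=1, protected="B", reference="A"):
    """Ratio of favorable-outcome rates: protected group vs. reference group."""
    def rate(g):
        rows = [d for d, grp in zip(decisions, groups) if grp == g]
        return sum(1 for d in rows if d == favorable) / len(rows)
    return rate(protected) / rate(reference)

decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]   # 1 = credit approved
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
ratio = disparate_impact(decisions, groups)
print(f"impact ratio = {ratio:.2f}")          # below 0.80 is a conventional red flag
```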
When Algorithms Discriminate
Algorithms, the step-by-step instructions written by programmers, are often described as black boxes: it is hard to know why a website produces the results it does. Often, algorithms and online results simply reflect people’s attitudes and behavior. Machine learning algorithms learn and evolve based on what people do online. The autocomplete feature on Google and Bing is an example: a recent Google search for “Are transgender,” for instance, suggested, “Are transgenders going to hell.”
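
A toy version of that learning loop makes the point. Everything below is invented for illustration, and real autocomplete systems are far more elaborate, but the feedback principle is the same:

```python
from collections import Counter

# A toy autocomplete: it simply surfaces the completions users have
# typed most often, offensive or not.
query_log = [
    "are transgenders going to hell",
    "are transgender rights protected",
    "are transgenders going to hell",
]

def suggest(prefix, log, k=3):
    """Return the k most frequent logged queries that start with prefix."""
    completions = Counter(q for q in log if q.startswith(prefix))
    return [q for q, _ in completions.most_common(k)]

print(suggest("are transgender", query_log))
# The top suggestion is whatever people searched most; the model has no
# notion that a popular completion might also be a hateful one.
```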

“Even if they are not designed with the intent of discriminating against those groups, if they reproduce social preferences even in a completely rational way, they also reproduce those forms of discrimination,” said David Oppenheimer, who teaches discrimination law at the University of California, Berkeley.
