danah boyd (yes, no capitals), who I
believe is one of the top two or three smartest people in the whole
world, has an amazing post up this week that really gave me pause.
Guilt Through Algorithmic Association
is a post very similar to the types of posts I write in that it brings
up a really complex problem with no solution. It asks questions
that have no good answers. It tickles your brain and ultimately leaves
you more depressed about where our technology and world are taking
us—if only because we haven’t yet thought through all of the
ramifications of our actions. But, man, does it make you think.
Dr. boyd is an expert in young people’s issues and has relatedly
become an expert in how young folks interact and live online. The
subjects of her research generally place her on the bleeding edge of
what’s next, what’s going to be a problem, and how things that are
seemingly good for society end up ostracizing the most vulnerable.
The Algorithm post is all about something similar to the old
“googlebombing” trick from
2004. You remember the “waffle” thing for John Kerry and “miserable
failure” thing for George W. Bush. To do that trick, a person had to
actually change a website (using search engine optimization
techniques) to influence the algorithm and produce the desired result.
An identifiable person or group did something. This problem is much
more subtle, and to the best of my understanding, a perpetrator-less
crime. Dr. boyd describes it thusly:
You’re a 16-year-old Muslim kid in America. Say your name is Mohammad Abdullah. Your schoolmates are convinced that you’re a terrorist. They keep typing in Google queries like “is Mohammad Abdullah a terrorist?” and “Mohammad Abdullah al Qaeda.” Google’s search engine learns. All of a sudden, auto-complete starts suggesting terms like “Al Qaeda” as the next term in relation to your name.
See the difference? This isn’t a person slandering you; they’re just
searching for you. The money line:
It’s one thing to be slandered by another person on a website, on a blog, in comments. It’s another to have your reputation slandered by computer algorithms. The algorithmic associations do reveal the attitudes and practices of people, but those people are invisible; all that’s visible is the product of the algorithm, without any context of how or why the search engine conveyed that information. What becomes visible is the data point of the algorithmic association. But what gets interpreted is the “fact” implied by said data point, and that gives an impression of guilt.
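The mechanism boyd describes can be illustrated with a toy model (a minimal sketch for illustration only, nothing like Google's actual system): suggestions are mined purely from what other people have typed, so the "slander" emerges from query statistics without anyone publishing a word. The class name and example queries here are hypothetical, drawn from boyd's scenario.

```python
from collections import Counter, defaultdict

class NaiveAutocomplete:
    """Toy query-completion model: suggestions come purely from
    what other people have searched, not from any published fact."""

    def __init__(self):
        # maps a typed prefix to a tally of full queries seen with it
        self.completions = defaultdict(Counter)

    def record_query(self, query):
        # index every prefix of the query so partial typing matches
        q = query.lower()
        for i in range(1, len(q)):
            self.completions[q[:i]][q] += 1

    def suggest(self, prefix, n=3):
        # the most frequent past queries win; nobody "wrote" the suggestion
        return [q for q, _ in self.completions[prefix.lower()].most_common(n)]

engine = NaiveAutocomplete()
# a handful of searchers, none of whom posted anything anywhere
for _ in range(5):
    engine.record_query("mohammad abdullah al qaeda")
for _ in range(2):
    engine.record_query("mohammad abdullah soccer team")

print(engine.suggest("mohammad abdullah"))
```

Five curious classmates searching is enough to make the damaging completion rank first for everyone who types the name afterward, which is exactly the perpetrator-less quality the post is worried about.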
I make no effort to minimize Dr. boyd’s horrific scenario (which she
says she’s heard real cases of), but worry about how something similar
could happen to us, to our agencies. Imagine you’re responding to some
emergency: a disease outbreak, an oil spill, a wildland fire. Some members of
the public, say locals, are for whatever reason unhappy about the
response. They think you’re only focused on remediation in a way that
benefits you (giving out vaccine, mass doses of dispersant, focusing on
rich neighborhoods) to the detriment of the general public.
Enterprising bloggers start searching online for proof of a
conspiracy. “Is the Mayor taking bribes?” “Does the vaccine cause
autism?” Not posting, just searching. As interest in the situation
grows, more and more people start to look for information online. They
head to their favorite search engine and type in your agency name, and
the auto-prompt suggests that you guys take bribes, give people
autism, and hate African-Americans. Even if you’re WAY ahead of the
situation and have materials designed to combat that way of thinking,
the first thing the public sees is your name tied to unsavory
practices. The frame of reference has already been set.
The tricky part is that no one is at fault. An algorithm associated
you with some level of guilt. No one did anything malicious or
untoward. How can you possibly fix that? Or even identify that it’s a
problem? Nothing actually changed or happened; it’s just suggested
that you might be taking bribes.
And as our society comes to depend more and more on machine-based
suggestions, this tone-deafness will only get worse.
Until sentience, of course. And then we’ve got bigger fish to fry.