Monitored Automation

One of the things I touched on in my talk at WordCamp NYC in September was the fact that automation has its flaws. The problems we face with automated systems flagging LGBTQ videos as ‘restricted,’ trending horrific topics, or promulgating fake news all stem from automation. We did this to ourselves.

Computers Share Our Biases

Humans have biases, which color the ways in which we develop code. If we feel no one should be able to use a lowercase P in WordPress (because that’s its name), then we can use our biases to programmatically force that. Right or wrong, we are biased in many other ways.
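
WordPress core really does do this, via its capital_P_dangit filter. Here’s a minimal Python sketch of the same idea; the function below is mine for illustration, not the actual PHP implementation:

```python
import re

def capital_p_dangit(text: str) -> str:
    """Force the trademark capitalization of 'WordPress'.

    A human bias ("no one should write 'Wordpress'") encoded as a rule.
    The word boundaries keep us from mangling longer identifiers.
    """
    return re.sub(r"\bWordpress\b", "WordPress", text)

print(capital_p_dangit("I blog with Wordpress."))  # -> I blog with WordPress.
```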

The bias of robots is limited to what we’ve taught the robot. There’s no such thing as a true AI yet. Yes, I know the European Parliament has proposed treating robots as ‘electronic persons,’ responsible for their actions. In the case of automated reviews, the robot lacks the ability to detect nuance. It lacks ethics and morals. It cannot make a judgment about whether one thing is worth more than another.

The flip side to this is that humans do what we’re told to do too. Sort of. If I tell you to download a file, review the code for security and sanity, but not make a moral judgment on its use, you will and you won’t. Oh sure, you understand it’s not your job, or your responsibility, so you’ll try not to. We don’t ask the TSA to make a value judgment on what’s in our luggage; we ask them to determine whether it’s allowed.

Biases Drive Design

Computers, even the most advanced, make decisions based on how we program them. If we tell them “the word Nazi is bad,” then anything submitted containing that term will be rejected. Even a tool by, say, the Holocaust Museum, whose skill provides trivia about the Nazis’ rise to power.
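
A toy reviewer bot shows just how blunt that rule is. The terms and verdicts below are invented for illustration, not any real platform’s policy:

```python
BANNED_TERMS = {"nazi"}  # the rule we taught the robot

def auto_review(submission: str) -> str:
    """Reject anything containing a banned term, with zero regard for context."""
    text = submission.lower()
    if any(term in text for term in BANNED_TERMS):
        return "REJECTED"
    return "APPROVED"

# An educational skill gets the same verdict as actual hate speech.
print(auto_review("Trivia about the Nazis' rise to power"))  # -> REJECTED
```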

A computer has bias because we, the people who program the computers, have bias. We are capable of discussing the paradox of tolerance and, in many cases, of coming to an agreement as to what should and should not be permitted. It’s easy to say that guns are illegal on airplanes. It’s not easy to say that being mean will get you kicked off a plane, because that’s subjective.

Back to the TSA. If you brought a pair of fuzzy handcuffs in your carry-on, which is legal as of this writing, you would be permitted through, but someone would comment. You may even end up having a very public conversation about your private life. And yes, that TSA agent is totally making a value judgment about you.

Who Watches the WatchMon?

The answer, the solution to these problems, is difficult. On the one hand, a computer won’t judge you for the fuzzy handcuffs. On the other, it may decide you’re a criminal for having them in your luggage. A human would understand that a skill that talks about Nazis isn’t hate speech, but they may also judge you for having some Nazi swag in your bag for your demonstration class.

Curiously, the solution comes from a blend of automation and humanity. Some things should be auto-rejected: if a rule is based on a clear technical limitation, a computer can be relied on to enforce it. Everything else isn’t so clear-cut. For those cases we need a human to check, approve or reject, and move on to the next possibility. That’s right, the solution to the automation problem is human monitoring.
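
As a sketch of what that blend might look like (the size limit and queue here are invented for illustration):

```python
from dataclasses import dataclass, field

MAX_UPLOAD_BYTES = 50 * 1024 * 1024  # hypothetical cap: a clear technical limitation

@dataclass
class ReviewPipeline:
    pending: list[str] = field(default_factory=list)

    def triage(self, name: str, size_bytes: int) -> str:
        # Clear-cut technical rules: safe to let the computer decide outright.
        if size_bytes > MAX_UPLOAD_BYTES:
            return "AUTO-REJECTED: exceeds size limit"
        # Anything subjective waits for a person.
        self.pending.append(name)
        return "QUEUED for human review"

pipeline = ReviewPipeline()
print(pipeline.triage("holocaust-museum-trivia", size_bytes=2048))
# -> QUEUED for human review
```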

Human Bias Isn’t Solvable

We can’t stop people from being biased.

We can meet regularly to discuss the situation, but something needs to go over all the approvals and rejections to see what people are actually doing. So then we have automation monitor the human in reverse. A computer monitors and makes its call: “It looks like X is trending.” A human checks the trends and, if they notice something abnormal (like Betty White trending on a Tuesday afternoon), they can check whether she’s done something.

If a human manually removes Betty White from trending every time it shows up, a computer can flag that for their supervisor to ask why the hate for The Golden Girls. But this means someone has to sit down and talk about removing one’s personal biases from work, and I promise you, it’s harder than it looks.
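
A sketch of that reverse loop, with an invented threshold; the point is only the shape of it: the computer counts human overrides and escalates a pattern to a supervisor:

```python
from collections import Counter

OVERRIDE_THRESHOLD = 3  # invented: this many repeats warrants a conversation

override_counts: Counter = Counter()

def record_override(moderator: str, topic: str) -> None:
    """Log a human removing a topic from trending, and flag repeat patterns."""
    override_counts[(moderator, topic)] += 1
    count = override_counts[(moderator, topic)]
    if count >= OVERRIDE_THRESHOLD:
        print(f"FLAG for supervisor: {moderator} removed '{topic}' {count} times")

for _ in range(3):
    record_override("pat", "Betty White")
# -> FLAG for supervisor: pat removed 'Betty White' 3 times
```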

Automate But Verify

The ultimate answer? If you’re not monitoring and verifying your automation, you’re doing it wrong. Much like we say ‘test your backups,’ you have to test everything you task a computer to automagically do.
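
What ‘test your backups’ looks like for automation, in a minimal sketch: regularly pull a sample of the robot’s decisions and put them in front of a person. The 10% rate below is an arbitrary choice for illustration.

```python
import random

def spot_check(decisions: list[tuple[str, str]],
               rate: float = 0.10) -> list[tuple[str, str]]:
    """Sample a fraction of automated (item, verdict) decisions for human audit."""
    sample_size = max(1, int(len(decisions) * rate))
    return random.sample(decisions, sample_size)

recent = [("skill-1", "APPROVED"), ("skill-2", "REJECTED"), ("skill-3", "REJECTED")]
print("Send to a human:", spot_check(recent))
```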

