I consider myself a mindful developer. I strongly feel that morals and ethics and intent are important. In order to continue making the world better, we have to think about the worst thing our code might be used for and prepare for that. This means I spend a lot of time thinking about usage and intent.

However, like anyone else, I have my blind spots. And it’s amusing to me that I stumbled right over one when it came to an automated submission check.

Automation

I made an Amazon Echo skill that tells you when the last queer female died on TV. In November, we noticed it had three negative reviews, all of which misunderstood the purpose of the application. They all thought we were pro-death. To me, that meant the skill's description and its output needed to be modified.

No big deal, right? I’ll just go in and make an edit and I’ll be done.

Wrong.

Your skill contains content that violates our content guidelines. You can find our content guidelines here.

Specifically, any promotion or praise of hate speech, inciting racial or gender hatred, or promotion of groups or organizations which support such beliefs, such as the Ku Klux Klan are not appropriate for the Alexa Skills catalog.

My change was rejected because it was hate speech.

No matter how I wrote the description or phrased the commands, it was getting flagged as hate speech automatically. That's right, automatically. There was no human to contact; in fact, the email said I had to fill in a contact form and maybe I'd get an answer.

Instead I went to my cohort in computing, who said she was certain it was the phrase “bury your queers.”
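
Amazon doesn't say how its automated check works, but the behavior is exactly what you'd expect from a simple phrase blocklist. Here's a minimal sketch in Python of what such a filter might look like; the list contents and function name are entirely my own invention:

    # A hypothetical, naive keyword-based content filter. Amazon has not
    # published how its check actually works; this only illustrates how
    # plain substring matching flags text without regard to intent.

    FLAGGED_PHRASES = [
        "bury your queers",  # the trope name itself trips the filter
        "kill all",
        "death to",
    ]

    def is_flagged(description: str) -> bool:
        """Return True if any blocklisted phrase appears in the text."""
        text = description.lower()
        return any(phrase in text for phrase in FLAGGED_PHRASES)

    # A skill that documents the trope is flagged exactly like one that
    # promotes it, because the filter only sees the words:
    print(is_flagged("Bury Your Queers tracks when the last queer "
                     "female character died on TV."))  # True
    print(is_flagged("A trivia skill about classic sitcoms."))  # False

If that guess is right, no amount of rewording around the skill's name would ever get it past the check.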

Documentation

I’ve had a lot of code reviews in my life. I’ve done a lot. There are really two aspects of a review that help you survive it:

  1. A robust, if generic, explanation, with specifics outlined.
  2. Clear documentation to help resolve the specifics.

Amazon doesn’t give either of those.

There were no specifics that said, outright, “The problem is your content looks like you’re promoting the death of a specific type of people.” No matter how I tried to rewrite it, it was rejected.

Finally I realized I was going to have to write it all over again, from the ground up, and explain it all better.

People over Process

I get grief from plugin developers for being reluctant to specify what is bad and what is good. This probably stems from my upbringing, where I was taught about right and wrong as relative concepts. Is it alright to kill? No. But if your car’s brakes go out and the choice is between killing a child and killing yourself, you have to be able to weigh the situation at hand.

Computers can’t do this. There is no artificial intelligence that can weigh the soul of a person and know if they’re going to do evil. If there were, the face of air travel and politics would be wildly different.

And the point I make here is that we cannot simply say “this description hits all our buzzwords and triggers for badness, therefore it is clearly a bad thing.” It’s just not the case. By restricting us from being able to speak about such horrible things, we give them room to grow in the darkness.

But that is neither here nor there.

A More Positive Message

As Tracy put it, it was a damn good lesson in perspective.

She didn’t say damn.

While the “bury your gays” trope is well known to me, it is not universal. And while I feel queer is a perfectly acceptable term that has been in use for a very long time, others do not. People inside and outside our community are often unaware that one of the first pro-gay posters said “queer power” and not “gay power.” It’s just one of those things.

So the right path is to go forward in a positive way. I’ve rewritten the code to give you more details than just who died. It will now tell you what was posted, how many characters were added, who’s on the air and who’s not.

And yes, who died. Because that matters too.
