
Do Robots Dream of Electric Smut?

“I’ll know it when I see it” works for humans, but not robots.

In July of 2018, Google AdSense informed me that specific content on my site was going to have “restricted ad serving” and that I needed to go to the policy centre in AdSense to find out why. There was no link to this centre, by the way, and it took me a while to figure out that I had to go to AdSense > Settings > Policy, where I saw this:

The screen telling me I have adult content on a URL.

Yes, that image says that the post about Legitimate Porn Plugins was deemed to be sexual content. My guess is that they don’t like the image, because my post about how GPL says Porn is Okay did not get flagged.

My friend pointed out that it was ridiculously damaging to moderate content (or, at least in this case, revenue) by “casting a wide net based solely on the presence of key words,” and she’s quite right. Now I did attempt to get Google to reconsider, but as with my past experiences with their censorship and draconian views, they don’t give a damn if you aren’t ‘big.’ And even then, important people get slapped by Google all the time.
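To see why a “wide net” of keywords is so damaging, here is a minimal sketch of that kind of filter. The word list and function are entirely my own illustration, not Google’s actual system; the point is that a post *about* porn plugins trips the same wire as porn:

```python
# A minimal sketch of keyword-based flagging: the "wide net" that catches
# a post discussing porn plugins just as readily as actual porn.
RESTRICTED_WORDS = {"porn", "xxx", "nude"}  # illustrative list, not Google's

def is_flagged(text: str) -> bool:
    # Flag if any word, stripped of trailing punctuation, is restricted.
    words = text.lower().split()
    return any(word.strip(".,!?") in RESTRICTED_WORDS for word in words)

print(is_flagged("Legitimate Porn Plugins"))  # True
print(is_flagged("GPL says anything goes"))   # False
```

The filter has no notion of context, which is exactly the problem: “I’ll know it when I see it” requires seeing, and this code never looks.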

History? What History?

In 1964, there was a landmark case in the US, Jacobellis v. Ohio, about whether the state of Ohio could, consistent with the First Amendment, ban the showing of the Louis Malle film The Lovers (Les Amants), which the state had deemed obscene. The reason the case became so well known, though, was not its subject matter.

In fact, the law remained quite fragmented until the 1973 Miller v. California decision, which dropped the older standard that obscene material had to be utterly without redeeming social importance in favour of a check for “serious literary, artistic, political, or scientific value” – the SLAPS test, and yes, the acronym is hilarious.

No, everyone knows about the first case because of the following quote by Justice Potter Stewart:

I shall not today attempt further to define the kinds of material I understand to be embraced within that shorthand description; and perhaps I could never succeed in intelligibly doing so. But I know it when I see it, and the motion picture involved in this case is not that.


Tea, Earl Grey, Hot

When I was very young, maybe six, my father did a talk about artificial intelligence with a slide of Captain Kirk ordering things from the ship’s computer. It stuck with me, which Dad finds amusing, and I’ve often reflected back on it as an understanding of what an AI can and cannot do.

The ship’s computer on Star Trek can do a great many things, but it cannot make ‘decisions’ for a person. In the end, a human always has to decide what to do with the variables, what they mean, and how they should be used. Kirk has to ask the computer to chill the wine, for example, and if he doesn’t specify a temperature, the computer will go back to what some other human (or more likely Mr. Spock) has determined is the optimal temperature.

AIs don’t exist. As useful as I find digital assistants like Siri and Alexa, I know they aren’t intelligent and they cannot make decisions. They can follow complex if/then/else patterns, but they lack the ability to innovate. What happens if Kirk just asks for ‘white wine, chilled’? What vintage will he receive? At what temperature?

To a degree, this is addressed with how Captain Picard orders his tea. “Tea, Earl Grey, hot.” But someone had to teach the system what ‘hot’ meant and what it meant to Jean-Luc and not Riker, who probably never drank any tea. Still, Picard has learned to specify that he wants Earl Grey tea, and he wants it hot. There’s probably some poor tech boffin in the belly of Starfleet who had to enter the optimum temperatures for each type of tea. Certainly my electric kettle has a button for ‘black tea’ but it also tells me that’s 174 degrees Fahrenheit.
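Both orders boil down to the same thing: a lookup table that some human filled in by hand. Here is a sketch of that replicator logic; every name and number is my own illustration (except the 174°F my kettle claims for black tea):

```python
# Someone in the belly of Starfleet entered these numbers by hand;
# the replicator just looks them up. Temperatures are illustrative,
# except black tea's 174F, which my kettle insists on.
BREW_TEMPS_F = {"earl grey": 208, "black tea": 174, "green tea": 160}

def replicate(request: str) -> str:
    # Parse an order like "Tea, Earl Grey, hot" into item, variety, modifier.
    item, variety, modifier = [part.strip().lower() for part in request.split(",")]
    temp = BREW_TEMPS_F.get(variety)
    return f"{variety} ({modifier}, brewed at {temp}F)"

print(replicate("Tea, Earl Grey, hot"))  # earl grey (hot, brewed at 208F)
```

The “intelligence” here is entirely in the table, and the table came from a person. Ask for a tea the boffin never entered and the system has nothing to fall back on.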


Automation Limitations

My end result with Google was that I had to set that specific page to never show ads. Ever. Because Google refused to get a human to take a look and go, “Oh, it’s the image; remove that and you’re fine.” A human could have looked at the image, recognized it’s not pornography, and flagged the page as clean.

What we have is a limitation in the system, wherein there is no human checking, which results in me getting annoyed and Google being a monolithic annoyance. Basically, Google has automated the system to their specifications, and instead of putting humans on the front lines to validate, they let it go.

This makes sense from a business perspective, if you’re as big as Google at least. It costs less. But we’ve all read stories about people getting locked out of their Google accounts, for a month or more, and facing drama because there’s no way to get in touch with a human being.


The Heart of It All is Humans

And that’s really the heart of the problem.

Have you ever visited a forum or a chat site where everyone treats each other decently? Humans did that. A human sat down and purged the site of vile content, and had to sit and read it all to be sure. They pushed back on trolls and other problematic people, all to help you.

Don’t believe me? Okay, do you remember the WordPress plugin WangGuard by José Conti? He shut the service down in 2017 because it was giving him a mental breakdown. The plugin worked so well because he, a human being, evaluated content.

WangGuard worked in two different ways: the first was an algorithm that had been perfected over 7 years, and that kept improving as the sploggers evolved, so it was always ahead of them. The second part was human: I reviewed many things, among them sploggers’ websites, to see their content, improve the algorithm, and make sure it worked correctly both when a site was blocked and when it was not. The great secret of WangGuard was this second part; without it, WangGuard would not have become what it was.

José Conti – The True Reason for the Closure of WangGuard
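What Conti describes is a human-in-the-loop system: an automated classifier, plus a person whose reviews feed corrections back into it. A rough sketch of that shape, with all names, scores, and thresholds being my own illustration (WangGuard’s actual internals were never published):

```python
# Sketch of the two-part design Conti describes: an automated classifier,
# plus a human reviewer whose verdicts correct the classifier over time.
suspect_scores = {"spam-casino.example": 0.92, "knitting-blog.example": 0.15}
blocklist_words = {"casino", "pills"}

def classify(domain: str) -> bool:
    """Part one: the algorithm. Block on a high score or a known pattern."""
    return suspect_scores.get(domain, 0.0) > 0.8 or any(
        word in domain for word in blocklist_words
    )

def human_review(domain: str, verdict: bool, actually_splog: bool) -> None:
    """Part two: a person looks at the site and corrects the algorithm."""
    if verdict != actually_splog:
        # The feedback loop: fix the score so the algorithm stays ahead.
        suspect_scores[domain] = 1.0 if actually_splog else 0.0
```

The second function is the part that cost Conti his health: `actually_splog` can only be filled in by a person who looked at the site.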

Basically, Conti gave himself PTSD trying to make the internet a better place.

Because the only way to be sure something was evil was to look at it. And the only way to be sure something is porn is to look at it.

An AI can’t do that yet.

3 replies on “Do Robots Dream of Electric Smut?”

174F is terribly low for black tea.
I remember Liam Neeson on “The Daily Show” (a number of years back) complaining about his tea not being hot enough – he wanted just-boiled water. He’d probably have accepted as low as 205F 🙂

PS Earl Grey is horrible. What on earth is bergamot? An abomination! (My morning brew is Barry’s Classic Blend leaves from Cork, Ireland).

Artificial intelligence is exactly that: artificial. And so much of what passes for AI these days is just machine learning. The AI of yore, expert systems, is making a comeback.
I wrote one of the first expert systems for nuclear power in 1986, RiTSE, a rule-based system for finding causes of spurious reactor trip. Without the experts, who made the “rules” for reactor trip, nothing would have happened. I just wrote the software.
