Half-Elf on Tech

Thoughts From a Professional Lesbian

Author: Ipstenu (Mika Epstein)

  • Post Editing Is Broken

    Post Editing Is Broken

    By now I’m sure most of you have seen Gutenberg. And I’m sure you all have a lot of opinions about Gutenberg and why it’s absolutely not needed. You may also have read conversations about how we totally need Gutenberg, and how it’s part of the long view of the future.

    I’m going to tell you something that may be difficult to accept.

    We need Gutenberg because post editing is broken.

    The Visual Editor is Limited

    The current visual editor, which uses TinyMCE, is incredibly limited. It’s awesome, as you can make a WYSIMWYG (What you see is mostly what you get) post, but it can be really hard to get layout and design flow to look ‘right.’ And if you want to insert custom content, you’re left using embeds or shortcodes.

    I love shortcodes. But. They’re weird and complicated and no two work exactly the same way. People don’t always document them, they’re not discoverable, and they can be incredibly obscure to use: which ones take input, which ones nest, and so on.
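    To see what I mean, compare WordPress’s built-in gallery shortcode with the kind of spoiler shortcode a plugin might invent (the spoiler one is hypothetical, purely for illustration):

        [gallery ids="1,2,3" columns="2"]

        [spoiler title="Click to reveal"]The hidden text goes here.[/spoiler]

    One is self-closing and takes comma-separated IDs; the other wraps content and takes a title. There’s no way to guess that without reading the docs.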

    This means that advanced customization of post content is left to templating engines in those page-builder plugins, which either have to re-jigger the whole screen (like Gutenberg) or utilize a complex nesting of shortcodes (like that other plugin you’re thinking about). Neither is a great experience for users, especially when no two page builders work the same way.

    The HTML Editor is Cryptic

    If you’re not a developer or someone who read the original HTML 2.0 spec book (hardcover, y’all), then HTML may be a beast you don’t understand. It’s complicated, it has a lot of weird quirks, and you’ll hear people tell you to use tables (or not), or use divs (or not), or only troglodytes use spans and colors.

    Basically it’s confusing unless you know HTML, and that means if you’re not an advanced user or a designer/developer, you’re screwed. You’re expected to learn a whole new suite of complex arcana just to make a table with today’s WordPress. Or you use a plugin and then you find out the semantic HTML it used was problematic, and you have no idea how to fix it.
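    For reference, here’s roughly the markup you’d have to hand-write in the text editor just for a small two-column table (the plugin names and versions are made up for illustration):

        <table>
            <thead>
                <tr><th>Plugin</th><th>Version</th></tr>
            </thead>
            <tbody>
                <tr><td>Example Plugin</td><td>1.2</td></tr>
                <tr><td>Another Plugin</td><td>3.0</td></tr>
            </tbody>
        </table>

    That’s ten lines, four different tag names, and one typo away from a broken layout.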

    Anyone who supports end-users who know MS Word and not WordPress has dealt with this drama. It’s real, and WordPress is still struggling to address it. Which is why we have Gutenberg in the pipeline.

    Gutenberg Isn’t Perfect

    None of this is to say that Gutenberg is perfect. I’ve had first-hand experience with exactly how hard it is to wrangle. Building new blocks is crazy hard if you’re not using simple reusable blocks, like my favourite spoiler block.

    If you want to make a complicated nested block it’s frustrating. You have to decide what flavour of JavaScript you want to use and how to build it. Let’s be honest here, folks, it’s tough to be a developer in this new land.

    And as a user it’s no picnic either. It’s a lot of change and kicking yourself out of old habits and into embracing the new. Which we’re all generally terrible at. You have to shift from a fundamental concept of “Big Chunk of Content” and into “Smaller Blocks of Content.” Meta boxes and the data we add with ACF and CMB2 aren’t perfect yet either. Heck, I can’t even customize my Jetpack sharing with Gutenberg yet.

    But. As we use Gutenberg and as we inch forward, we start to see the progress. I can still insert tables via HTML inside a Gutenberg post. I can build (or hire someone to have built) a block to tweak things to my heart’s content. Things may be hard, but they’re possible.

    You Must Break a Bone to Set It

    When I was 11, I broke my arm. And I remember the feeling of abject horror when the doctor told me they’d have to break my arm again in order to set it. I used some language they’d never heard from a child my age. And it hurt like hell. It was the most pain I’d felt in my young life.

    My arm never worked the ‘same’ way afterwards either. Oh sure, I could do pretty much everything, but I had to compensate and learn new ways to do other things. I still don’t have full rotation in my wrist, though it’s much better, which means I had to change how I do certain motions. When I type, for example, that hand rarely rests on the keyboard. In short, I had to adapt.

    The current editor is imperfect and broken. In order to fix it, we must shatter it and move forward. It hurts, it’s a struggle, but if we push each other, we can do this. Continue to criticize the things that are missing (not being able to hide taxonomies from use, for example), but do so in a way that helps it move forward.

  • More Complicated Composition

    More Complicated Composition

    Once you’ve made your basic composer.json file and you’ve got your library included, it’s time to consider a more complicated situation.

    Let’s take the following example. You want to include two plugins/add-ons and a couple of JavaScript libraries, oh, and one of those plugins doesn’t use Composer. Don’t worry, we can do this.

    Organize Your Notes

    Step one is to make a list of what you need to install and where it needs to go. Because in this situation, due to the JavaScript, we’re not going to be using that autoloader.

    1. List all your repositories
    2. Mark which ones do and don’t use Composer
    3. Determine where they need to go

    For this example, we want the JavaScript files to go in the assets/js/ folder in our plugin, and we want the plugins/add-ons to go in plugins/.

    Okay, that’s straightforward. But we need to address something.

    Avoid AutoTune

    I mentioned we’re not using the autoloader and I cited the JavaScript as why. That was a partial lie. You see, even though Composer makes an autoload file, and even though it is a dependency manager, it actually tells you not to commit your dependencies to your own repository.

    The general recommendation is no. The vendor directory (or wherever your dependencies are installed) should be added to .gitignore/svn:ignore/etc.

    Official Composer Documentation

    The biggest reason why is one you absolutely will run into here: when you add dependencies installed via git to another git repo, they end up as submodules, which are a special hell of their own. Only it’s worse. They’re not even really submodules, and you end up with empty folders.

    Seriously, I wasted two hours on this when I first ran into it.

    But that said, it’s really wise to omit your vendor folder because your plugin does not need 3 megs of files if all you want is one JavaScript file. Right? This means we need to add one more dependency to Composer, and that’s Composer Copy File.

    Adding the Normal Dependencies

    Right now, our require section looks like this:

        "require": {
            "slowprog/composer-copy-file": "^0.2.1"
        }

    So when you add in your normal dependencies, like the PHP library, you get this:

        "require": {
            "slowprog/composer-copy-file": "^0.2.1",
            "example/php-library1": "^1.3"
        }

    You’ll do this for the JavaScript libraries as well:

        "require": {
            "slowprog/composer-copy-file": "^0.2.1",
            "example/php-library1": "^1.3",
            "example/js-library1": "^0.2.1",
            "example/js-library2": "^2.30"
        }

    Adding the Weird Stuff

    Okay, but I mentioned one of the PHP libraries I wanted to use didn’t have a composer.json file, which means I can’t include it like that. Instead, I have to add this section above the require section, in order to create a new package:

        "repositories": [
            {
              "type": "package",
              "package": {
                "name": "example2/php-library2",
                "version": "1.0.0",
                "source": {
                  "url": "https://github.com/example2/php-library2",
                  "type": "git",
                  "reference": "master"
                }
              }
            }
        ],

    Right? That’s all kinds of weird, but basically I’m telling Composer that there’s a package called example2/php-library2 and it gets its data from https://github.com/example2/php-library2 – which means I can then add it to my require section like this:

        "require": {
            "slowprog/composer-copy-file": "^0.2.1",
            "example/php-library1": "^1.3",
            "example/js-library1": "^0.2.1",
            "example/js-library2": "^2.30",
            "example2/php-library2": "^1.0"
        }
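
    Putting the two pieces together, the relevant parts of the composer.json now look like this (note the inline package is required at its declared 1.0.0 version, and there’s no trailing comma after the last entry, since JSON forbids them):

        {
            "repositories": [
                {
                    "type": "package",
                    "package": {
                        "name": "example2/php-library2",
                        "version": "1.0.0",
                        "source": {
                            "url": "https://github.com/example2/php-library2",
                            "type": "git",
                            "reference": "master"
                        }
                    }
                }
            ],
            "require": {
                "slowprog/composer-copy-file": "^0.2.1",
                "example/php-library1": "^1.3",
                "example/js-library1": "^0.2.1",
                "example/js-library2": "^2.30",
                "example2/php-library2": "^1.0"
            }
        }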

    Copy Everything

    Once you have your files in and required, everything gets put in vendor/, which is great, except for the part where we’re not going to use the autoloader. In fact, we’re going to add /vendor/ to our .gitignore file to make sure we keep our plugin small.

    Instead, we’re going to use that Composer Copy File script we mentioned above, like this:

        "scripts": {
            "post-install-cmd": [
                "SlowProg\\CopyFile\\ScriptHandler::copy"
            ],
            "post-update-cmd": [
                "SlowProg\\CopyFile\\ScriptHandler::copy"
            ]
        },
        "extra" : {
            "copy-file": {
                "vendor/example/php-library1/": "plugins/php-library1/",
                "vendor/example2/php-library2/": "plugins/php-library2/",
                "vendor/example/js-library1/dist/js/file.min.js":  "assets/js/file.min.js",
                "vendor/example/js-library2/dist/js/file2.min.js": "assets/js/file2.min.js"
            }
        }

    The first portion triggers a copy every time you install or update via Composer, and the second section (in extra) defines what gets copied where.

    Now you include the libraries in your code as if you’d downloaded and copied them yourself, only you don’t have to worry so much about keeping them up to date. As long as you’ve got your Composer version constraints set, you’re good to go.

  • Small Steps with Composer

    Small Steps with Composer

    Like a great many people before me, I use Composer to manage packages. In my case, I have a WordPress plugin that contains some fairly significant packages that other people have written. They’re libraries, and I have a bit of a love/hate relationship with them. Mostly, I hate keeping them up to date, which is where Composer comes in for me.

    Composer?

    Composer bills itself as a package manager for PHP. This means it will download all the code you need for your plugins, toss it in a folder (usually called vendor) and let you get busy with the coding, without worrying about whether your PHP library is out of date.

    It’s very similar to Bower, which I’ve been using for a while now, and Grunt, which I sometimes use with Bower. However, unlike those, Composer hooks into Packagist, which allows you to include pretty much any library with a composer.json file in your builds. And this is where it gets hairy.

    Conceptualizing Composer

    The basics are these: you need a composer.json file, and in that file you tell Composer what to do. Yep, that’s it. The complications come in with understanding exactly what it is you’re trying to do. So let’s start small.

    Today, you want to use Composer to update a library, say the AWS SDK for PHP, so you can include it in your plugin. That’s it. Like I said, small. We’re going to assume you’ve written everything else, and you went to the AWS SDK library on Github to get the files.

    The old way would be to download the zip, unzip it, and include it in your PHP code. The new way is to make a Composer file.

    Constructing Composer

    Windows users, you need to download the setup file from getcomposer.org. Mac/Linux users, I recommend you use the global install method, or if you’re lazy like me, brew install composer will get you done.

    Now that you have it installed, you need to make your file. There are a lot of options and possible calls to put in the file, but all you need is the ‘require’ section, which literally tells Composer what your project requires. I recommend a basic file that lists what it’s for, who wrote it, and what it needs.

    Example:

    {
        "name": "example/my-project-name",
        "description": "This is a very cool package that uses AWS",
        "version": "1.0.0",
        "type": "wordpress-plugin",
        "keywords": ["wordpress", "plugin", "self-hosted"],
        "homepage": "https://example.com",
        "license": "MIT",
        "authors": [
            {
                "name": "Test Example",
                "email": "test@example.com"
            }
        ],
        "require": {
            "aws/aws-sdk-php": "3.*"
        }
    }

    The secret sauce is that teeny require section at the end, where I say what I want to require and what version. That’s how composer update knows what I need.

    You can also make the file without that require section and then tell Composer to include it via the command line: composer require aws/aws-sdk-php. That will write the line for you.
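    Assuming Composer is installed and on your PATH, the whole flow from the command line looks like this (the version constraint it writes will reflect whatever the latest release is at the time):

        # Write the dependency into composer.json and install it into vendor/
        composer require aws/aws-sdk-php

        # On a fresh checkout, install everything composer.json lists
        composer install

        # Later, pull in newer versions allowed by your constraints
        composer update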

    Calling Composer

    So once you have that and install it and run Composer, how do you get it in WordPress? By default, Composer makes an autoloader file called autoload.php, and that will require everything you need. That means all you have to do is require that file in your plugin, and you’re done.
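    In a WordPress plugin, that looks something like this sketch (the plugin header and the S3Client bits are illustrative, not from this post; the require line is the only essential part):

        <?php
        /**
         * Plugin Name: My AWS Example
         */

        // Load everything Composer installed into vendor/.
        require __DIR__ . '/vendor/autoload.php';

        // Any library from composer.json is now available, e.g. the AWS SDK:
        $s3 = new Aws\S3\S3Client( [
            'version' => 'latest',
            'region'  => 'us-east-1',
        ] );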

    What? You wanted more?

    Conclusion

    Getting started with Composer isn’t harder than writing a readme, even if it’s formatted pretty weirdly. It can make including large libraries a snap. But don’t worry, you can get really complicated if you want to.

  • Disable Google Ads on One Page

    Disable Google Ads on One Page

    After my adventures with Google telling me I was hosting adult content (again, this is actually my third go-round with them), I’m here to inform you that you can now block Google Ads on one page only.

    Add a URL Channel

    First you have to tell Google which URLs you want to treat differently. For that, we’ll use a URL channel, which you can find at My Ads > Content > URL Channels. Now, you only get 500 of these, which means you can only flag 500 unique URLs as … well … unique.

    Add a new URL channel - It gives you some examples, but basically forget using wildcards.

    You may notice, no wildcards. So I can add halfelf.org/2012/legitimate-porn-plugins/ but not halfelf.org/.*/.*[porn].*/ which would be pretty cool. Once you’ve added your URL, you’ll see it like this:

    URL channel in place, it's basically a list of all the URLs you treat special. Nothing fancy.

    Edit Ad Settings

    I’m using Auto Ads because I’m incurably lazy, as my friend Syed knows. So I go to My Ads > Auto Ads, but if you were using specific units, you’d go to My Ads > Ad Units. There you go to the Advanced URL Settings section and click on the add button for a new URL Group.

    Advanced settings, there's a button for "New URL Group" and nothing else explanatory.

    This brings you to a page where you can select the URLs for this group. 

    A massive list of all URLs. This could get messy, Google. Thanks.

    When you’ve picked all your URLs (and yes you can add more later), click next and you’ll get a list of all the possible ad units. Uncheck them all. That’s the point of this, right? Finally you’ll review the group and give it a name. I picked “No Ads” since that’s what this was.

    Review the group, come up with a name, make sure the URLs are correct.

    Annoyances

    1. You have to add each URL one at a time
    2. There are no wildcards or regex
    3. You only get 500 URLs
    4. You still can’t talk to a human

    All in all, it’s another day with Google.

  • Do Robots Dream of Electric Smut?

    Do Robots Dream of Electric Smut?

    In July of 2018, I was informed by Google AdSense that specific content on my site was going to have “restricted ad serving” and I needed to go to the policy centre on AdSense to find out why. There was no link to this centre, by the way, and it took me a while to figure out I had to go to AdSense > Settings > Policy, where I saw this:

    The screen telling me I have adult content on a URL.

    Yes, that image says that the post about Legitimate Porn Plugins was deemed to be sexual content. My guess is that they don’t like the image, because my post about how GPL says Porn is Okay did not get flagged.

    My friend pointed out that it was ridiculously damaging to moderate content (or at least in this case, revenue) by “casting a wide net based solely on the presence of key words” and she’s quite right. Now I did attempt to get Google to reconsider, but like my past experiences with their censorship and draconian view, they don’t give a damn if you aren’t ‘big.’ And even then, important people get slapped by Google all the time.

    History? What History?

    In 1964, there was a landmark case in the US, Jacobellis v. Ohio, about whether the state of Ohio could, consistent with the First Amendment, ban the showing of the Louis Malle film The Lovers (Les Amants), which the state had deemed obscene. The reason the case became so well known, though, was not its subject matter.

    In fact, the decision remained quite fragmented until the 1973 Miller v. California decision, which declared that to be smut (i.e. obscene), material must be utterly without redeeming social importance. The SLAPS test addresses this with a check for “serious literary, artistic, political, or scientific value” – and yes, the acronym is hilarious.

    No, everyone knows about the first case because of the following quote by Justice Potter Stewart:

    I shall not today attempt further to define the kinds of material I understand to be embraced within that shorthand description; and perhaps I could never succeed in intelligibly doing so. But I know it when I see it, and the motion picture involved in this case is not that.

    Tea, Earl Grey, Hot

    When I was very young, maybe six, my father did a talk about artificial intelligence with a slide of Captain Kirk ordering things from the ship’s computer. It stuck with me, which Dad finds amusing, and I’ve often reflected back on it as an understanding of what an AI can and cannot do.

    The ship’s computer on Star Trek can do a great many things, but it cannot make ‘decisions’ for a person. In the end, a human always has to decide what to do with the variables, what they mean, and how they should be used. Kirk has to ask the computer to chill the wine, for example, and if he doesn’t specify a temperature, the computer will go back to what some other human (or more likely Mr. Spock) has determined is the optimal temperature.

    AIs don’t exist. Even as useful as I find digital assistants like Siri and Alexa, I know they aren’t intelligent and they cannot make decisions. They can follow complex if/then/else patterns, but they lack the ability to innovate. What happens if Kirk just asks for ‘white wine, chilled’? What vintage will he receive? At what temperature?

    To a degree, this is addressed with how Captain Picard orders his tea. “Tea, Earl Grey, hot.” But someone had to teach the system what ‘hot’ meant and what it meant to Jean-Luc and not Riker, who probably never drank any tea. Still, Picard has learned to specify that he wants Earl Grey tea, and he wants it hot. There’s probably some poor tech boffin in the belly of Starfleet who had to enter the optimum temperatures for each type of tea. Certainly my electric kettle has a button for ‘black tea’ but it also tells me that’s 174 degrees Fahrenheit.

    Automation Limitations

    My end result with Google was that I had to set up that specific page to not show ads. Ever. Because Google refused to get a human to take a look and go “Oh, it’s the image, remove that and you’re fine.” A human could have looked at the image, recognized it’s not pornography, and flagged it as clean.

    What we have is a limitation in the system, wherein there is no human checking, which results in me getting annoyed, and Google being a monolithic annoyance. Basically, Google has automated the system to their specifications, and then, instead of putting humans on the front lines to validate, they let it go.

    This makes sense from a business perspective, if you’re as big as Google at least. It costs less. But we’ve all read stories about people getting locked out of their Google accounts, for a month or more, and facing drama because there’s no way to get in touch with a human being.

    The Heart of It All is Humans

    And that’s really the heart of the problem.

    Have you ever visited a forum or a chat site and it’s full of people acting like decent people to each other? Humans did that. A human sat down, purged the site of the vile content, and had to sit and read it to be sure. They pushed back on trolls and other problematic people, all to help you.

    Don’t believe me? Okay, do you remember the WordPress plugin WangGuard by José Conti? He shut the service down in 2017 because it was giving him a mental breakdown. The plugin worked so well because he, a human being, evaluated content.

    WangGuard worked in two different ways: one, an algorithm that had been perfected over 7 years and kept improving as the sploggers evolved, so it was always ahead of them. And a second part that was human, in which I reviewed many things, among them sploggers’ websites, to see their content, improve the algorithm, and make sure that it worked correctly both when a site was blocked and when it wasn’t. The great secret of WangGuard was this second part; without it, WangGuard would not have become what it was.

    José Conti – The True Reason for the Closure of WangGuard

    Basically, Conti gave himself PTSD trying to make the internet a better place.

    Because the only way to make sure something is evil is to look at it. And the only way to make sure something is porn is to look at it.

    An AI can’t do that yet.

  • cPanel ❤️ DreamObjects

    cPanel ❤️ DreamObjects

    This is something I’ve wanted for a long time. I opened a ticket with cPanel about it yonks ago, as S3-compatible (Ceph) storage is offered by more than just Amazon, and yet cPanel was making it super hard to use for backups.

    Well in the next release of cPanel, this will no longer be the case! If you’re on version 74 (which is in release stage, but not current, so most people do not have it yet) you can do this.

    Add A New Backup Option

    Go to Home > Backups and open up the settings.

    In there, you can add a new Backup option. Pick S3 Compatible:

    Backing up from cPanel to DreamObjects will soon be a reality.

    Configure for DreamObjects

    Now just throw in the right data:

    You’ll want to use objects-us-east1.dream.io for the endpoint, and then your bucket and keys.

    Back it Up

    And with that you’re done. Thank you, cPanel!