I love fonts. They’re a great way to make your site unique. Back in the early 2010s, we were all using them all over the place.
It’s time to retire them.

When I started my little experiment with showing icons on custom taxonomies, I began with a text field. This was because it was the easiest to edit and change right away. Next, I changed it to a select dropdown, based on the contents of the folder where I’d stored my images.
And for the most part, that’s pretty cool. Except for the part where drop-down select boxes can’t show images. This meant I had to show the image name. Generally that’s okay. I knew the name of the image I wanted to use. The problem is that I only knew that because I knew which images I’d uploaded. For someone else, that’s 800 images or so they have to guess at, based on names.
After playing around with selections and forms for a while, I determined there really isn’t a great way to show a list of images natively. No matter what, I was going to have to mess with this the un-fun way, because the easy way, CSS, failed. You can use background-image on the options in Firefox, but not in Chrome or anything else people actually use.
Good news! As of jQuery UI 1.11, the selectmenu widget exists. Bad news! It’s not actually simple to implement. In fact, I absolutely failed at implementation. Let’s talk about why I failed, because it’s not just a matter of how convoluted the code is.
A large part of my failure in this is due to the scale. If you only have one or two images, it’s less stupid. But when you get around to sets of 800 images, you start to understand why WordPress has a media uploader the way it does.
The biggest factor, though, is that searching ‘visually’ is a difficult system to maintain. Symbology, the art of getting one ‘thing’ to represent another ‘thing’, is incredibly hard to pin down. Look at your WordPress dashboard. Why does a pushpin represent posts when, functionally, they’re not all that different from the ‘two sheet’ icon of pages? Why is Users a single-person icon and not multiple people?
Ask yourself this: if you’re adding a new menu item, what do you make your icon? Do you make it your brand logo (a G in a circle for StudioPress’ Genesis theme)? Maybe you make it a cloud with a down arrow to indicate downloads (as Easy Digital Downloads does). We wanted to add new Custom Post Types for ‘Characters’ and ‘Shows’ on my database site, and Tracy, who is amazing at this sort of thing, cheekily picked a video camera for ‘shows’ and a name tag for ‘characters.’
I had a related but slightly different issue when I wanted to use Mark Jaquith’s meta I Make Plugins plugin. I wanted to see my plugins on the sidebar with a more unique icon. I couldn’t use the most logical one, a plug, since we’re already using that for Plugins. I went with a little ‘awards’ icon.
In both cases, we studied the list of Dashicons and picked the ones that made the most sense, but it required a visual study of what we really needed and an understanding of how they all worked. Scrolling down a massive list of all of those is annoying, but necessary. You need to look at your options, think about them, and feel what they mean contextually.
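For what it’s worth, once you’ve settled on a Dashicon, actually wiring it up is the easy part. Here’s a minimal sketch; the slugs and arguments are illustrative, not the actual code from the site:
<?php
// Minimal sketch: register two custom post types with Dashicons as their menu icons.
// The slugs and labels here are illustrative examples.
add_action( 'init', function () {
	register_post_type( 'show', array(
		'label'     => 'Shows',
		'public'    => true,
		'menu_icon' => 'dashicons-video-alt2', // a video-camera style Dashicon
	) );
	register_post_type( 'character', array(
		'label'     => 'Characters',
		'public'    => true,
		'menu_icon' => 'dashicons-nametag',    // the name tag
	) );
} );
The hard part, as ever, is that a human still has to scroll the Dashicons list and decide that a video camera ‘means’ shows.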
I haven’t yet figured out a perfect solution, but I’ll let you know when I do.

For some reason I name my servers after Godzilla monsters. I currently have two: Ogra and Gamera. Gamera is a new DreamCompute box I’m using to test things on, but Ogra is this server.
There used to be a server called Gamera before this Gamera. The original Gamera was my very, very first VPS. At the time, I had four websites, all with separate hosting plans, and I mathed out that it would be cheaper to combine them onto one VPS and learn to run it myself.
Boy, what a jump that was! It was my first experience with a VPS at all, and in retrospect I think it gave me a whole new comprehension of the web. Before, I was just a user of the internet. After, I understood why so many sites behaved differently. Gamera was actually how I got to know Mike Schroder! We’d added ImageMagick support to WordPress and he was an aficionado. I was trying to test some things and he pointed out I didn’t actually have it installed. Off to the races!
As Gamera got long in the tooth, I needed to upgrade some software and realized that doing so wasn’t going to be possible anymore without an OS upgrade. Instead of that, however, I looked into a whole new server on a new system: the cloud.
This really just meant I had a more dynamically upgradable server. It was easy to add more memory or disk space, I could spin up new clusters as needed (though I haven’t needed to yet), and it was bigger and faster and better. That new server was Mothra.
I learned a lot on Mothra. Everything from memcached and nginx to Varnish and SSL was done there. I’ll miss it.
Of course time went by and I installed all sorts of shit on my server. And, eventually, I wanted to upgrade to the newest operating system and test it without downtime. This can be done, but I decided to build a new server on the latest and greatest OS and migrate my sites. Building out the server was easy, as I have a document called “How I Installed Shit On My Server” and I was able to use that to rebuild everything I needed. Some things had changed and got updated, but in general it was pretty much the same thing.
Since everything was on cPanel and WHM, transferring sites was incredibly painless. Where, five years ago, I had to ask for help, this time I was able to press the pretty buttons and do it myself. The most terrifying part was the DNS. What I did was move accounts over one at a time, starting with the smallest site and building up. Then I’d change my /etc/hosts file to point to the new IP and verify everything worked. As soon as the site was good, I changed it to the new nameservers (ns3 and ns4) and moved on to the next one.
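If you’ve never done the /etc/hosts trick, it’s nothing fancier than adding a line like this on your own computer (the IP and domain here are placeholders, not my real ones) so that only your machine resolves the domain to the new server while the rest of the world still sees the old one:
# /etc/hosts on my own machine, for testing a migrated site before the DNS change
203.0.113.10    example.com www.example.com
Once the site checks out, you remove the line and flip the real DNS.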
This was only a problem when I got to my aunt’s website, which has its DNS over on Microsoft. A few emails later, I logged in as her and changed the setting (and saved her account information for the next time). Sadly, Microsoft doesn’t let you point a domain at a CNAME; you have to use an A record, which means a hard-coded IP, and migrating this way meant updating that record to the new server’s IP.
Now I’m waiting on EasyApache4 and full support for Let’s Encrypt in cPanel.

In a nutshell, the paradox is this:
The more likely a person is to test Beta and RC, the less likely they are to have bad code.
When people wonder why problems like the recent jQuery flub manage to make it all the way into the wild, they tend to assume the issue is not testing enough. They’re incorrect. The issue is not testing enough.
Put the lead pipe down.
What do you think ‘enough’ is? I promise you, the majority of people who have a broken site will say “It needed more time being tested.” And in return, the majority of developers will say “It needed more people with more diverse setups testing.”
So which one is right? They both are. We need more of the right people testing. We need people who are on the edges of ‘normal’ for WordPress. The problem is that the ‘right’ people are the people who have the worst code.
Take a look at WordPress 4.5. Not a single beta tester was using a plugin or a theme that used jQuery improperly. Look at Jetpack 4.0. Not a single beta tester had a server config that broke, but within 12 hours they had over 300 tickets about an error 500 on many sites. In both cases, a significant amount of testing was performed. People banged on the code and outright tried to break it. Just not the right people. And the worst thing is we cannot know who is the right person until after something breaks.
This leads to the other part of the paradox.
Someone whose site breaks after an upgrade is less likely to want to beta test.
This is because we’ve broken the trust. They cannot accept that their site went down, or their computer crashed. Or maybe they can’t afford to be Janie on the Spot and jump every time their site needs a bit of an extra hand when an upgrade goes bibbeldy.
Of course, this is why the people who are testing are less likely to have a problem. They don’t want to waste time debugging stupid code problems, so they put in the time up front to make sure they don’t introduce bad code to their environment.
We create a vicious circle, because the only way to get better results from beta is to get more people beta testing. But few people are willing to fully beta test on a live site, and say what you will, there’s nothing quite like live testing on a real, production site. You can’t stress test reality until you get your setup on your server on your design on your everything.

Someone made a vague implication that my post about licenses was shots fired from someone who doesn’t ‘do’ but is only an ‘observer.’
That is quite inaccurate, though I don’t blog about that work here, and I don’t talk about it anywhere, for one simple reason: I can’t. I signed a paper, years ago, agreeing that the work I did for them would be private. I would neither reuse the code (which I can’t anyway) nor discuss it. In fact, I had to make a phone call to ask if I could blog about it in general. I understand why someone might assume I’m not speaking from experience, but that just makes an ass of you and me.
This isn’t about me refuting or dismissing allegations from someone who, for whatever reason, dislikes me and likes to make their hate public. No, this is about the interesting predicament of what happens when you can’t release information about your code.
Here’s your scenario. The front end is an open system, a plugin say, that one installs on WordPress. It’s GPLv2 (or later) compatible because it’s distributed code I want to put on WordPress.org. That right there is a requirement. Alright, so I have one half of a product that is GPL and Open Source. The other half lives on a server somewhere in the world and does all the backend work. The plugin? It just passes API data to and fro as needed.
I just described Akismet.
You and I know very little about how Akismet works on the backend. And here’s the thing, that’s how it should be. We have a lot of information on how to interact with the Akismet API but none about how it actually calculates what is and isn’t spam on the back end. I repeat – this is the way it should be.
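To put that in plugin terms, the open half is usually little more than a thin wrapper that ships data off to the closed half and acts on whatever comes back. A rough, hypothetical sketch of that shape (this is not Akismet’s actual API, and every name here is made up):
<?php
// Hypothetical open/closed split: the plugin (open, GPL) sends a comment to a
// remote service (closed) and only ever sees the verdict, never the logic.
function myplugin_check_comment( $comment_text ) {
	$response = wp_remote_post( 'https://api.example.com/v1/check', array(
		'body' => array(
			'comment' => $comment_text,
			'api_key' => get_option( 'myplugin_api_key' ),
		),
	) );
	if ( is_wp_error( $response ) ) {
		return 'unknown'; // Service unreachable; decide whether to fail open or closed.
	}
	// All the interesting decision-making happened on the remote server.
	return wp_remote_retrieve_body( $response ); // e.g. 'spam' or 'ham'
}
That wrapper can be 100% GPL and public while telling you precisely nothing about how the verdict gets made.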
Look at what Akismet does. It magically identifies spam. While it’s all well and good to be open source, the very first thing that would happen if they opened up all their code is that spammers would read it and subvert it.
But then again, we have things like SpamAssassin, an open source product I use on my email servers. Does this mean SpamAssassin is too dangerous to use? Does it mean it should be avoided? No, absolutely not! While it’s far from perfect, SpamAssassin does a phenomenal job of catching and stopping spam. But at the same time, because it’s public, it’s more likely to be subverted by clever spammers. Thankfully, the things it checks for are parts of email that a clever server admin can protect against and, all in all, it’s useful.
If we accept the fact that having a code base open or closed actually has very little impact on its usability, then why do we lock down our systems? That’s easy. Security and profit.
Profit is the easy answer. If a system is closed then you can’t download it and install it for yourself. This means if you want to use it, you have to pay. Again, we can look at Akismet and VaultPress, which I would wager actually are built on open source code, as examples. They don’t have to be free, after all. There’s nothing wrong with being closed, either.
By making a system a closed one that no one sees the backend code for, we create a product that only people with access to the source code can easily infiltrate. This, of course, offers no assurance that it will never be hacked; it only raises the bar, and it makes things harder to deal with when it does get hacked. But at the same time, it is harder to hack an unknown than a known, and it does make things somewhat safer.
Of course, if I told you all the ATM code in the world was not only open source but freely distributable and it was out there right now, how would you feel? That probably filled you with a little dread, thinking about how much trouble we already have with card skimmers and ATMs. If we have people who already know how to jack in, how much worse could it be if they knew how to encode software into the fake cards they make, and use them to backdoor your accounts?
Just because the code you work on is open source doesn’t mean you can talk about it in public. Just because the code is closed doesn’t mean you can’t.
I’m not talking about licenses here, though, I’m talking about contracts. I signed a paper about certain code I’ve written that prevents me from discussing it. So while I’d love to tell you everything about everything I’ve worked on, I can’t. But that’s not a bad thing. I’ve been privileged to work on the open and the closed, and it’s given me a greater appreciation and understanding of when we should and shouldn’t open our work. And this comes down to understanding the nature of the risks involved.
Things like ATMs, financial trading, and mortgages should be secured and private. Why? Because the risk is much too high. A license? Well a worst case scenario is that someone figures out how to backdoor a free license for themselves. Another is they figure out how to use someone else’s license to gain access to their information. Those are pretty bad. So if you want to make your license API open but the code behind it not, I support that call.
But. I do think you should have a way to manage your licenses and updates. That’s just business sense.

You may have heard of Semalt.com. I’ve heard them argue that they’re not spammers, they’re not evil, they’re not bad people.
You know what? They are. They are spamming, they are doing evil, and they’re bad people.
The other day I was checking my top sites in Google AdSense, trying to think of how to increase my passive income revenue, when I saw a random domain showing up on my list of sites. A site that wasn’t mine. A site that looked like a spammer’s.

According to Google, this happens when a site loads cached content of your domain (Google does this). It can also happen when someone copies your whole webpage into an HTML email, or if someone uses a bad iframe.
There’s also the obvious, but rare, case where someone uses your code without your knowledge.
Is this dangerous?
No. Except for the part where they screw up your analytics metrics and cause load on your server. Keep reading, I’ll explain.
My first thought was “Oh shit, Google’s going to yell at me!” I quickly checked that I had site authorization on, which means only domains I’ve approved and added can show my ads. Whew.
This is a big deal, by the way. While it would be nice to earn more views, if a site that isn’t mine shows my ads without my knowledge, I can get in trouble. More than once I’ve told off plugin developers about using AdSense in their plugins. This is for a couple of reasons: first, you can use it to track who uses your plugin (bad), but also Google doesn’t want you to. They outright say that you cannot put ads “on any non-content-based page.” An admin dashboard is not a content page. Done and done. No ads in your plugins, thank you.
But that’s exactly why I was worried!
Where is Semalt showing my ads?
The URL was http://keywords-monitoring-your-success.com/try.php?u=http%3A%2F%2Fexample.com (not my real URL). The only way I found it at all was by digging into my Google stats, where it showed up as a referrer. If you happen to pop that into a browser, you will be redirected to http://semalt.com/ — Real nice.
That is, by the way, how I knew it was Semalt.
Semalt is a professional SEO and marketing service. They literally make their money ‘crawling’ websites. When their site started, it was really the scammiest-looking thing I’d seen in a long time. A year and a half later, they’ve cleaned up their act a bit, but back in 2014 we all looked at them with a massive Spock eye.
As it turned out, they were using infected computers to scan the web. My personal guess is that they’re leveraging hacked machines to scan for vulnerable websites. Once they find a site, they hack it and use it to push malware.
That’s a guess. I have no proof. But based on their search patterns and behavior, it looks pretty likely to me.
Can you block them?
Yes! But there’s a catch.
You see, everyone says you can do this:
# Block visits from semalt.com
RewriteEngine on
RewriteCond %{HTTP_REFERER} ^http://([^.]+\.)*semalt\.com [NC]
RewriteRule .* - [F]
And while that works, it’s obvious that Semalt is on to us because now they use keywords-monitoring-your-success.com and other URLs as passthroughs.
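If you do want to keep playing whack-a-mole in .htaccess, you can stack extra referrer conditions as the passthrough domains show up in your logs. Here’s a sketch using the two domains I’ve actually seen; the list will never be complete, which is rather the point:
# Block Semalt and the passthrough referrers seen so far
RewriteEngine on
RewriteCond %{HTTP_REFERER} ^https?://([^.]+\.)*semalt\.com [NC,OR]
RewriteCond %{HTTP_REFERER} ^https?://([^.]+\.)*keywords-monitoring-your-success\.com [NC]
RewriteRule .* - [F]
The [OR] flag matters; without it, the conditions are ANDed together and a referrer would have to match both, which nothing ever will.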
Do you use WordPress.com? Or Jetpack? Great! Report the referrer as spam! WordPress.com blocked Semalt back in 2014, but obviously they’re on the rise again.
If you’re using Google Analytics, Referrer Spam Blocker is probably your best bet.