When WordPress 4.0.1 came out, a small number of sites broke.
For a while, we’ve been touting that minor releases to WordPress core, the ones we auto-upgrade for you, are very safe, very well tested, and very important. While all of that is true, the release brought a few people to me complaining that I was obviously wrong.
It’s true that the 4.0.1 release broke some sites. It was an object lesson in why I tell people not to reinvent the wheel. But this upgrade situation does not mean the upgrades aren’t safe, secure, or smart. It does bring things to mind, like what my friend David talks about when he considers WordPress at the enterprise level. I know people who are using this failed-upgrade scenario as a reason to claim that WordPress isn’t ready for big business, but I think they’re looking at it from the wrong perspective.

Let’s step back.
I used to work for The Man. I’m well aware of the machinations you go through to upgrade anything at a massive enterprise. One of the things they do is a code review. Every single upgrade is checked and tested, and a dry run is performed to ensure everything works the way they think it should. By allowing WordPress auto-upgrades, you give up that ability. For a massive corporation? I would turn off the auto-update.
But at the same time, this mythical major company running WordPress would have at least one person who knew WordPress. They would have someone whose job it was to review every single bit of code that went into their WordPress site. Each plugin would be checked, tested, and evaluated for security, and only installed if that WordPress Checker said it was good. Because that’s exactly what you must do in any and all enterprise situations.
David’s viewpoint is that the vetting of a site should be delegated.
My gut reaction, to say that they should know better, has to be tempered with the fact that no, they should not have to. It’s the job of every site owner to vet their system, but to make a platform that is truly global, that vetting should be delegated. Web hosts and security analysts should vet code for collisions and bugs. Theme and plugin shops should ensure that their products adhere to best practices. Putting accountability for the full stack on each site owner is not only inefficient but impractical. Inherent trust should exist that code in the official repository maintains a baseline level of quality, trust that is eroded when problems like the ones a subset of sites hit on this update occur.
And here, he and I disagree somewhat.
It’s the job of everyone who uses software to be aware of what they’re doing. Vetting the software before it goes into your system has to be someone’s job. WordPress core does an amazing job of this for you. WordPress core is safe. The 45,000 plugins and themes in the world don’t always get the same level of robust checking. Which means that when you introduce WordPress to your environment, you absolutely have to seriously review those third-party odds and sods you want to use because they’re so shiny and cool.
Web hosts vet code for collisions, sure. We do at DreamHost; that’s part of my job, and Mike’s! We know what’s going into WordPress and whether it’s going to blow things up for our customers at DreamHost. But as the site owner who found her site down one morning learned, after we’d upgraded her from PHP 5.2 to 5.4 and broke her WordPress 2.5 site, we cannot account for everything.
I think there’s a need for security specialists to review plugins, in a public forum, and point out who’s not doing things in the best way. I also think there’s a need for developers to remember that there’s a reason we do things a certain way, and while it’s fine not to, you have to keep in mind that it’s now your responsibility to keep a close eye on anything that changes in core that might cause your code to stop working.
For example: if I wrote a plugin that worked around the shortcode API for whatever reason, I would build a custom query on Trac for any ticket related to shortcodes and monitor it as an RSS feed. Or I might even subscribe to the Trac firehose and use a filter to pull out anything that so much as mentioned the word “shortcode.” Because I’ve now made a change that I know might be a problem someday.
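If you’re curious what that watcher might look like, here’s a rough sketch in Python. It assumes Trac’s standard timeline RSS feed on core.trac.wordpress.org and the feedparser library; the feed URL and keyword are illustrative, not gospel.

```python
# A minimal sketch: poll the WordPress core Trac timeline RSS feed and
# flag anything that mentions "shortcode". The feed URL and keyword are
# illustrative; swap in whichever API you're actually shadowing.
import feedparser

# Trac exposes its timeline as RSS; ticket=on limits it to ticket activity.
FEED_URL = "https://core.trac.wordpress.org/timeline?ticket=on&format=rss"
KEYWORD = "shortcode"

def check_feed():
    feed = feedparser.parse(FEED_URL)
    for entry in feed.entries:
        # Search both the title and the summary for the keyword.
        text = (entry.get("title", "") + " " + entry.get("summary", "")).lower()
        if KEYWORD in text:
            print(entry.get("published", "?"), "-", entry.get("title"), "-", entry.get("link"))

if __name__ == "__main__":
    check_feed()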
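```

Run that on a daily cron and you have an early-warning system. The specifics matter less than the principle: the moment you work around the API, you, not core, own the watching.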
Every business owner should know the risks of all the software they use, be it website or desktop. This responsibility is the cost of doing business. The size of the business and the importance of the software will change what resources you can afford to allocate to that part of your business, but you absolutely cannot ignore it.
While I really want to say that because WordPress core does due diligence you don’t have to, I would be a lying liar who lies. Even if we do as David suggests and have everyone in the world making sure things are vetted and checked and stamped, it still requires that the owners of a site listen to that information and not use the code that’s less optimal. Enforcing that would be impossible unless you wanted WordPress to outright deactivate code that doesn’t use the proper APIs. That would put a lot of weight on WordPress, slow it down, and be pretty annoying for people who are legitimately using non-standard methods of development and implementation.
No matter what, at the end of the day, the person who is responsible for the code quality is the person who wrote and maintains it. But the person responsible for their site is the person using the code. You have to know what code you’re putting into the site and be aware of the risks you’re introducing to your environment by doing so. If your website is your entire business, you cannot afford to be cavalier about these things.
Disasters happen. Understanding the risks will prepare you for dealing with them when they do.

I hate the five nines. The Six Sigma stigma has me wishing that everyone who tells me they’re a ‘black belt’ would please die in a fire. It’s not that I don’t think the process can work for some people, or that it’s useless as a whole, but that too many people treat it like an MBA: “I did this thing for a few months, I am now an expert.” I had a bunch of coworkers who did that. I hated them. I got to the point where, if you said “We need five-nines reliability,” I had a Pavlovian reaction that involved rolling my eyes and tuning out.
I don’t expect anyone to do all that 100% of the time, but I expect them to care about the things that are important to them as an entity. My web host should care about the servers not being on fire and serving up web pages. My bank should care that my money is safe and available. My government should care that it’s … Too soon? Anyway, the point is that you should care about what you do and provide the best service you can. Now, if 50% uptime is your best, maybe I’ll look for someone else. I am reasonable about these things. If email goes down, how fast did you get it back up? But to me, 50% isn’t reliable unless I’m looking for something that, intentionally, only works half the time.
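For what it’s worth, the arithmetic behind the jargon is simple, and it shows just how far apart 50% and five nines really are. A quick back-of-the-envelope sketch (the availability targets are just illustrative):

```python
# Back-of-the-envelope: how much downtime per year each availability
# target actually allows, assuming a 365-day year.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

for availability in (0.50, 0.99, 0.999, 0.99999):
    downtime = (1 - availability) * MINUTES_PER_YEAR
    print(f"{availability:.3%} uptime -> {downtime:,.1f} minutes of downtime per year")
```

Five nines works out to about five minutes of downtime a year; 50% is half the year.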

I have a slightly selfish reason for worrying about it. I work for a company where using a proxy to get to websites they’ve blocked is grounds for being fired. I’m not the only person who has this concern. The worst part is that if I went to a site that used a proxy without telling me, I could get ‘caught’ and fired. Oh sure, I could argue ‘I didn’t know!’, but the fact remains that my job would be in jeopardy. This is part of why I hate short links I can’t trace back. Whether a proxy is ‘right’ or ‘wrong’ doesn’t matter; what matters is the contract I signed that says I will not knowingly circumvent the office firewall. Now I have to be even more careful with every link I click, and the uneducated, who don’t know anything about this, are at huge risk.
I do a lot of forum support, and I can easily envision people getting cease-and-desist orders from the courts telling them to remove their proxies. I can see web hosts shutting down sites because they don’t want to deal with the hassle, or because their servers happen to be located in a country where the site being proxied is blocked. And without any effort at all, I can see the users, who don’t understand the risk they’re taking by running this proxy, screaming their heads off and blaming WordPress. They’re not stupid, and they’re not evil; they just don’t see the big picture.
This was written without any special insider knowledge. I’ve simply watched, paid attention, and kept track for the last two years. Often when I report a plugin, Mark and Otto are nice enough to explain things to me, and I’ve listened.
There’s an argument that ‘trust requires transparency’ when it comes to security.
To fix all this, you’d need to basically reboot the plugins directory: turn them all off, review each of the 18,000+ plugins, and turn them back on. Then you’d need an Otto or a Nacin going through each one to make sure every check-in is okay and every update and change isn’t spamming. Oh yes, that’s what happens to theme devs, didn’t you know? All releases are approved before they go live. Can you see the plugin developers agreeing to that? Actually, that complaint of mine is nonsense. If the rules changed tomorrow, maybe half the plugins in the repo would vanish and never come back, but most of the rest would be fine. Of course, we would need a dedicated team of people doing nothing but reviewing and approving plugins to keep up with the traffic.
But why worry about a simple list of removed plugins? Because the first thing I would do, if I were a nefarious hacker, would be to script a pull from that list and scan the web for sites that use those plugins, giving me a ready-made attack vector. See, we already know people don’t update plugins as often as they should (which is why Yoast’s ‘fix’ isn’t as good an idea as we’d hoped), but now not only are we leaving people at risk, we’re opening them to even more risk. If we email them to tell them the plugin’s risky, we have the same problem.