Half-Elf on Tech

Thoughts From a Professional Lesbian

Author: Ipstenu (Mika Epstein)

  • Varnish Cache and Cache-Control

    Varnish Cache and Cache-Control

    In our quest for speed, making websites faster relies on telling browsers when content is new and when it’s not, so they only download the new stuff. At their heart, cache headers are what tell the browser how long to cache content. There’s a special header called Cache-Control which allows each resource to decide its own policy: who can cache the response, when, where, and for how long. By default, the time we set for the cache to expire is how old a visitor’s copy can be before it needs a refresh.

    A lot of the time, I see people setting Cache-Control to none and wondering why their site is slow.

    Since I spend a lot of time working on DreamPress, which uses Varnish, I do a lot of diagnostics on people with slow sites. One of my internal scripts checks for Cache-Control so I can explain to people that setting it to none will tell Varnish (and browsers) literally not to cache the content.

    The way it works is that they actually set things to ‘no-cache’ or ‘no-store.’ The first one says that the content can actually be cached, but the browser has to check back and make sure the resources haven’t changed before reusing them. It’s not really ‘no-cache’ but ‘check-cache.’ If nothing’s changed, there’s no new download of content, which is good, but every request still costs a round trip to the server.

    On the other hand, ‘no-store’ is really what we think about when we say not to cache. That tells the browser and all intermediate caches that every time someone wants this resource, download it from the server. Each. Time.
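
    To make the difference concrete, here’s a minimal sketch of the three policies in PHP. The values are illustrative, not recommendations:

    // Three alternatives; a response sends one of these, not all three.

    // Cacheable: browsers and Varnish may reuse this copy for an hour
    // without ever asking the server.
    header( 'Cache-Control: public, max-age=3600' );

    // 'no-cache' (really 'check-cache'): the copy may be stored, but it
    // must be revalidated with the server before every reuse.
    header( 'Cache-Control: no-cache' );

    // 'no-store': nothing may keep a copy; the full response is
    // downloaded fresh from the server. Each. Time.
    header( 'Cache-Control: no-store' );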

    What does this have to do with Varnish? Well here’s the Varnish doc on Cache-Control:

    no-store: The response body must not be stored by any cache mechanism;

    no-cache: Authorizes a cache mechanism to store the response in its cache but it must not reuse it without validating it with the origin server first. In order to avoid any confusion with this argument think of it as a “store-but-do-not-serve-from-cache-without-revalidation” instruction.

    Since Cache-Control always overrides Expires, setting things not to cache or store means you’re slowing down your site. Related to this, if you set max-age to 0, then you’re telling visitors that the page’s cache is only valid for 0 seconds…

    And some of you just said “Oh.”
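
    For anyone who wants to see that mistake spelled out, here’s a minimal sketch where the two headers fight and Cache-Control wins (WEEK_IN_SECONDS is one of WordPress’s time constants):

    // Even with an Expires date a week out, the max-age=0 below wins:
    // the visitor's copy is stale after zero seconds, every time.
    header( 'Expires: ' . gmdate( 'D, d M Y H:i:s', time() + WEEK_IN_SECONDS ) . ' GMT' );
    header( 'Cache-Control: max-age=0' );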

    Out of the box, WordPress doesn’t set any of these headers badly. That generally means that if your site kicks out those horrible headers, it’s a plugin or a theme or, worst of all, a rogue bit of JavaScript that’s doing it. The last one is nigh-impossible to sort out. I’ve only been able to do it by disabling plugins and narrowing down which one does it. The problem is that just searching the code for ‘Cache-Control’ can come up short when things are stashed in JavaScript.
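
    One trick that can help narrow it down: PHP can list every header queued up to be sent, so a throwaway must-use plugin can log the offenders while you flip plugins on and off. A rough sketch; the hook and the log message are just what I’d reach for:

    // Drop this in wp-content/mu-plugins/ while debugging, then delete it.
    add_action( 'shutdown', function () {
        foreach ( headers_list() as $header ) {
            if ( stripos( $header, 'Cache-Control' ) === 0 ) {
                error_log( 'Header sent: ' . $header );
            }
        }
    } );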

    But there’s some kind of cool news. You can tell WordPress to override and not send those headers. I’ve not had great success using this when it’s a script being an idiot, but it works well for most plugins and themes that seem to think not caching is the way to go.

    From StackExchange:

    function varnish_safe_http_headers() {
        // Ask IE to use its most current rendering engine.
        header( 'X-UA-Compatible: IE=edge,chrome=1' );
        // Stop PHP sessions from emitting their own anti-caching headers.
        session_cache_limiter( '' );
        // Public, and cacheable by shared caches like Varnish for two minutes.
        header( 'Cache-Control: public, s-maxage=120' );
        if ( ! session_id() ) {
            session_start();
        }
    }
    add_action( 'send_headers', 'varnish_safe_http_headers' );
    

    And yes, it works on DreamPress.

  • Uniqueness Matters

    Uniqueness Matters

    This email gets sent a lot to plugin devs lately:

    All plugins should have unique function names, defines, and classnames. This will prevent your plugin from conflicting with other plugins or themes.

    For example, if your plugin is called “Easy Custom Post Types”, then you might prefix your functions with ecpt_{your function name here}. Similarly a define of LICENSE would be better done as ECPT_LICENSE.

    Please update your plugin to use more unique names.
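
    In code, that advice looks something like this; a minimal sketch for the hypothetical ‘Easy Custom Post Types’ plugin, with every global name carrying the ecpt_ prefix:

    // A unique define instead of a generic LICENSE.
    define( 'ECPT_LICENSE', 'my-license-key' );

    // A unique function name, and a unique style handle, so no other
    // plugin can stomp on the enqueue.
    function ecpt_enqueue_styles() {
        wp_enqueue_style( 'ecpt-styles', plugins_url( 'css/ecpt.css', __FILE__ ) );
    }
    add_action( 'wp_enqueue_scripts', 'ecpt_enqueue_styles' );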

    And once in a while someone asks why we care so much.

    There are 38,308 active plugins in the WordPress.org repository. If every one of them uses a global define of IS_PLUGIN_SETUP then they will all conflict with each other. If half use a script handle of plugincss then all those plugins will stomp over each other when it comes to enqueuing the CSS.

    It’s literally a numbers game.

    Every day we get at least 30 new plugin submissions to WordPress.org. That means every day at least 30 new potential conflicts show up. And it’s not just plugins. In WordPress 4.2, a new function was added: get_avatar_url()

    This was a great idea that saved people countless hours of work. Unless they logged in to see the error Fatal error: Cannot redeclare get_avatar_url() prance across their screen.

    Now in this case, get_avatar_url() was a function theme authors had previously been told to write themselves, and it was later added to core. All theme devs hosting on WordPress.org were notified, and it was posted on the change blogs. But not everyone remembers to check those. And not everyone updates their themes right away. In a way, this probably could have been communicated better, but had the themes called their function mythemename_get_avatar_url() then this wouldn’t have been a problem.
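
    For a helper that might land in core some day, the belt-and-braces version is to prefix it and wrap the definition in a function_exists() check. A minimal sketch, with mythemename_ standing in for your own prefix:

    if ( ! function_exists( 'mythemename_get_avatar_url' ) ) {
        function mythemename_get_avatar_url( $id_or_email ) {
            // Hand off to core once it exists (WordPress 4.2+).
            if ( function_exists( 'get_avatar_url' ) ) {
                return get_avatar_url( $id_or_email );
            }
            // Fallback for older WordPress: pull the src out of get_avatar().
            preg_match( '/src=["\']([^"\']+)["\']/', get_avatar( $id_or_email ), $matches );
            return isset( $matches[1] ) ? $matches[1] : '';
        }
    }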

    Prefix everything. Make it unique to your plugin or theme. WordPress core is ‘home free’ and shouldn’t have to, but you should.

  • Mailbag: Multisite Subdomains Live Where?

    Mailbag: Multisite Subdomains Live Where?

    Heather is confused, and I don’t blame her:

    I am want to use subdomains in my multisite. 1. Install WordPress in the subfolder and set it up to run from root before you create your subsites. 2. You should not use www in your URL ….. Where exactly do need to change this? Settings/ General ( that’s how i saw it in your book) or in my file manager, having to change it in many different files…. ( saw and read this from other internet sources).

    Let’s take this by the numbers.

    1. Install WordPress in the subfolder and set it up to run from root before you create your subsites.

    If (and that’s a big if) you want to install WordPress at example.com/wordpress but have the URL look like example.com then you must do this before you activate Multisite. Can it be done after? Yes. But you will go insane.

    2. You should not use www in your URL

    If you’re using subdomains, just don’t. The issue where WordPress breaks if you use the www here has less to do with WordPress than with the various hosts out there and how they handle the www/non-www redirects. Save yourself a headache. Don’t use www; you don’t need it. Yes, I know Google does. They don’t care, so long as you’re consistent.

    In both cases, on your single-site install of WordPress, you go to the General settings panel. The value for Site Address (URL) is what you want people to see when they visit your site. The one for WordPress Address (URL) is where WordPress is installed.

    Make sure they both match in terms of schema and www. Then change the Site Address from http://example.com/folder to http://example.com and save it.
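
    If you’d rather pin those two values in code than trust the database settings, the same pair maps to two constants in wp-config.php. A minimal sketch, with example.com standing in for your domain:

    // Site Address (URL): what visitors see.
    define( 'WP_HOME', 'http://example.com' );
    // WordPress Address (URL): where WordPress is actually installed.
    define( 'WP_SITEURL', 'http://example.com/folder' );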

    The official directions are on the codex – Giving WordPress Its Own Directory. There’s a bit more when it comes to moving a couple of files, but really that’s it. Once it’s done and working, go ahead and activate Multisite.

    By the way, I always get the WordPress Address and Site Address confused. It’s not just you.

  • Trust the Changelog

    Trust the Changelog

    Recently there were a couple WordPress plugins with fairly major security fixes. But you wouldn’t know it by looking at their changelogs.

    The changelog is a section of a product’s readme that describes what changed. For most people, it’s a list of items like this:

    • Added feature X
    • Corrected typo
    • Security fix

    The problem many people have is that the last one is often left rather vague. I’m guilty of this myself. In a recent fix, I simply said “Security fix: Sanitizing _POST calls to prevent evil.” and “Security Fix: Implementing nonces.”

    The primary reason we keep changelogs a bit vague is that we don’t want to alert hackers to the vulnerabilities. People don’t update their code right away, so every time we publicize a security issue, the people who haven’t updated immediately are at greater risk of being hurt.

    But if we don’t tell people how important it is to update, how do they know how important it is to update?

    There’s the real issue. There’s not yet a proper balance between “You should upgrade as soon as possible” and “You need to upgrade now, or you’re doomed.” My security issue was only exploitable by people with admin access. It would be possible to trick an admin, with a cleverly crafted page, but … The effort it took me to apply a nonce check and sanitize things was minimal. From my end, it’s a very minor fix. From a user’s end, it’s an exceptionally rare hack and unlikely to occur.

    The right answer here is “Always upgrade to the latest version of code as soon as possible.” The problem is “as soon as you can” gets bumped out if it’s not mission critical. A patch that adds in a filter? Not a big deal. A patch that secures my site? Should be a big deal. I would argue that any time anyone says “This is a security fix” then you shouldn’t have to weigh how likely the hack is to impact you. Instead, security is a watchword that tells you to update the software “immediately.”

    Which brings us to two agreements we need to start making with people. The agreement of developers to make things easier for users, and the agreement of users to trust developers. If we want people to upgrade, they have to trust us. And if they’re going to trust us, we have to be reliable and consistent.

    As developers, we promise not to flag something as a critical security fix unless it really is one. If there’s a major issue with our code, we will push a patch as soon as possible that deals only with that issue. There will be no feature changes, no little fixes, no minor tweaks. A security release will only be a security release.

    Furthermore, to enable people to update properly, we will use semantic versioning properly. This will allow us to back-port fixes to minor releases as far back as is logical, because you can know that version 1.2.8 is the latest version of the (old) 1.2 branch, and 1.5.3 is the latest of the (current) 1.5 branch. The next time we add in new features, we will properly version our code as 1.6 so that you know which branch is current.

    As users, we promise to trust your security-only releases and upgrade our copies of your code when a minor release that is a security issue is released. If you release a version 1.5.4 and not a version 1.4.4, we will trust that either the 1.4 branch is not subject to this security issue, or the fix could not be back-ported. If you inform us that we must upgrade to the 1.5 branch because there’s no way to secure 1.4, we will expedite our upgrade.

    In order to keep upgrades easy, we will not edit the code in ways that make it impossible to upgrade. We will properly use functions and actions and filters and hooks. We will make regular backups as well as immediate ones before upgrading.

    Of course… That’s a perfect world. But I’m going to do my part as a developer and start versioning better. If I do that, and if I, as a user, hold up the other end, then we can get to a place where all disclosures of security issues happen in tandem with a release, because we know that everyone will upgrade immediately.

    A place of trust.

  • Heartbeat API

    Heartbeat API

    For the longest time I didn’t really get the Heartbeat API. I got the very basics of it, but it took me a while to understand what it was really doing.

    We’re used to interacting with our websites in a very one-directional way. I click a link, I go to a page. The Heartbeat API is bi-directional communication, which can best be translated as ‘shit happens without you having to click.’ The browser and the server are happily chatting to each other all the time, without me having to do anything.

    If you’ve ever used Google Docs or MS Word or Apple Pages and noticed you don’t have to press ‘save’ all the time, that’s the idea behind the Heartbeat API. It does the background things for you, the thing you should be doing anyway (saving often). In WordPress, the most obvious thing Heartbeat does is save your posts as revisions.

    That doesn’t stop neurotics like me from pressing ‘save’ all the time.

    Of course, this can get a little expensive, checking all the time, so the Heartbeat API is smart. It only checks every 15 seconds by default (you can change this), and it only checks when you’re on a page doing something. If you just leave a page open, it slows down and, after an hour, turns off.
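
    If you do want to slow the pulse down yourself, there’s a filter for the interval. A minimal sketch; WordPress clamps the value to a sane range (roughly 15 to 120 seconds):

    // Stretch the default heartbeat from 15 seconds to a full minute.
    add_filter( 'heartbeat_settings', function ( $settings ) {
        $settings['interval'] = 60; // In seconds.
        return $settings;
    } );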

    But besides saving, the Heartbeat API can share information between the systems. For example, it can ping out to Jetpack and check if it’s up and everything’s working, or if you have to reconnect your LinkedIn settings. And since it’s JavaScript-based, it doesn’t reload the page.

    You can use it to alert logged in users to new content. Imagine a post that suddenly had an overlay of ‘This post has been updated…’ Doing that requires two parts (there’s a sketch after this list):

    1. Hook into the send call and add your data
    2. Hook into the receive call and show the data
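
    Here’s a rough sketch of those two parts for logged-in visitors (logged-out traffic goes through the separate nopriv hooks); the myplugin_ names and the console output are hypothetical stand-ins for a real overlay:

    // Heartbeat doesn't load on the front end by default, so enqueue it.
    add_action( 'wp_enqueue_scripts', function () {
        wp_enqueue_script( 'heartbeat' );
    } );

    // Print the two JavaScript hooks in the footer.
    add_action( 'wp_footer', function () {
        $post_id = get_queried_object_id();
        echo "<script>
        // Part 1: hook into the send call and add our data to the pulse.
        jQuery( document ).on( 'heartbeat-send', function ( event, data ) {
            data.myplugin_post_id = {$post_id};
        } );
        // Part 2: hook into the receive call and show the data.
        jQuery( document ).on( 'heartbeat-tick', function ( event, data ) {
            if ( data.myplugin_modified ) {
                // A real plugin would compare timestamps and show an overlay.
                console.log( 'Post last modified: ' + data.myplugin_modified );
            }
        } );
        </script>";
    } );

    // Server side: answer the sent data with the post's last-modified time.
    add_filter( 'heartbeat_received', function ( $response, $data ) {
        if ( ! empty( $data['myplugin_post_id'] ) ) {
            $response['myplugin_modified'] = get_post_modified_time( 'U', true, (int) $data['myplugin_post_id'] );
        }
        return $response;
    }, 10, 2 );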

    Pippin has a great Heartbeat API example but he also warns people:

    I’ve managed to bring down servers by using the Heartbeat API on the frontend.

    If you trigger things to be called too often, you can totally crash a server.

    The Heartbeat API is a step towards making our sites really responsive and modular. Instead of statically checking via PHP whether someone’s logged in and changing the display based on that, Heartbeat can change things dynamically, on the fly. This would allow caching systems like Varnish, or static-file caches like WP Super Cache, to save the cache and speed up your site while still being dynamic. It lessens the weight on wp_ajax_ and makes things work with caches, rather than bypass them.

    And that will make our sites beat faster for everyone.

  • Mailbag: Would You Review…

    Mailbag: Would You Review…

    From ‘anon’ in Orange County:

    Why did you review plugins at WCOC but you said no when I asked you to review mine?

    That’s not exactly what he said. I’ve cleaned up the English.

    At WordCamp Orange County, I was given the rare opportunity to put value judgements on plugins. I never get to do this. I review plugins on WordPress.org every day (pretty much). I test them and evaluate them and send back comments. I look for bugs and security and guideline issues.

    What I never get to do is tell people how I feel about their plugins. So when WCOC asked me to judge the Plugin-a-palooza contest à la American Idol, I replied “Oh I’m Simon Cowell. I’m in.” And when Ryan Seacrest — I mean Chris Lema asked me why I agreed to do it, I said it was because I never got to be myself when I reviewed plugins. I never got to just review and tell people how I felt.

    They should have known. Right?

    The plugin that won was not the one I liked the best. The one I liked the best was small and simple (even if the code was a little complex). It did one thing and it did it very well. It wasn’t perfect by any means, but it worked. That plugin came in second.

    The plugin that won, I wanted to like. In fact, I said that. I said “I really wanted to like this plugin.” And everyone went ‘oooooh’ (or possibly ‘boooo’). Chris jumped to my defense and pointed out I had warned them. So I detailed why I didn’t like the plugin, from the UX perspective and from the functionality. Some of what I didn’t like wasn’t its fault at all. It was trying to fix an imperfect system in a sensible way.

    And I was asked, in the session, if it mattered to me if the flaws in the plugin were the fault of core, the APIs, or the plugin.

    No.

    Look. When you make a plugin, you’ve made it knowing the world around you and knowing what’s already broken (or non-optimal). So when you’ve made decisions on how you handle things, I firmly feel that you should be graceful and reliable and fail intelligently. And I don’t think they did. I think they solved the main problem, but in doing so created a host of others.

    When someone asks me to review their code (theme or plugin), it’s like being asked if the pants make their butt look big. They want to hear “This is great.” I believe coding is as much art as math (and yes, I see the beauty in math), so it’s also a very critical and personal thing to be told I don’t like this thing you’ve created. It hurts!

    But for the folks in plugin-a-palooza, they knew what they were getting into. Maybe they didn’t know it was with me, but they knew. They knew they’d be judged on merits and flaws. Most people who email me asking for those reviews don’t know that. They think maybe they’ll get a security review or a comment about how things could be better.

    What they’re going to get is how I feel about their work. Did the art speak to me? Did I get confused when I used it? Did I enjoy it? Do I think it did what it set out to do? Do I think it fixes the issues it claims to address?

    Most people really aren’t ready for the real answers. And as much as it’s nice to be able to be me for a while, I’m still always chained by the reality that my reviews come from someone who’s involved in the WordPress community. Just imagine my ‘testimonials.’

    So in the interests of not starting fights, not giving myself headaches, and not making everyone grumpy, I will decline to review your code today.