Half-Elf on Tech

Thoughts From a Professional Lesbian

Tag: servers

  • Take my SPDY, Please

    Take my SPDY, Please

    When I upgraded my server to Apache 2.4, I lamented that the choice had killed any ability I had to run SPDY. This led to me installing an Nginx proxy on my box and being pretty happy with it.

    I wanted to like the idea of SPDY on Apache, but there were serious issues with mod_spdy. First of all, it’s incompatible with Apache version 2.4. That sucks. While someone had forked it, issue number two made me worry. You see, mod_spdy requires OpenSSL version 1.0.1c and modifies mod_ssl. If those were Google’s own changes, I might be okay with it, but now we’re talking about trusting some random person out there. No. Finally, the dippy thing hadn’t been updated in years.

    Someone finally shamed Google enough, because earlier this year Google gave mod_spdy to Apache. The plan is for it to be a part of Apache 2.4, as well as the future 2.6/3.0 world:

    Being a part of Apache core will make SPDY support even more widely available for Apache httpd users, and pave the way for HTTP/2.0. It will also further improve that support over the original version of mod_spdy by better integrating SPDY and HTTP/2.0’s multiplexing features with the core part of the server.

    Finally!

    Except not. Sadly, there’s been very little activity since this summer. You can look at the code on Apache SVN and mod_spdy hasn’t been touched in 3 months. It’s sad to see this linger. I had high hopes that Apache would jump and run, but they haven’t even made it work with Apache 2.4 yet.

    I’m not going to hold my breath for parity on this one just yet.

  • SSL Intermediary Certificates

    SSL Intermediary Certificates

    Every now and then, my Android friends tell me my store won’t work on their phones.

    Android warning: Your connection is not private

    Now my store works on Chrome, Firefox, Safari, and IE. I get a green lock, which is what you’re looking for on Chrome, and SSL Labs comes back … with varying results of stupidity. I tend to get this:

    Unexpected failure – our tests are designed to fail when unusual results are observed. This usually happens when there are multiple TLS servers behind the same IP address. In such cases we can’t provide accurate results, which is why we fail.

    Now this is a ‘valid’ failure. I have one IP and a multi-domain certificate (ipstenu.org, mothra.ipstenu.org, store.halfelf.org). It’s stupid, mind you, since sometimes it works and sometimes it doesn’t, and it gives me a headache. If you look on Digicert or SSLShopper, they both come back just fine. I’ve started to think that the SSL Labs cache is drunk. I’m going to assume I’m okay based on sslcheck, which gives me a B because it can’t tell if I patched for BEAST (I did).

    That said, I did some research and determined I was not the only person having this issue, specifically with a Comodo cert! As it happens, the issue was in part due to a missing intermediate certificate in my file. If someone’s already visited another website that uses the same certificate seller, the intermediate certificate is remembered in the browser. Sounds great, right? The site loads faster! But if the visitor hasn’t been to any site using that intermediate, they don’t have it cached, my server wasn’t sending it, and the connection fails.

    But why does this only happen on an Android phone? Your browser on your big computer has a whole mess of certificates it saves for you, to make things faster for everyone. Your phones don’t.

    To solve a missing intermediate certificate in the SSL connection, you have to add the intermediate certificate to your own certificate file. This is a little annoying with cPanel/WHM, because I can only do it as root. I’d previously added everything via cPanel with my ipstenu.org login, because it was per domain, right? The trick here is that I have to not just add the certificate by pasting it in, but also grab the other two certs that came with it:

    Two More Certificates!

    Notice how there are four? The first one is my certificate, the one I pasted in. The second is my root certificate; leave it alone. The bottom two I had to add at the bottom of the cert page, where it says “Certificate Authority Bundle (optional):” Those I pasted in, one after the other, and saved. In my case, I was so annoyed I deleted them all and re-added everything, pasting in the main cert and using auto-fill, and then manually adding in the bundle.
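    If you want to double-check whether your server is actually sending the whole chain (rather than relying on whatever a visitor’s browser happens to have cached), openssl will show you exactly what gets served. A minimal sketch, using my store’s hostname; the certificate file names in the second half are made up, so swap in whatever your CA actually gave you:

    # Show every certificate the server sends during the handshake
    openssl s_client -connect store.halfelf.org:443 -servername store.halfelf.org -showcerts < /dev/null

    # Outside of cPanel, the fix is plain concatenation: your cert first,
    # then each intermediate from the CA (file names are illustrative)
    cat store_halfelf_org.crt comodo_intermediate_1.crt comodo_intermediate_2.crt > store_halfelf_org_bundle.crt

    If the first command only shows one certificate, the intermediates aren’t being sent, and anyone without them cached gets that scary Android warning.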

    I do find it interesting to note that this only failed on Android phones, though.

  • Learning nginx

    Learning nginx

    I’m an nginx rookie. In fact, I moved a site specifically to nginx so I could sit and learn it. While this site, today, is ‘on’ nginx, it’s actually an nginx proxy that sits in front of Apache 2.4, not because I think Apache is necessarily better, but because after all this time, I still can’t stand the loss of the dynamism of my .htaccess.

    When I was experimenting, though, one of the things I started to do was recreate my ‘tinfoil hat’ .htaccess rules in nginx. What are ‘tinfoil hat rules’? They’re things I’ve tweaked in .htaccess to make it harder for nefarious people to look at my code and get into my servers. They’re also general ‘stop people from being jerks’ rules (like preventing hotlinking).

    This isn’t complete, but it’s everything I’d started to compile and test.

    Header

    ######################
    # TinFoil Hat Rules
    

    This is pretty basic, I like to document my section before I get too far into this.

    Directory Listing

    # Directory Index Off
    location / {
      autoindex off;
    }
    

    Directory listing is like when you go to domain.com/images/ and you get a list of all their images. This is just a bad idea, as people can also use it to list PHP files you might have (many plugins lack an index.php, and no, this isn’t a bad thing). This simple rule will protect you.

    Hotlinking

    # Hotlinking
    location ~* \.(jpg|jpeg|png|gif)$ {
        # Allow empty and blocked referers as well as our own domains
        valid_referers none blocked elftest.net *.elftest.net;
        if ($invalid_referer) {
            return 444;
        }
    }
    

    Ah. Hotlinking. This is using images in-line from someone else’s server, like <img src="http://example.com/images/yourimage.jpg" /> – if the page doing it is on example.com, that’s fine. If it’s not, then that’s bad. Never ever hotlink images unless the site provides you a hotlinking URL. I cannot stress this enough.

    This code comes straight from the nginx wiki, and works great.

    Protecting wp-config.php

    This is pretty straightforward. I want to block anyone from hitting that directly, any time, any where.

    location /wp-config.php {
        deny all;
    }
    

    Done.

    Brute Force Protection

    If you have the ngx_http_limit_req_module, then you can rate-limit how many requests an IP can make to a file.

    location /wp-login.php {
        limit_req zone=one burst=5;
    }
    

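    One catch: that zone=one has to be defined before nginx will accept it, and the definition lives up in the http block, not the location. A minimal sketch, where the zone name, size, and rate are just numbers I picked for illustration:

    http {
        # 10MB of shared memory for tracking IPs, one request per second each
        limit_req_zone $binary_remote_addr zone=one:10m rate=1r/s;

        server {
            location /wp-login.php {
                # Allow a short burst of 5 before requests start failing
                limit_req zone=one burst=5;
            }
        }
    }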
    And that’s all I got to…

    And that is, sadly, as far as I got before I started playing with Apache 2.4 and enjoying its new <If> directives over nginx. What about you? What are your nginx security tweaks?

  • Home Affects Your Website

    Home Affects Your Website

    There’s a vulnerability in an old version of MailPoet which, according to Sucuri, is the reason ‘thousands’ of WordPress sites have been compromised. I do not doubt their claim, nor the validity of the statement, but I did wince mightily at their wording.

    At the time of the post, the root cause of the malware injections was a bit of a mystery. After a frantic 72 hours, we are confirming that the attack vector for these compromises is the MailPoet vulnerability. To be clear, the MailPoet vulnerability is the entry point, it doesn’t mean your website has to have it enabled or that you have it on the website; if it resides on the server, in a neighboring website, it can still affect your website.

    All the hacked sites were either using MailPoet or had it installed on other sites within the same shared account (cross-contamination still matters).

    I bolded the important part here.

    I disagree with the broad, sweeping implication that statement makes. While they do soften it in the next paragraph (and yes, you should read the links), it gives a bad impression of what the issue really is. If the vulnerable code resides on your server, under your user account, in a web-accessible directory, then yes, it can affect your website. However, for any decent webhost, your site being vulnerable will not result in my domain being hacked.

    Good hosts don’t permit users to access each other’s files. I know it’s semantics, but the implication is that a stranger’s website on your server will make you vulnerable. And that’s just not a given. I know that explaining the nature of relationships between user accounts and access is fraught with complexity, but this is a place where I look at security sites and bang my head on the table because they’re not educating people.

    The way security works for most people is entirely an FUD scenario. They fear what they don’t understand, which generates more uncertainty and doubt. I spent time recently trying to break down that wall and talk about the behaviors in us that make things risky, and I’ll be speaking at WordCamp LA about it in September of this year. I understand totally why Sucuri, and many other people, phrase it this way, but since I firmly believe that education is the only true way to mitigate hacked sites, I want to explain the relationship of files to people.

    A bed in a jail cell

    If you’ve ever FTPd or SSHd into your website, you know you have a user ID. That ID owns the files on your server, but it’s not the only account on a server. Your ID is yours and yours alone. You can give someone else the password, but please don’t unless you trust them with your car. Once you’re logged in with your account, everything you see is connected. This means if you can see it, then anyone else who gets into your account can see it.

    How does WordPress play into this? Well if you can see it logged in, then so can WordPress, to an extent. If a plugin or a theme has a specific kind of vulnerability, then it can be used to extract information like everything under that user account. A pretty common vulnerability I see is where the plugin allows you to read any file on the system, including the wp-config.php file, which gives people your database username and password (and it’s why I tell people to change all their passwords).

    A very common thing for people to do, and I do this myself, is to run multiple domains under one user account. Many times they’re called ‘add-on’ domains. In this case, you can actually visit https://ipstenu.org/helf.us/ and see the same site as you would at https://helf.us. This is a problem fairly easily fixed with .htaccess (though if, like me, you also have mapped domains, it gets much messier):

    RewriteEngine On
    RewriteCond %{HTTP_HOST} ^(www\.)?example\.com$ [NC]
    RewriteCond %{REQUEST_URI} ^/addon1/(.*)$ [OR]
    RewriteCond %{REQUEST_URI} ^/addonN/(.*)$
    RewriteRule ^(.*)$ - [L,R=404]
    

    All that said, if someone knows that helf.us and ipstenu.org are on the same server, and the software I use on one is vulnerable, it can be shockingly trivial to infect the other.

    What is not trivial would be using an exploit on ipstenu.org to hack ipstenu.com. Yes, it redirects you to ipstenu.org, but it is a real website. The reason I would be shocked to find it infected, if ipstenu.org was, is that they’re under separate user accounts. If you logged in with the ipstenuorg ID, you would not, could not, see ipstenucom.

    ipstenuorg@ipstenu.org [/home]# ls -lah
    /bin/ls: cannot open directory .: Permission denied
    

    And even if they knew there was a folder called ipstenu.com, they couldn’t do anything about it except get into it:

    ipstenuorg@ipstenu.org [/home]# cd ipstenu.com
    ipstenuorg@ipstenu.org [/home/ipstenu.com]# ls -lah
    /bin/ls: cannot open directory .: Permission denied
    ipstenuorg@ipstenu.org [/home/ipstenu.com]# cd public_html
    -bash: cd: public_html: Permission denied
    

    The separation of the users is going to protect me.

    So to reiterate, if a site (or the account that owns a site) has access to other sites, and is hacked, yes, those other sites are at high risk. If the site has no access to anything but itself, the other sites will not be hacked through it. And as I said before, most hosts go to tremendous lengths to ensure you cannot read someone else’s files or folders. The whole reason I could get into the ipstenu.com folder is that the permissions on that folder allow it. Would it be safer to prevent it? Sure! And actually that’s not what you normally see when you’re on my servers.

    ipstenuorg@ipstenu.org [~]# cd ../
    ipstenuorg@ipstenu.org [/home]# ls -lah
    total 12K
    drwx--x--x 37 ipstenuorg ipstenuorg 4.0K Jul 23 02:04 ipstenu.org/
    ipstenuorg@ipstenu.org [/home]# cd ipstenu.com
    -jailshell: cd: ipstenu.com: No such file or directory
    

    That’s right, I use a jailed shell to prevent shenanigans, and even when I don’t, things are remarkably safe because I don’t permit users to snoop on other users. That said, as I was reminded, we must never underestimate the ability of a fool, playing at sysadmin work, to take their own pants down. It’s possible for a user to set up their own domain to be searchable by other accounts on the server, and to make it writable by those other users, which can cause a lot of problems.
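    If you’re worried you’ve done that to yourself, checking is quick. That drwx--x--x on my home directory is mode 711: only the owner can read or list it, and everyone else can only pass through to paths they already know. Here’s a sketch with a made-up username; if you tighten things further, make sure the web server can still reach public_html:

    # What are the permissions and owner on a home directory?
    stat -c '%a %U %n' /home/exampleuser

    # 711: owner has full access, everyone else can traverse but not list
    chmod 711 /home/exampleuser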

    Here’s your takeaway. Everything that is installed on your domain, active or not, is a potential vulnerability. Upgrade everything as soon as you can, delete anything you’re not using, don’t give more people the keys to the castle than you have to, and try really, really hard to think about what you’re doing.

  • La Vitesse 2: Cruise Control

    La Vitesse 2: Cruise Control

    Now you know all about caching and how it works. But what about speeding up the server itself?

    Near the end of the previous post, I mentioned that all the caching in the world didn’t really speed up the server. This is somewhat of a lie, since if, say, you’re using Varnish to cache your site, then your visitors won’t be hitting your WordPress install, which frees the server up for you to do work. But it’s not the full picture.

    WordPress is big, and it’s getting bigger and more complex and more impressive. So is Drupal and … well, pretty much everything else. In order to make your site do more, like all those super fancy layout transformations, we have to upgrade and innovate. But then you start getting into extending these apps, like using custom fields and extra meta values to store more information so you can change search results in more impressive ways! Your site scrolls and changes backgrounds! Your site dynamically changes what products are available based on check boxes, without reloading!

    What did that have to do with caching? Well … how do you cache things that aren’t static?

    A Cruise Ship

    My coworker Mike likes to talk about things that should be cached and things that should never be cached. Things that have to be dynamic and run without a page refresh, like AJAX and JavaScript, can be cached to an extent, since those plugins and Varnish will just keep the code in-line, which means it’ll still run. But when you start looking at dynamic things like shopping carts, we hit a new world and a new wall. I’m not even talking about that level of caching, though. I’m talking about going back a layer, into the part where WordPress (or any app) has PHP query the database. If we speed that up, caching the safe content, can’t we speed the whole site up? You bet we can!

    A few years ago I talked about APC and how I was using it to speed up PHP by having it cache things. Then less than a year later, I switched to Zend and memcached. I did those things because I decided that it would be better to have my server, a workhorse, do the hard work instead of asking WordPress to do it. And this was, in general, a pretty good idea.

    Memcached is an add-on for your server, and acts as “an in-memory key-value store for small chunks of arbitrary data (strings, objects) from results of database calls, API calls, or page rendering.” In English? It remembers what you did and can retrieve results faster because of it. Of course, you have to tell your apps to play well with it, but once you do, the backend of your site starts to speed up because, hey, caching! The Illustrated Memcache’d story is kind of an awesome explanation (the images on the archive page are broken, but the links work). And yes, I do use memcached and Zend Optimizer+ on my server, because it really does make things faster, even when two of the ten domains are pulling 10k pageviews in a day.

    I keep telling everyone my server isn’t overkill….
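    If you’re curious whether any of that is actually wired up on your own box, a couple of quick checks will tell you. These assume a stock memcached install on its default port, and that your distro ships the memcached-tool helper script:

    # Is memcached running?
    service memcached status

    # Is the PHP extension loaded so your apps can talk to it?
    php -m | grep -i memcache

    # Ask memcached for its stats (hits, misses, memory in use)
    memcached-tool localhost:11211 stats

    On the WordPress side, the glue is an object-cache.php drop-in sitting in wp-content, which plugins like Memcached Object Cache provide for you.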

    The point of that, though, is that the other end of speed is totally separate from your application code. When you install WordPress, you know it runs SQL and PHP, so if you can make those faster, WordPress will be faster. The same applies to speeding up Apache, maybe by putting Nginx in front of it, or maybe by tuning the hard-to-understand settings in httpd.conf to make sure it elegantly handles the 300 people who come running to your site at once.
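    For the curious, the hard-to-understand httpd.conf settings I mean are mostly the MPM knobs. Here’s a sketch for Apache 2.4’s event MPM; the numbers are placeholders to show the shape of it, not recommendations, and you’d tune them to your own RAM and traffic:

    # Apache 2.4 event MPM - illustrative values only
    <IfModule mpm_event_module>
        StartServers              2
        MinSpareThreads          25
        MaxSpareThreads          75
        ThreadsPerChild          25
        MaxRequestWorkers       150
        MaxConnectionsPerChild 1000
    </IfModule>

    MaxRequestWorkers is the big one: it caps how many simultaneous requests Apache will handle before visitors start queueing.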

    But unlike plugins, this aspect of server speed is the hard stuff. When you look at WP Super Cache, you think it’s complicated until you see W3 Total Cache. Both are amazing and awesome, but they’re giving you a ton of options and asking you to make decisions. The same is true of the servers, but now you have even more options. The world is your oyster, and we haven’t even looked at hardware.

    For me, it boils down to how I can make my server do its job fast and efficiently. If that means I hire someone to install software to cache better, or I pay for more memory, then that’s what I do. I also trim the fat and get rid of everything I’m not using and don’t need, so my server doesn’t have to do more than it needs to. And one day, one day, I’ll be looking at nginx for everything.

  • Yum!

    Yum!

    I’ve touched on it a couple of times, and it’s related to why I love things like Homebrew, but I like package installers. While I can, and have many times, installed and built things from source, it scares me and I don’t like it. When I talked about Homebrew, I mentioned in passing that I use yum to manage packages on my servers, and someone asked me “What’s that?”

    When I started this blog in 2009, the very first post was about understanding how to mess with my VPS and tweak it to work well with WordPress and everything else. In the five years since, I’ve learned a great deal about servers, tweaks, how to break things, but also how to upgrade them smartly. I’ve had struggles with SVN and GIT, but I understand that managing versions and revisions is a sane way to handle upgrades. But at the same time, not compiling code means I have more free time to mess with the stuff I like.

    This brings us to Yum.

    Yum is a software package manager, which means it checks various RPM repositories, sees if any software you have has an update, and updates it. An RPM (short for RPM Package Manager, yes, it’s a recursive acronym, just like PHP) is a packaged version of code… I have a feeling someone’s looking at me like I spoke a new language.

    Let’s step back further. Your server is a computer and runs software. Most people have the experience of installing software via packaged installers (I used to make them for a living). When you download an app onto your phone, the phone downloads the package and runs the installer. This is similar to WordPress, right? You download the zip, unpack it, and run an installer. But your server, well, for a very long time people didn’t have installer packages, they had source code. The source code was downloaded, unzipped (hush, you know what I mean), compiled, and then installed.
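    For anyone who’s never had the pleasure, the old way looked roughly like this. The package name and URL are made up, and every project’s steps vary a little, but the shape is always the same:

    # Download, unpack, configure, compile, install (names are illustrative)
    wget https://example.org/somepackage-1.2.3.tar.gz
    tar -xzf somepackage-1.2.3.tar.gz
    cd somepackage-1.2.3
    ./configure
    make
    make install   # usually as root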

    'Are you stealing those LCDs?' 'Yeah, but I'm doing it while my code compiles.'
    Credit: xkcd comic “Compiling”

    Yes, compiling code takes a long time. That joke is less funny than it is accurate to some of us. It’s not something most of us do any more, though, because code now tends to come pre-compiled, and that’s where package managers come into play. You see, someone realized that they could pre-compile code for all servers. It’s not the same code for all the servers, though, because there are so many flavors of servers, it’s mind boggling.

    But that said, if you know your flavor of server, you can use a repository that matches it to install software. The repository is like a massive server that has all the available packages for your flavor of server. When you add in yum, which fetches and installs those packages, you can then enter a world of automation where every night your server checks for new packages and installs them!

    Yum has a meaning: “Yellowdog Updater, Modified.” I didn’t say it was a good meaning. Yum has a bunch of obvious commands like yum install NAME and yum update, which you can use to install extra add-ons like Memcached and so on. There are also yum utilities (yum-utils), which let you further customize automation by scripting commands.
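    The day-to-day commands really are as plain as they sound. A few examples, with memcached standing in for whatever you actually want to install:

    # Install something new
    yum install memcached

    # See what updates are available without applying them
    yum check-update

    # Apply every available update, answering yes to the prompts
    yum -y update

    # Clear out the cached package data
    yum clean all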

    Just to touch on one at random, today I got an email from my server saying that it had run /usr/bin/yum -c /etc/yum.conf -y update for me. This is normal, I configured it to do that at midnight every day.

    There are unfinished transactions remaining. You might consider running yum-complete-transaction first to finish them.

    Now it did run the rest of the installs, so I did what any smart person does. I went and looked up this new command, only to find a string of bug reports from 2007-2009. It’s 2014, so I went and ran it once. It cleaned up one package and said I had another 244 left. Interesting. I ran it again. 243. I saw my day flash before my eyes and then decided to run the safer version:

    yum-complete-transaction --cleanup-only
    

    Safer and faster. Everything was cleaned, and update runs great now.

    Is there a risk with this automation? Of course! Which is why I take a backup every night, right before the update happens. I’m not crazy after all.