Half-Elf on Tech

Thoughts From a Professional Lesbian

Tag: speed

  • Cleaning up SVGs

    If you were here on Friday, I talked about the headache of using SVGs on AMP pages. In that post I mentioned that one of the fixes was to ‘clean’ the SVG file.

    Why would we bother? The simple answer is that smaller files are better. But as we learned with AMP being picky, it’s also because we want our files to be fully compatible with all browsers and servers. SVG standards are important, after all.

    So let’s clean!

    Remove Headers

    If your SVG starts with this, just remove it:

    <!-- Generator: Adobe Illustrator 17.1.0, SVG Export Plug-In . SVG Version: 6.00 Build 0)  -->
    <!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN">
    

    Remove Less Standard Tags

    I wouldn’t have realized these attributes weren’t standard if it hadn’t been for the AMP alerts, but they aren’t:

    xmlns:x="&ns_extend;" xmlns:i="&ns_ai;" xmlns:graph="&ns_graphs;"
    

    Again, remove them. You also want to remove tags like <g i:extraneous="self">, and that’s tricky because you’ll usually see them like this:

    <g i:extraneous="self">
        <g>
            [Image stuff]
        </g>
    </g>
    

    If you use a regular expression like I did, be very cautious. I checked every single one of my 700 SVGs after the edits to make sure they still worked. There were cases where the only <g> tag was the one carrying the extraneous attribute, so I had to replace very cautiously. Even so, I missed about 40.
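
    If you’d rather script that replace than hand-edit, here’s the cautious version as a PHP sketch (the icons/ directory is hypothetical; run it on copies first):

    <?php
    // Swap the opening tag but keep a plain <g>, so every closing </g>
    // still has a partner and nested groups survive.
    foreach ( glob( 'icons/*.svg' ) as $file ) {
        $svg = file_get_contents( $file );
        $svg = str_replace( '<g i:extraneous="self">', '<g>', $svg );
        file_put_contents( $file, $svg );
    }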

    Remove Illustrator’s Extra Shit

    Here’s where you’re going to save the most space. First remove the requiredExtensions stuff:

    	<foreignObject requiredExtensions="&ns_ai;" x="0" y="0" width="1" height="1">
    		<i:pgfRef  xlink:href="#adobe_illustrator_pgf">
    		</i:pgfRef>
    	</foreignObject>
    

    And then remove the big hunk of PGF:

    <i:pgf  id="adobe_illustrator_pgf">(.*)</i:pgf>
    

    I can’t show you the actual content. It’s huge. It’s way huge. It’s up to 20k huge.
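
    Scripted, that removal is one line dropped into the same loop as the sketch above. The s flag lets the match span lines, and the non-greedy .*? stops at the first closing tag:

    // Strip the embedded Illustrator PGF data; browsers never use it.
    $svg = preg_replace( '#<i:pgf\s+id="adobe_illustrator_pgf">.*?</i:pgf>#s', '', $svg );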

    Remove Switch

    If you’re not actually using the switch (and once the foreignObject fallback is gone, you aren’t), you want to remove the wrapper and keep what’s inside it:

    <switch>
    	<g i:extraneous="self">
    		<rect fill="#231F20" width="28" height="28"/>
    	</g>
    </switch>
    

    End Result?

    A 29 KB file is now 4 KB. And that is a faster internet.

  • The Need for Mobile Speed

    I took the train from NYC to Montreal, which I will never do again. It was too long, too uncomfortable, and customs actually made the TSA preferable. But while you ponder that in your back brain, I want you to consider this as well: the internet on the train sucks.

    For the first time in years I was back on pre-smartphone speeds. And the problem with that is I was in a world that expected 3G or faster speeds. Here’s what would not load:

    • Twitter
    • Facebook
    • Tumblr
    • Most news webpages
    • Anything with video

    Here’s what I could do:

    • Text

    That was a pretty shitty smartphone experience. As I sat on the train, I wondered why it was so shitty. Didn’t we build everything to be mobile first? Wasn’t the point of the responsive systems to make it faster? Turns out we didn’t, and it wasn’t.

    One of the things we do well in the modern web is device detection. If I’m on a mobile device, everything’s cool and perfect and my sites will load for that device. There are PHP libraries like Mobile Detect and Detect Mobile Browsers, but what they’re really doing is device checks, not connection checks. Knowing what kind of device someone’s on lets us customize a web experience to that device, and that’s all we tend to do. We put in the hours to check “Is this a mobile device?” but never “Is this a slow connection?”, which is where the effort should go.
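
    To make that concrete, here’s roughly what those libraries give you, sketched from Mobile Detect’s documented calls. Notice that nothing in it says a word about connection speed:

    <?php
    // Device detection tells us *what* is browsing, never *how fast*
    // its connection is.
    require_once 'Mobile_Detect.php';

    $detect = new Mobile_Detect;

    if ( $detect->isMobile() && ! $detect->isTablet() ) {
        // Serve the phone-sized layout... which still assumes 3G or better.
        echo 'phone layout';
    }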

    Of course, that’s really hard to do. Apps like SpeedTest and TestMy.net work alright, but when you’re traveling by rail, your speed is incredibly variable and confusing. One minute I’d have 4 bars, the next 1, and then I’d drop from LTE to 3G and worse. Oh, and don’t bother asking about the WiFi. It was a joke.

    Somewhat related, I travel a lot for work. I recently did a 12 day run to NYC and then Montreal, where I was in hotels most of the time. Hotel Wifi is a spotty thing. Either they charge you up to $40 a day for the privilege of their shiternet, or they give you free wifi that loads everything but images. Trying to work from hotels is a hit and miss proposition as well. I can connect, but as soon as I hop onto my VPN, everything drags.

    Then we have conferences. I’ve yet to go to a tech conf where we didn’t kill the Wifi, or nearly so. While that’s kind of our fault for leaving on our various automated updaters and DropBoxes and the like, there isn’t a ‘Conference Wifi’ mode on laptops to say “Hey, I’m on limited bandwidth, so don’t do the automated background things, please and thank you.” This is, by the way, why my presentations are always on my local box as well as online. I assume the wifi will die.

    In all cases, as soon as the internet quality drops to slow, our experience online crumbles. We simply haven’t built most tools to work in a one-bar world. And there isn’t a solution we can easily grasp. Even the big guys, who have servers built for stress and speed, are slow in these situations. Because we assume too much. We assume a minimum level of connectivity that isn’t always there.

    The race for faster wireless service is on, but we should step back and look at simplifying our sites. If we can make a low-speed version that is just as fully featured, we should.

  • Image Compression

    If you’ve ever tested your site on Google PageSpeed Insights, GTmetrix, Yahoo! YSlow, or any of those tools, you may have been told about the value found in compressing images.

    Google PageSpeed has some images for me to optimize

    There are some images you can fix; the first one in that screenshot is an image from Twitter that I downloaded. There are some you can’t fix; the last three are all telling me the default smilies for WordPress need some shrinking.

    Either way, the short and skinny of it is that if you make your images smaller, your webpage loads faster. I know, it’s shocking. Many long-term webheads know that you can compress images best on your own computer, using ‘Save For Web’ or command line tools for compression. That’s great, but that doesn’t work for everyone. Sometimes I’m blogging from my phone or my iPad and I don’t have access to my tools. What then?

    Before You Upload

    I know I just said to compress on your own computer, but what about when you can’t resize before you upload? There are things you can do for this. Photo Compress for iOS will let you make your images smaller. So can Simple Resize. There are a lot of similar apps for Android as well.

    For the desktop, I use Homebrew, so I installed ImageMagick (which is also on my server for WP to use) and toss in a command line call for it. Sometimes I’ll use grunt (yes, Grunt, the same thing I use for Bower and coding) and ImageOptim to compress things en masse.
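
    If you’re curious what that looks like, here’s a sketch using PHP’s Imagick extension (the same ImageMagick under the hood; the filename, size, and quality here are just examples):

    <?php
    // Shrink an oversized photo and strip metadata before serving it.
    $image = new Imagick( 'photo.jpg' );
    $image->resizeImage( 1200, 0, Imagick::FILTER_LANCZOS, 1 ); // 1200px wide; height scales.
    $image->setImageCompressionQuality( 82 );                   // Pick your own trade-off.
    $image->stripImage();                                       // Drop EXIF and other metadata.
    $image->writeImage( 'photo-compressed.jpg' );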

    Of course, if I only have one or two images, I just use Preview which does the same thing more or less. Photoshop, which is still stupid expensive, also lets you do this, but for the layman, I suggest Preview for the Mac. I haven’t the foggiest what you can use on Windows.

    If you’re not uploading images via WordPress (and very often I’m not), you pretty much have to do it old-school.

    While You Upload

    Okay great, but what about WordPress?

    The best way about it is to have WordPress magically compress images while you upload them. And actually, it does this out of the box. My cohort in crime at DreamHost, Mike Schroder, was part of the brain trust behind making ImageMagick a part of core WordPress. This was a massive undertaking, but it allowed WordPress to compress images better, faster, and more reliably than GD. I hesitate to call it ‘safer,’ but the image loss (that weird fuzzing you get sometimes) is less.
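
    If you want WordPress to squeeze harder (or gentler) on upload, there’s a filter for that. A tiny sketch, with a made-up value; lower numbers mean smaller files and more fuzz:

    <?php
    // Change the JPEG quality WordPress uses when it generates
    // resized copies of your uploads.
    add_filter( 'jpeg_quality', function () {
        return 75; // Hypothetical; pick your own trade-off.
    } );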

    If you want to make it even better, you need to use plugins. I used to tell people to use Smush.it, which was a Yahoo!-run service. But then they deleted it and we were all very sad. WPMU Dev, who owns a SmushIt plugin, installed smushing on their own servers, and you can now use WP Smush. I’m a bit of a fan of TinyPNG, which gives you 500 free compressions a month, and that’s enough for me. I like TinyPNG because there are no options. I install it, it runs when I upload images, and done. That doesn’t mean I don’t see value in things like ShortPixel, just that it’s not really what I want. Finally there’s Kraken.io, which I love just for the name, but it makes people balk because it’s not free.

    If you don’t want to use an external service, and I totally get why you wouldn’t, there’s EWWW Image Optimizer.

    I personally use either EWWW or TinyPNG, depending on the site.

    Can a CDN Help?

    Maybe. I know a lot of people love offloading their images to CDNs. I currently don’t, for no reason other than I’m a bit lazy and I hate trusting someone else to host my images. But that said, you actually can (rather easily) use a CDN if you have Jetpack installed. Their service, Photon, does exactly that. Now, I don’t use Photon because my users in Turkey and China like to visit my site, and WordPress.com (which serves Photon) has a habit of being blocked there. But there’s nothing at all wrong with those services.

  • Take my SPDY, Please

    When I upgraded my server to Apache 2.4, I lamented that the choice had killed any ability I had to run SPDY. This led to me installing an Nginx proxy on my box and being pretty happy with it.

    I wanted to like the idea of SPDY on Apache, but there were serious issues with mod_spdy. First of all, it’s incompatible with Apache 2.4. That sucks. While someone had forked it, issue number two made me worry. You see, mod_spdy requires OpenSSL 1.0.1c and modifies mod_ssl. If those were Google’s own changes, I might be okay, but now we’re talking about trusting some random person out there. No. Finally, the dippy thing hadn’t been updated in years.

    Someone finally shamed Google enough, because earlier this year Google gave mod_spdy to Apache. The plan is for it to be a part of Apache 2.4, as well as the future 2.6/3.0 world:

    Being a part of Apache core will make SPDY support even more widely available for Apache httpd users, and pave the way for HTTP/2.0. It will also further improve that support over the original version of mod_spdy by better integrating SPDY and HTTP/2.0’s multiplexing features with the core part of the server.

    Finally!

    Except not. Sadly, there’s been very little activity since this summer. You can look at the code on Apache SVN, and mod_spdy hasn’t been touched in three months. It’s sad to see this linger. I had high hopes that Apache would jump and run, but they haven’t even made it work with Apache 2.4 yet.

    I’m not going to hold my breath for parity on this one just yet.

  • Why You Can’t (Always) Catch Cache

    Or rather, why you don’t want to cache.

    When speeding up a site, the first thing we always talk about is streamlining the site. Ditch the redundant plugins, clean up the code, get a lightweight theme, dump the stuff you don’t use, and upgrade. Once you’ve done that, however, we talk about caching, and that is problematic. Caching is tricky to get right, because you can’t cache everything.

    Actually you can, but you don’t want to cache everything, and you really shouldn’t.

    The basic concept of a cache is ‘take a picture of my website in this moment.’ We think of it as a static picture, versus the dynamic one we use for WordPress. That static picture is not so static, though. You still call CSS and JS, and you still call images. But the idea is to not call PHP or the database, and thus your site will be faster, as those are the slowest things.
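
    To make that concrete, here’s a minimal sketch of a file-based page cache. This is not any particular plugin’s implementation, and the paths are made up:

    <?php
    // Serve the saved 'picture' if we have a fresh one; otherwise build
    // the page, save it, and send it.
    $cache_file = '/var/cache/pages/' . md5( $_SERVER['REQUEST_URI'] ) . '.html';

    if ( file_exists( $cache_file ) && time() - filemtime( $cache_file ) < 3600 ) {
        readfile( $cache_file ); // No PHP templating, no database calls.
        exit;
    }

    ob_start();
    echo '<html><body>Hello!</body></html>'; // In real life, WordPress renders here.
    file_put_contents( $cache_file, ob_get_contents() );
    ob_end_flush();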

    Great, so why don’t I want you to cache everything?

    Missed Catch

    The obvious answer is latency. If you’re designing a site or making a lot of style changes, you need to disable caching, because otherwise you get a lag between making edits and seeing them displayed. But there’s also server latency.

    When we talk about using things like PageSpeed (I use mod_pagespeed on my server) to improve web page latency and bandwidth usage, we’re talking about actually rewriting the resources on that web page to follow best practices. This sounds great, but we have to remember that by asking the webserver to do the work before WordPress (such as having PageSpeed minify my CSS and JS), we’re still making the server do the work. Certainly it’s faster than having WordPress do it, but there will still be a delay in service.

    The next obvious answer is security. There’s some data you flat-out don’t want to cache, and it’s pretty much everything in wp-admin (on WordPress). Why? If you have multiple users, you don’t want them getting each other’s content. Some are admins, some aren’t, and I know I don’t need my guest editor seeing the post about how I’m firing her and why.

    Actually, we’ll extend this. Very rarely do I want my logged-in users to see a cache at all. They’re special; they’re communicating and making edits on the fly. Having them see cached content, and constantly refresh it, will be more draining on my server than just loading the content in the first place. Remember the extra load caused by PageSpeed (or even your plugins) generating the cache? That would be constantly in progress whenever a logged-in user made a change, so let’s skip it altogether.

    Tagging on to that, you also don’t want your admins and logged-in users to generate a cached page. This isn’t the same as seeing a cached page: I don’t want non-logged-in users to see the version of the site a logged-in one gets. A logged-in user on WordPress gets the toolbar, for example. I don’t want the non-logged-in ones to see it.
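
    Most WordPress caching plugins honor a convention for exactly this. A minimal sketch, dropped in a plugin or mu-plugin:

    <?php
    // Tell the caching layer not to cache pages rendered for logged-in
    // users, so their toolbar-laden pages never land in the cache.
    add_action( 'init', function () {
        if ( is_user_logged_in() && ! defined( 'DONOTCACHEPAGE' ) ) {
            define( 'DONOTCACHEPAGE', true );
        }
    } );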

    Finally, we have to round back to security. If I have SSL on my box and I’m using HTTPS to serve up a page, no way, no how do I want to cache anything related to users. In fact, I may not even try to cache JS/CSS either. The basic presumption of HTTPS is that I need security. And if I need security, I need to keep users out of each other’s pockets. The best example of this is a store. I don’t want users to see each other’s shopping carts, do I? And your store is going to be HTTPS for security, so this is just one more feature thereof.

    Of course, there are still things to cache. I set up PageSpeed on my HTTPS site so it will compress images, make my URLs root-relative, and compress and minify HTML/CSS/JS. But I don’t have a traditional cache with it at all. This does mean that as we start to look towards an HTTPS-only world (thank you, Google), we’re going to run into a lot of caching issues. All the quick ways we have to speed up our sites may go away, to be replaced by the next new thing.

    I wonder what that will be.

  • How Many Plugins Is Too Many?

    How Many Plugins Is Too Many?

    Have you ever played “Name That Tune”? They used to have this show where they’d play music and you had to name that tune. One of the mini-games in the show was called “Bid-A-Note,” where the host read a clue and the contestants would alternate bidding: “I can name that tune in X notes,” where X was a whole number, and the bids ended when one of the contestants challenged the other to “Name That Tune” (or when someone bid one or zero notes).

    Well. I can crash a WordPress site with one plugin.

    When people ask why their site is slow, sometimes my coworkers will say “It’s the plugins, right? He has 40 plugins!” and I’ll say “Maybe.” Then I look at what the plugins are, because it’s never the number of plugins, but their quality. Take a look at Jetpack, which is 33 plugins in one. Is that going to cause more or less overhead than if you had 33 separate plugins installed?

    WordPress is wonderful and beautiful because you can use plugins to do absolutely anything. At the same time, that beauty is its downfall, because you can use plugins to do anything. There are over 32,000 active plugins in the WordPress plugin repository. There are probably 4,000 or so that are delisted or disabled. There are around 3,000 more plugins on just one popular WordPress theme and plugin site. We haven’t even started listing themes.

    It’s a mathematical impossibility to test every possible plugin combination with every theme on every server on every host with every extra (like mod_pagespeed or CloudFlare) added on. It’s impractical to expect every combination to play nicely together, not because of any defectiveness in the code of the plugin or WordPress, but because of the reality that all of those things vary from place to place. We build out things to be flexible, after all.

    I love the flexibility. I think it’s awesome. But at the same time, I worry about it when people complain their site is slow. There’s very rarely one perfect answer. Not even “Oh, he was hacked” is the answer to why a site is slow, though it can be. The answer is invariably in the combinations and permutations of what someone has done with their site, what the visitors do with it, and how they interact. A site that posts once a week is different than one that posts four times a day. A site with no comments is different than one with 30 per post. And the more of those little differences you factor in, the harder it gets to determine how many plugins are too many.

    Maybe it’s your memory. One plugin may need more memory than another, but the combination of two may need more than either would individually! Sadly, it’s not something you’re going to know until you start studying your own site. There are cool tools like P3 Profiler which do a great job of giving you an idea as to what’s going on, but it’s not the whole picture. It can’t be. Just look at all the tools we list for performance testing and consider how many and varied the results are.
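
    If you want to start studying, here’s a crude sketch: drop it in an mu-plugin and watch your PHP error log as you flip plugins on and off. The logging is the only thing it does; the numbers are yours to interpret:

    <?php
    // Log the request's peak memory use at shutdown, so you can compare
    // plugin combinations on your own site.
    add_action( 'shutdown', function () {
        error_log( sprintf(
            'Peak memory: %.1f MB for %s',
            memory_get_peak_usage( true ) / 1048576,
            $_SERVER['REQUEST_URI']
        ) );
    } );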

    How many plugins are too many? However many it takes to kill your site.

    Oh, the one plugin I can run to crash a site? It was BuddyPress and I was using PHP CGI. Once I changed it to a different flavor of PHP, the issue went away.