Half-Elf on Tech

Thoughts From a Professional Lesbian

Author: Ipstenu (Mika Epstein)

  • Dogfooding DreamPress

    While I work for DreamHost, and I’ve used DreamPress since day one, I haven’t had the opportunity to put a serious, stress-test site up on it. The reason for that is simple. The two sites I have that I might do that for are ineligible for pretty much all managed WordPress hosts … because they’re not 100% WordPress.

    But. There is one domain that is, and I’ve moved it over. And this gives me an opportunity to determine what needs to be done to this site to make it fly on a different host.

    All Hosts Require Site Tuning

    This is possibly an unpopular opinion, but no matter which host you pick, there will never be a ‘drop it in and forget it’ option for any web host. The reason is that no two sites are the same. No two sites use the same code, have the same data, or have the same traffic patterns. And all those things come together to determine how well your site will run.

    No matter who your web host is, you will have to adjust. It may be something as simple as editing an .htaccess file or removing a plugin you no longer need. Maybe it’s as complex as PHP settings or having someone write you custom code.

    I don’t mean to say that no two hosts are equal, or that one is better than another. What I mean is that no two situations are identical. Of course a site will work, out of the box, better on one host than another. But when we have the capacity to understand what it is that makes our site work, what it does and how it does it, we can make any site work on nearly any host.

    For DreamPress, I Made Four Changes

    After I moved the site over to a test setup, I quickly debugged the ‘big’ issues. I had a white screen of death. Thankfully I knew right away what it was, because I was clearly aware of what went into my site and how it worked.

    First, a lot of sanity checks I’d put into my .htaccess didn’t work. Primarily it was a case of how I’d handled redirects, since I was using an Apache 2.4 format that didn’t work universally.

    Second, I wanted to use a phar file, so I had to add custom lines to my PHP settings to let it know how to run properly.

    Third, I replaced the old caching system with the new one. DreamPress has Memcached and Varnish, so I removed some of the object caching I didn’t need anymore.

    Fourth, I wrote some customized code to teach my site how to talk to Varnish, since I do weird things.

    That’s it. Four things. And I was able to debug this in about an hour because I took stock of the moving parts of my website. Not everyone can do this, I understand that, but when you build out a website, even if you can’t code, it’s incumbent on you to document what you did.

    When you add a plugin or a theme, you need to make a note somewhere of what it is and why you did it. I recommend someplace not on WordPress, since if your site is down, it won’t do you any good at all.

    The Immediate Impact

    With those four changes, the result was immediately obvious.

    A change in page timings – basically everything got a whole lot faster.

    In the chart above, there are two major drops in load time (in this case, a drop is good). At the end of July, I reoptimized how images were processed by using Photon via Jetpack, I applied caching for FacetWP, and I removed all Google Ads. In doing so, I halved my time to load. This is a great thing.

    But then, just by moving to DreamPress and making no changes to code, I halved it again.

    The average site load is now .8 seconds. It ‘spikes’ to 1.2 seconds if you hit a non-cached page. The time to first byte, which is shown on the graph as the yellow/orange line, went from 1.6 seconds to .1 seconds.

    Was Everything Perfect?

    No. But as it turned out, everything that wasn’t perfect had nothing to do with my new hosting setup. I reproduced the site locally and found I had some seriously slow database queries on a couple pages, which resulted in poor load times. Once I fixed that, a few of my archive pages began to fly.

    Also, as I mentioned in the four things I changed, I did have to write new code. The bonus of the new code is that I was also able to backport some of it into the Varnish plugin itself, which means fewer people will have to write that code. You’re welcome. But that really was more for convenience, as the site ran fine; it was just a bit overly aggressive in caching.

    Is DreamPress Right For Everyone?

    No host is right for everyone. Period. End of conversation.

    Is it right for you? Maybe. It certainly can be, and not just for generic ‘blog’ type sites. But you have to know what your site does, who uses it, and what they do to really know the right questions to be asking.

    The question isn’t “Is this good for me?” but “What does this do to a site that does what I do?”

  • Flushing Another Post’s Varnish

    The other day I talked about updating another post’s meta. That’s all well and good, but what happens when you’ve cached that other page?

    Most cache tools work via the simplistic understanding that if you edit a page, you want to empty the cache for that page. The problem with my clever post-meta updating is that it doesn’t actually update the post. Oh, sure, it updates the data, but it doesn’t perform an update in a way that most plugins understand.

    Thankfully this can be done programmatically in a few different ways, depending on what plugin is used.

    1. Tell Varnish to empty the page’s cache any time post meta is updated
    2. Add a force-empty command to the function that updates the meta

    Using Varnish HTTP Purge

    Since I’m running on DreamPress, my plugin of choice is Varnish HTTP Purge. This is a very simple plugin that has no real interface by design. Mike Schroder and I picked it for use because of its beautiful simplicity, and when it was abandoned, we adopted it. I’m rather well acquainted with it, and at the behest of others, I wrote in code where people could add extra URLs to flush as well as extra events to trigger. There’s even a wiki doc about it.

    However. That’s not what I did here because I wanted to limit when it runs as much as possible.

    When To Empty Cache

    There’s a part of caching that is always difficult to manage. When should a page be emptied and when should a whole section of pages be emptied? The obvious answer is that we should retain a cache as long as possible, emptying select pages only when necessary.

    With WordPress posts and pages it’s pretty easy to know “Update the post, empty the cache.” But when you start talking about things like the JSON API, it’s a lot harder. Most plugins handle the JSON API for you, but if you’ve built your own API (like say /wp-json/lwtv/) and built out a lot of custom APIs (like say stats or an Alexa skill) you will want to flush that whole darn thing every single time.

    The Code

    Okay so how do we do that easily?

    In the function I showed you the other day, I added this right before the re-hook at the end:

    if ( class_exists( 'VarnishPurger' ) ) {
    	$varnish_purge = new VarnishPurger();
    	$varnish_purge->purgeUrl( get_permalink( $post_id ) );
    }
    

    That makes sure the Varnish plugin is running and calls a URL to flush. Done. If you want to have it do more URLs, you can do this:

    if ( class_exists( 'VarnishPurger' ) ) {

    	$varnish_purge = new VarnishPurger();

    	$purgeurls = array(
    		get_permalink( $post_id ) . '?vhp-regex',
    		get_site_url() . '/page1/',
    		get_site_url() . '/custom_page/',
    		get_site_url() . '/wp-json/lwtv/?vhp-regex',
    	);

    	foreach ( $purgeurls as $url ) {
    		$varnish_purge->purgeUrl( $url );
    	}
    }
    

    And then you can add in as many as you want.
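
    And if you’d rather go with option 1 from the list above (purge any time the post meta is updated), a minimal sketch of that would hook WordPress’s updated_post_meta action instead. The function name and the meta key here are just examples, so swap in your own:

    add_action( 'updated_post_meta', 'my_purge_on_meta_update', 10, 4 );

    function my_purge_on_meta_update( $meta_id, $post_id, $meta_key, $meta_value ) {
    	// Only purge for the meta keys we actually care about.
    	if ( 'lezchars_show_group' !== $meta_key ) {
    		return;
    	}

    	// Same check as before: make sure the Varnish plugin is running.
    	if ( class_exists( 'VarnishPurger' ) ) {
    		$varnish_purge = new VarnishPurger();
    		$varnish_purge->purgeUrl( get_permalink( $post_id ) );
    	}
    }

    The catch, and the reason I didn’t do it this way, is that updated_post_meta fires for every meta update on every post, so that early bail on the meta key matters.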

  • Updating Another Post’s Meta

    Over on LezWatch TV we have two post types, shows and characters. In an added wrinkle, there’s data that exists on the show pages that is updated by the character pages.

    That means when I save a character page, I need it to trigger a series of updates to post meta of whatever shows the character belongs to. But because of the complexities of the data being saved, I need to run it when the show saves too.

    The Code

    add_action( 'do_update_show_meta', 'lezwatch_update_show_meta', 10, 1 );
    add_action( 'save_post_post_type_shows', 'lezwatch_update_show_meta', 10, 3 );
    add_action( 'save_post_post_type_characters', 'lezwatch_update_show_meta_from_chars', 10, 3 );

    function lezwatch_update_show_meta( $post_id ) {

    	// unhook this function so it doesn't loop infinitely
    	remove_action( 'save_post_post_type_shows', 'lezwatch_update_show_meta' );

    	// Update all the post meta as needed here

    	// re-hook this function
    	add_action( 'save_post_post_type_shows', 'lezwatch_update_show_meta' );

    }

    function lezwatch_update_show_meta_from_chars( $post_id ) {
    	$character_show_IDs = get_post_meta( $post_id, 'lezchars_show_group', true );
    	if ( $character_show_IDs !== '' ) {
    		foreach ( $character_show_IDs as $each_show ) {
    			do_action( 'do_update_show_meta', $each_show['show'] );
    		}
    	}
    }
    

    This hooks three actions: saving shows, saving characters, and a custom one that the character save calls. It needed to be split like that because some characters have multiple shows.
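
    For context, the loop above assumes the lezchars_show_group meta is an array of arrays, where each entry holds the post ID of one show the character belongs to. Something shaped like this, where the IDs are made up and your keys may vary:

    // Illustrative only: the shape lezwatch_update_show_meta_from_chars() expects.
    $character_show_IDs = array(
    	array( 'show' => 123 ),
    	array( 'show' => 456 ),
    );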

  • I Hate Migrating Sites, And That’s Okay

    Friday I tweeted about coming up for air after migrating a 1 gig website. And I had hated doing it.

    Apparently that was a signal for men with whom I’m acquainted but am not friends to drop unsolicited advice on me, most of which I’ve already written about on this blog.

    I found this incredibly condescending from most people. When someone says “I hate X…” and your reply is “Have you tried…” then you aren’t listening to them. You’re addressing them from a place of superiority and arrogance. “I know better than you. Let me tell you what to do.” Or worse, you’re listening for keywords and then pitching your wares.

    Let me put this clearly:

    1. I did not ask for help (I didn’t need it)
    2. None of the men asked if I needed help (they assumed I did)
    3. None of them were friends.

    That last one is important. If my good friends, the people I play CAH with or share drinks with, offer unsolicited advice, they know me well enough to do so intelligently. They do it respectfully and sometimes sarcastically. But they are friends. When non-friends, Internet people with whom I’ve exchanged words or reviewed code, do it, it’s not at all the same.

    Here’s a selection of how my weekend went:

    “But I love it!”

    That one wasn’t advice, and as it happened, he didn’t like moving sites, he liked writing the code to do it. Me too! But watching a site move is as fun as watching paint dry. And testing everything to make sure the code works on the new server is similarly dull. And yet you have to do it.

    “That’s not a large site.”

    Didn’t say it was. Again, not advice, and this was from someone I know fairly well, so I was more inclined to chat about it. Turns out he has to move a 5 gig site multiple times a year. Which … something’s wrong there, first of all. But also his users need to sit down and talk about image sizes, because daaaaayyyyyyymmmmmnnn.

    “You should use wp-cli.”

    Funny thing, I did. I love using wp-cli for updating the database, and since I happened to be unraveling a multisite, it was perfect. But you know… I’ve written tutorials on it, added documentation, written extensions, and talked about that tool multiple times. Including on this site. Know your audience, folks.

    “You should use zip [instead of rsync].”

    When you’re looking at large files, like gigs, sometimes zip is stupid and won’t unpack. PHP has limitations of 2G, you see. I did zip up the plugins and themes and then the uploads, but I had to move them from server A to server B. And I did that with rsync. My other option was to download and then re-upload. Maybe if you’d suggested SCP you would have been helpful. Rsync made sure I didn’t re-copy anything.

    “You should use [my service].”

    No. Absolutely not. No. JM Dodd is the only human I would trust with that kind of a migration. Why? Because it was WordPress Multisite. None of your tools, not even VaultPress, is capable of handling that well. Plus, the added wrinkle was moving one multisite into its three separate sites.

    Also … I am very very very skeptical of using anyone’s tools. I review their code and their websites and their communication skills. And honestly, I’m not impressed by that company. They just don’t give me the feel-goods I’d want when going into business. To be fair, I’m not sure how I feel about VaultPress either, but they’re my experiment.

    To be fair, one of the services apologized after.

    “Why not move things manually?”

    I … did? This one takes the cake because when I pointed out that I did know that stuff, and perhaps one should think about to whom they are offering unsolicited advice, I got told that I should use a specific host and service. After I blocked him, he subtweeted about women/lesbians and their egos. Not a great way to win your case, buddy.

    “You should use host X.”

    Stop. I didn’t ask for hosting advice, nor was it mentioned. People move sites on the same host sometimes, you know. And once I had the site up on site.dream.press, moving it live was ten seconds of work.

    I Never Asked For Help

    This is the big deal. The word ‘help’ never came out of my mouth. I didn’t even break DNS or forget TTL this time. I didn’t need help. All this was, was me saying I hate moving sites when they’re a gig (or more) of data.

    Not a single man asked me “Do you need help?”

    They all assumed I did.

    That, friends, is why I called it Mansplaining, and blocked over ten men on Twitter.

    Comments on this post are disabled. Don’t reply, just think about it.

  • Remotely Hosting SVGs

    I prefer to use SVGs whenever possible instead of PNGs or font icons. They can be resized, they can be colored, and they’re relatively small. But I also have a set of 700 icons I use, and I don’t want to have to copy them to every single site. Instead, I’d like to have them all in a central repository and call them remotely.

    To The Cloud

    You can use S3 or really anything for this, but I chose DreamObjects since it’s open source and I work for DreamHost.

    For this to work, you create a bucket in DreamObjects with whatever name you want. I tend to name them after the website I’m working on, but in this case I’m making what is essentially a media library, so a better bucket name would be the service. Keep in mind, while you can make a DNS alias like media.mydomain.com for this, you cannot use that with https at this time. Sucks. I know.

    On DreamObjects, the accessible URL will be BUCKET.objects-us-east-1.dream.io — hang on to that. We’ll need it in a bit.

    Upload Your Files

    Since I plan to use this as a collective for ‘media’ related to a network of sites, I named the bucket for the network and then made subfolders:

    • backups
    • site1
    • site2
    • svg
    • svgcolor

    Backups is for everything you’re thinking. Site1 and Site2 are special folders for files unique to those domains only. Think large videos or a podcast. Since they’re not updated a lot, and I want them accessible without draining my server resources, the cloud is perfect. I separated my SVG files into two reasonable folders, because I plan to generate a list of these later.

    Calling An Individual File

    This is the much easier part. In general, PHP can use file_get_contents() for this. But we’re on WordPress, and there is a more reliable way to go about this.

    $svg = wp_remote_get( 'https://BUCKET.objects-us-east-1.dream.io/svg/ICON.svg' );
    if ( ! is_wp_error( $svg ) && 200 === wp_remote_retrieve_response_code( $svg ) ) {
    	$icon = wp_remote_retrieve_body( $svg );
    }
    

    By using wp_remote_get it’s possible to check if the file exists before displaying the content of the body. Which is, indeed, how one displays the content.

    The reason, by the way, we want to use wp_remote_get and not file_get_contents is that there’s no easy way to check if the file exists with the latter. You could still use file_get_contents to display the file, but once you’ve done the remote get, you may as well use it.

    Getting a List Of All SVGs

    I’d done this before with local images, getting a list of all the SVGs. You can’t do a foreach of a remote folder like that, so this is a little messier. You’ll need the Amazon AWS SDK for PHP for this. I’ve tested this on the latest of the 2 branch – 2.8.31 – and the 3 branch – 3.32.3 – and while the code is different, both work.

    In both cases, I downloaded the .phar file and included it in a function that I used to save a file with a list of the image names in them. By doing that, I can have other functions call the file and not have to query DreamObjects every time I want to get a list.

    function get_icons() {
    	include_once( dirname( __FILE__ ) . '/aws.phar' );

    	$svgicons = '';

    	// AWS SDK Code Goes Here

    	$upload_dir = wp_upload_dir();
    	$this_file  = $upload_dir['basedir'] . '/svgicons.txt';
    	$open_file  = fopen( $this_file, 'w' );
    	fputs( $open_file, $svgicons );
    	fclose( $open_file );
    }
    

    I left out the SDK code. Let’s do that now.

    SDK V2

    The V2 is actually what’s still officially supported by Ceph, but the PHAR file is larger.

    	// Version 2
    	$client = Aws\S3\S3Client::factory( array(
    		'base_url' => 'https://objects-us-east-1.dream.io',
    		'key'      => AWS_ACCESS_KEY_ID,
    		'secret'   => AWS_SECRET_ACCESS_KEY,
    		'region'   => 'us-east-1',
    	) );

    	$files = $client->getIterator( 'ListObjects', array(
    		'Bucket' => 'BUCKET',
    		'Prefix' => 'svg/',
    	) );

    	foreach ( $files as $file ) {
    		if ( strpos( $file['Key'], '.svg' ) !== false ) {
    			$name      = strtolower( substr( $file['Key'], 0, strrpos( $file['Key'], '.' ) ) );
    			$name      = str_replace( 'svg/', '', $name );
    			$svgicons .= "$name\r\n";
    		}
    	}
    

    SDK V3

    I would recommend using V3 if possible. Since you’re only using it to get data and not write, it’s fine to use, even if it’s not supported.

    	// Version 3
    	$s3 = new Aws\S3\S3Client( [
    		'version'     => 'latest',
    		'region'      => 'us-east-1',
    		'endpoint'    => 'https://objects-us-east-1.dream.io',
    		'credentials' => [
    			'key'    => AWS_ACCESS_KEY_ID,
    			'secret' => AWS_SECRET_ACCESS_KEY,
    		],
    	] );

    	$files = $s3->getPaginator( 'ListObjects', [
    		'Bucket' => 'BUCKET',
    		'Prefix' => 'svg/',
    	] );

    	foreach ( $files as $file ) {
    		foreach ( $file['Contents'] as $item ) {
    			if ( strpos( $item['Key'], '.svg' ) !== false ) {
    				$name      = strtolower( substr( $item['Key'], 0, strrpos( $item['Key'], '.' ) ) );
    				$name      = str_replace( 'svg/', '', $name );
    				$svgicons .= "$name\r\n";
    			}
    		}
    	}
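
    However you build the list, once get_icons() has written svgicons.txt, any other function can read it back without querying DreamObjects. Here’s a minimal sketch of that, with get_icon_list() being my hypothetical name for it:

    function get_icon_list() {
    	$upload_dir = wp_upload_dir();
    	$this_file  = $upload_dir['basedir'] . '/svgicons.txt';

    	// If get_icons() hasn't run yet, there's nothing to read.
    	if ( ! file_exists( $this_file ) ) {
    		return array();
    	}

    	// One icon name per line, so strip blanks and whitespace.
    	return array_filter( array_map( 'trim', file( $this_file ) ) );
    }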
    

    Side Note…

    If you’re on DreamHost, you’ll want to add this to your phprc file:

    extension=phar.so
    detect_unicode = Off
    phar.readonly = Off
    phar.require_hash = Off
    suhosin.executor.include.whitelist = phar
    

    That way it knows to run phar files properly. And by the way, that’s part of why phar files aren’t allowed in your WordPress plugins.

    Which is Better?

    That’s a difficult question. When it comes to loading them on the front end, it makes little difference. And having them on a CDN or a cloud service is generally better. It also means I don’t have to push the same 400 or so files to multiple servers. On the other hand, calling files from yet another domain gets you a down check on most site speed checks… But the joke’s on them. The image is loaded in PHP.

    Of course the massive downside is that if the cloud is down, my site can be kinda jacked. But that’s why using a check on the response code is your lifeline.
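
    To that end, here’s a minimal sketch of what that lifeline might look like, building on the earlier wp_remote_get example. The local default.svg fallback is my own invention, so adjust to taste:

    $svg  = wp_remote_get( 'https://BUCKET.objects-us-east-1.dream.io/svg/ICON.svg' );
    $icon = '';

    if ( ! is_wp_error( $svg ) && 200 === wp_remote_retrieve_response_code( $svg ) ) {
    	$icon = wp_remote_retrieve_body( $svg );
    } else {
    	// The cloud is down (or the file is gone), so fall back to a bundled local copy.
    	$icon = file_get_contents( dirname( __FILE__ ) . '/default.svg' );
    }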

  • Deploying from GitHub with Codeship

    For the longest time I self-hosted all my git repositories, not so much because I enjoyed doing so but because there were limited options for easily pushing code from git to my servers. Invariably you will end up using an intermediary, because GitHub and their peers have no real reason nor inclination to make those things easy for you. After all, if they can keep you in their systems, more money to ’em.

    And while that’s perfectly understandable and logical, it’s annoying. And if you’re on a budget and not a Git Expert, it’s extremely frustrating. The deployment API was written in High Level Geek, and I found it a headache to decipher and test.

    Thankfully there are tools for that, and one of them is Codeship. Codeship is free for 100 builds a month, and unlike DeployHQ, it uses rsync, which meshes with my preferred way of moving data.

    The Plan

    My plan is simple. I want to push code to a Github repository and, when it’s a push to master, have it rsync the files over to my server. This will allow multiple people to work on code, among other things.

    The Setup

    First make an account with Codeship. You can log in with Github, which is useful since you’ll want to connect your Github account with Codeship anyway. When you create a project, you’ll be asked which SCM you want to use:

    Create a Project in Codeship

    I’m using Github, but it’s nice to see GitLab in there as well. Once you pick your SCM, paste in the clone URL.

    Examples:

    • git@github.com:<username>/<repository_name>.git
    • https://github.com/<username>/<repository_name>.git
    • https://github.com/codeship/<repository_name>

    Finally you can pick Codeship Pro or Basic – I picked Basic because it required the least amount of know-how. Not that it’s easy, but I don’t have to configure the server or anything annoying. In fact, Basic is so basic that you can just accept the default setup commands and go. Which is what I did.

    The Settings

    Once you’ve done all that, you’re on a new screen that tells you to push code. Of course, you can’t do that until you set up a couple more things. Like tell it where to push the code.

    Click on “Project Settings” and go to the General tab. You’ll need to get that SSH key and add it to your server’s ~/.ssh/authorized_keys to allow passwordless deployment. I strongly recommend that.

    Now you can click Deploy and add a Deployment Pipeline. Pick ‘master’ unless you’re using something else.

    Now you have to add a deployment to your pipeline. There are a lot of options here, but for my plan of an rsync, the choice is “Custom Script”. Since I’m pushing code to my DreamPress server, the rsync looks like this:

    # DreamPress
    rsync -aCz -e "ssh" ~/clone/ wp_e413xh@mysite.dream.press:/home/wp_e413xh/mysite.dream.press/wp-content/themes/my-theme-name/ --delete
    

    You can customize your rsync commands however you want. I like mine to delete files I’ve removed.

    Deploy the Ships

    Everything is set up, so go back to your project page and push some code to Github.

    Magic happens.

    The code deploys.

    Huzzah.

    On the free version you only get 100 private builds and five private projects. Mine are public (so’s the Github repo for that matter) so it doesn’t matter. The only downside is only getting one push at a time, but since they take less than five minutes, it’s perfectly acceptable for a free solution.