Half-Elf on Tech

Thoughts From a Professional Lesbian

Author: Ipstenu (Mika Epstein)

  • Flushing Another Post’s Varnish

    Flushing Another Post’s Varnish

    The other day I talked about updating another post’s meta. That’s all well and good, but what happens when you’ve cached that other page?

    Most cache tools work on the simple assumption that if you edit a page, you want to empty the cache for that page. The problem with my clever post-meta updating is that it doesn’t actually update the post. Oh, sure, it updates the data, but it doesn’t perform an update in a way that most plugins understand.

    Thankfully this can be handled programmatically in a couple of ways, depending on which plugin is used.

    1. Tell Varnish to empty the page’s cache any time post meta is updated (sketched below)
    2. Add a force-empty command to the function that updates the meta
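
    As a rough sketch of option 1, you could hook WordPress’s updated_post_meta action and purge the affected post’s URL. This is my own illustration, not what I ended up doing. The function name is hypothetical, and it assumes a purging plugin that exposes a VarnishPurger class (more on that below):

    add_action( 'updated_post_meta', 'my_purge_on_meta_update', 10, 2 );

    function my_purge_on_meta_update( $meta_id, $post_id ) {
    	// Purge the permalink of whichever post had its meta changed.
    	if ( class_exists( 'VarnishPurger' ) ) {
    		$varnish_purge = new VarnishPurger();
    		$varnish_purge->purgeUrl( get_permalink( $post_id ) );
    	}
    }

    The catch: that fires on every meta update for every post, which is more purging than I want.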

    Using Varnish HTTP Purge

    Since I’m running on DreamPress, my plugin of choice is Varnish HTTP Purge. This is a very simple plugin that has no real interface by design. Mike Schroder and I picked it because of its beautiful simplicity, and when it was abandoned, we adopted it. I’m rather well acquainted with it, and at the behest of others, I added code that lets people specify extra URLs to flush as well as extra events to trigger on. There’s even a wiki doc about it.

    However. That’s not what I did here because I wanted to limit when it runs as much as possible.

    When To Empty Cache

    There’s a part of caching that is always difficult to manage. When should a page be emptied and when should a whole section of pages be emptied? The obvious answer is that we should retain a cache as long as possible, emptying select pages only when necessary.

    With WordPress posts and pages it’s pretty easy to know “Update the post, empty the cache.” But when you start talking about things like the JSON API, it’s a lot harder. Most plugins handle the JSON API for you, but if you’ve built your own API (like say /wp-json/lwtv/) and built out a lot of custom APIs (like say stats or an Alexa skill) you will want to flush that whole darn thing every single time.

    The Code

    Okay so how do we do that easily?

    In the function I showed you the other day, I added this right before the re-hook at the end:

    if ( class_exists( 'VarnishPurger' ) ) {
    	$varnish_purge = new VarnishPurger();
    	$varnish_purge->purgeUrl( get_permalink( $post_id ) );
    }
    

    That makes sure the Varnish plugin is running, then purges the one URL. Done. If you want it to flush more URLs, you can do this:

    if ( class_exists( 'VarnishPurger' ) ) {

    	$varnish_purge = new VarnishPurger();

    	$purgeurls = array(
    		get_permalink( $post_id ) . '?vhp-regex',
    		get_site_url() . '/page1/',
    		get_site_url() . '/custom_page/',
    		get_site_url() . '/wp-json/lwtv/?vhp-regex',
    	);

    	foreach ( $purgeurls as $url ) {
    		$varnish_purge->purgeUrl( $url );
    	}
    }
    

    And then you can add in as many as you want. The ?vhp-regex suffix tells the plugin to purge by regular expression, which is how the whole /wp-json/lwtv/ tree gets flushed in one go.

  • Updating Another Post’s Meta

    Updating Another Post’s Meta

    Over on LezWatch TV we have two post types, shows and characters. In an added wrinkle, there’s data that exists on the show pages that is updated by the character pages.

    That means when I save a character page, I need it to trigger a series of updates to post meta of whatever shows the character belongs to. But because of the complexities of the data being saved, I need to run it when the show saves too.

    The Code

    add_action( 'do_update_show_meta', 'lezwatch_update_show_meta', 10, 2 );
    add_action( 'save_post_post_type_shows', 'lezwatch_update_show_meta', 10, 3 );
    add_action( 'save_post_post_type_characters', 'lezwatch_update_show_meta_from_chars', 10, 3 );
    
    function lezwatch_update_show_meta( $post_id ) {

    	// unhook this function so it doesn't loop infinitely
    	remove_action( 'save_post_post_type_shows', 'lezwatch_update_show_meta' );

    	// Update all the post meta as needed here

    	// re-hook this function
    	add_action( 'save_post_post_type_shows', 'lezwatch_update_show_meta' );

    }
    
    function lezwatch_update_show_meta_from_chars( $post_id ) {
    	$character_show_IDs = get_post_meta( $post_id, 'lezchars_show_group', true );
    	if ( $character_show_IDs !== '' ) {
    		foreach ( $character_show_IDs as $each_show ) {
    			do_action( 'do_update_show_meta' , $each_show['show'] );
    		}
    	}
    }
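
    For reference, here’s a hypothetical shape for the lezchars_show_group meta, inferred from the loop above (real entries may carry more keys, and the IDs are made up):

    $character_show_IDs = array(
    	array( 'show' => 123 ), // post ID of the character's first show
    	array( 'show' => 456 ), // post ID of their second show
    );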
    

    This hooks three actions: saving shows, saving characters, and a custom one that the character save calls. It needed to be split like that because some characters have multiple shows, and each of those shows needs its meta updated.

  • I Hate Migrating Sites, And That’s Okay

    I Hate Migrating Sites, And That’s Okay

    Friday I tweeted about coming up for air after migrating a 1 gig website. And I hated doing it.

    Apparently that was a signal for men with whom I’m acquainted but am not friends to drop unsolicited advice on me, most of which I’ve already written about on this blog.

    I found this incredibly condescending from most people. When someone says “I hate X…” and your reply is “Have you tried…” then you aren’t listening to them. You’re addressing them from a place of superiority and arrogance. “I know better than you. Let me tell you what to do.” Or worse, you’re listening for keywords and then pitching your wares.

    Let me put this clearly:

    1. I did not ask for help (I didn’t need it)
    2. None of the men asked if I needed help (they assumed I did)
    3. None of them were friends.

    That last one is important. If my good friends, the people I play CAH or share drinks with, offer unsolicited advice, they know me well enough to do so intelligently. They do it respectfully and sometimes sarcastically. But they are friends. When non-friends, Internet people with whom I’ve exchanged words or reviewed code, do it, it’s not at all the same.

    Here’s a selection of how my weekend went:

    “But I love it!”

    That one wasn’t advice, and as it happened, he didn’t like moving sites, he liked writing the code to do it. Me too! But watching a site move is as fun as watching paint dry. And testing everything to make sure the code works on the new server is similarly dull. And yet you have to do it.

    “That’s not a large site.”

    Didn’t say it was. Again, not advice, and this was from someone I know fairly well, so I was more inclined to chat about it. Turns out he has to move a 5 gig site multiple times a year. Which … something’s wrong there, first of all. But also his users need to sit down and talk about image sizes, because daaaaayyyyyyymmmmmnnn.

    “You should use wp-cli.”

    Funny thing, I did. I love using wp-cli for updating the database, and since I happened to be unraveling a multisite, it was perfect. But you know… I’ve written tutorials on it, added documentation, written extensions, and talked about that tool multiple times. Including on this site. Know your audience, folks.

    “You should use zip [instead of rsync].”

    When you’re looking at large files, like gigs, sometimes zip is stupid and won’t unpack; PHP has a 2 gig limit, you see. I did zip up the plugins and themes and then the uploads, but I had to move them from server A to server B. And I did that with rsync. My other option was to download and then re-upload. Maybe if you’d suggested SCP you would have been helpful. Rsync made sure I didn’t re-copy anything.

    “You should use [my service].”

    No. Absolutely not. No. JM Dodd is the only human I would trust with that kind of a migration. Why? Because it was WordPress Multisite. None of your tools, not even VaultPress, is capable of handling that well. Plus, the added wrinkle was moving one multisite into its three separate sites.

    Also … I am very very very skeptical of using anyone’s tools. I review their code and their websites and their communication skills. And honestly, I’m not impressed by that company. They just don’t give me the feel-goods I’d want when going into business. To be fair, I’m not sure how I feel about VaultPress either, but they’re my experiment.

    To be fair, one of the services apologized after.

    “Why not move things manually?”

    I … did? This one takes the cake because when I pointed out that I did know that stuff, and perhaps one should think about to whom they are offering unsolicited advice, I got told that I should use a specific host and service. After I blocked him, he subtweeted about women/lesbians and their egos. Not a great way to win your case, buddy.

    “You should use host X.”

    Stop. I didn’t ask for hosting advice, nor was it mentioned. People move sites on the same host sometimes, you know. And once I had the site up on site.dream.press, moving it live was ten seconds of work.

    I Never Asked For Help

    This is the big deal. The word ‘help’ never came out of my mouth. I didn’t even break DNS or forget TTL this time. I didn’t need help. All this was, was me saying I hate moving sites when they’re a gig (or more) of data.

    Not a single man asked me “Do you need help?”

    They all assumed I did.

    That, friends, is why I called it Mansplaining, and blocked over ten men on Twitter.

    Comments on this post are disabled. Don’t reply, just think about it.

  • Remotely Hosting SVGs

    Remotely Hosting SVGs

    I prefer to use SVGs whenever possible instead of PNGs or font icons. They can be resized, they can be colored, and they’re relatively small. But I also have a set of 700 icons I use, and I don’t want to have to copy them to every single site. Instead, I’d like to have them all in a central repository and call them remotely.

    To The Cloud

    You can use S3 or really anything for this, but I chose DreamObjects since it’s open source and I work for DreamHost.

    For this to work, you create a bucket in DreamObjects with whatever name you want. I tend to name them after the website I’m working on, but in this case I’m making what is essentially a media library, so a better bucket name is the name of the service. Keep in mind, while you can make a DNS alias like media.mydomain.com for this, you cannot use that with https at this time. Sucks. I know.

    On DreamObjects, the accessible URL will be BUCKET.objects-us-east-1.dream.io — hang on to that. We’ll need it in a bit.

    Upload Your Files

    Since I plan to use this as a collective for ‘media’ related to a network of sites, I named the bucket for the network and then made subfolders:

    • backups
    • site1
    • site2
    • svg
    • svgcolor

    Backups is for everything you’re thinking. Site1 and Site2 are special folders for files unique to those domains only. Think large videos or a podcast. Since they’re not updated a lot, and I want them accessible without draining my server resources, the cloud is perfect. I separated my SVG files into two reasonable folders, because I plan to generate a list of these later.

    Calling An Individual File

    This is the much easier part. In general, PHP can use file_get_contents() for this. But we’re on WordPress, and there’s a more reliable way to go about this.

    $svg = wp_remote_get( 'https://BUCKET.objects-us-east-1.dream.io/svg/ICON.svg' );
    if ( ! is_wp_error( $svg ) && 404 !== wp_remote_retrieve_response_code( $svg ) ) {
    	$icon = wp_remote_retrieve_body( $svg );
    }
    

    By using wp_remote_get it’s possible to check if the file exists before displaying the content of the body. Which is, indeed, how one displays the content.

    The reason, by the way, we want to use wp_remote_get and not file_get_contents is that there’s no way to check if the file exists with the latter. You could still use file_get_contents to display the file, but once you’ve done the remote get, you may as well use it.

    Getting a List Of All SVGs

    I’d done this before with local images, getting a list of all the SVGs, but you can’t foreach a remote folder like that, so this is a little messier. You’ll need the Amazon AWS SDK for PHP. I’ve tested this on the latest of the 2 branch (2.8.31) and the 3 branch (3.32.3); the code is different for each, but both work.

    In both cases, I downloaded the .phar file and included it in a function that saves a file with a list of the image names in it. By doing that, I can have other functions read the file and not have to query DreamObjects every time I want the list.

    function get_icons() {
    	include_once( dirname( __FILE__ ) . '/aws.phar' );

    	$svgicons = '';

    	// AWS SDK Code Goes Here

    	// Write the list to uploads/svgicons.txt, replacing any old copy.
    	$upload_dir = wp_upload_dir();
    	$this_file  = $upload_dir['basedir'] . '/svgicons.txt';
    	$open_file  = fopen( $this_file, 'w' );
    	$write_file = fputs( $open_file, $svgicons );
    	fclose( $open_file );
    }
    

    I left out the SDK code. Let’s do that now.

    SDK V2

    The V2 is actually what’s still officially supported by Ceph, but the PHAR file is larger.

    	// Version 2
    	$client = Aws\S3\S3Client::factory( array(
    		'base_url' => 'https://objects-us-east-1.dream.io',
    		'key'      => AWS_ACCESS_KEY_ID,
    		'secret'   => AWS_SECRET_ACCESS_KEY,
    		'version'  => 'latest',
    		'region'   => 'us-east-1',
    	) );

    	$files = $client->getIterator( 'ListObjects', array(
    		'Bucket' => 'BUCKET',
    		'Prefix' => 'svg/',
    	) );

    	foreach ( $files as $file ) {
    		if ( strpos( $file['Key'], '.svg' ) !== false ) {
    			// Strip the extension and the folder prefix, one name per line.
    			$name      = strtolower( substr( $file['Key'], 0, strrpos( $file['Key'], '.' ) ) );
    			$name      = str_replace( 'svg/', '', $name );
    			$svgicons .= "$name\r\n";
    		}
    	}
    

    SDK V3

    I would recommend using V3 if possible. Since you’re only using it to read data and not write, it’s fine to use, even if it’s not officially supported.

    	// Version 3
    	$s3 = new Aws\S3\S3Client( [
    		'version'     => 'latest',
    		'region'      => 'us-east-1',
    		'endpoint'    => 'https://objects-us-east-1.dream.io',
    		'credentials' => [
    			'key'    => AWS_ACCESS_KEY_ID,
    			'secret' => AWS_SECRET_ACCESS_KEY,
    		],
    	] );

    	$files = $s3->getPaginator( 'ListObjects', [
    		'Bucket' => 'BUCKET',
    		'Prefix' => 'svg/',
    	] );

    	foreach ( $files as $file ) {
    		foreach ( $file['Contents'] as $item ) {
    			if ( strpos( $item['Key'], '.svg' ) !== false ) {
    				$name      = strtolower( substr( $item['Key'], 0, strrpos( $item['Key'], '.' ) ) );
    				$name      = str_replace( 'svg/', '', $name );
    				$svgicons .= "$name\r\n";
    			}
    		}
    	}
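
    As promised above, other functions can then read that file back instead of querying DreamObjects every time. Here’s a minimal sketch, assuming the same svgicons.txt location (the function name get_icon_list() is my own, not from the code above):

    function get_icon_list() {
    	// Read the cached list written by get_icons(); no cloud call needed.
    	$upload_dir = wp_upload_dir();
    	$this_file  = $upload_dir['basedir'] . '/svgicons.txt';

    	if ( ! file_exists( $this_file ) ) {
    		return array();
    	}

    	// One icon name per line; trim the \r\n and drop empty lines.
    	return array_filter( array_map( 'trim', file( $this_file ) ) );
    }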
    

    Side Note…

    If you’re on DreamHost, you’ll want to add this to your phprc file:

    extension=phar.so
    detect_unicode = Off
    phar.readonly = Off
    phar.require_hash = Off
    suhosin.executor.include.whitelist = phar
    

    That way it knows to run phar files properly. And by the way, that’s part of why phar files aren’t allowed in your WordPress plugins.

    Which is Better?

    That’s a difficult question. When it comes to loading them on the front end, it makes little difference. And having them on a CDN or a cloud service is generally better. It also means I don’t have to push the same 400 or so files to multiple servers. On the other hand, calling files from yet another domain gets you marked down on most site-speed checks… But the joke’s on them. The image is fetched in PHP, not by the browser.

    Of course the massive downside is that if the cloud is down my site can be kinda jacked. But that’s why using a check on the response code is your lifeline.
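
    To stretch that lifeline a bit further, here’s a hedged sketch of my own (not part of the original setup) that caches each fetched icon in a transient, so most page loads skip the remote call entirely and a brief outage serves the last known copy:

    function get_remote_svg( $icon ) {
    	// Hypothetical helper; BUCKET is the placeholder bucket from above.
    	$cached = get_transient( 'svgicon_' . $icon );
    	if ( false !== $cached ) {
    		return $cached;
    	}

    	$svg = wp_remote_get( 'https://BUCKET.objects-us-east-1.dream.io/svg/' . $icon . '.svg' );
    	if ( is_wp_error( $svg ) || 404 === wp_remote_retrieve_response_code( $svg ) ) {
    		return '';
    	}

    	$body = wp_remote_retrieve_body( $svg );
    	set_transient( 'svgicon_' . $icon, $body, DAY_IN_SECONDS );
    	return $body;
    }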

  • Deploying from GitHub with Codeship

    Deploying from GitHub with Codeship

    For the longest time I self-hosted all my git repositories not so much because I enjoyed doing so but because there were limited options for easily pushing code from git to my servers. Invariably you will end up using an intermediary because GitHub and their peers have no real reason nor inclination to make those things easy for you. After all, if they can keep you in their systems, more money to ’em.

    And while that’s perfectly understandable and logical, it’s annoying. And if you’re on a budget and not a Git Expert, it’s extremely frustrating. The deployment API was written in High Level Geek, and I found it a headache to decipher and test.

    Thankfully there are tools for that, and one of them is Codeship. Codeship is free for 100 builds a month, and unlike DeployHQ, it uses rsync, which meshes with my preferred way of moving data.

    The Plan

    My plan is simple. I want to push code to a Github repository and, when the push is to master, have it rsync the files over to my server. This will allow multiple people to work on the code, among other things.

    The Setup

    First make an account with Codeship. You can log in with Github, which is useful since you’ll want to connect your Github account with Codeship anyway. When you create a project, you’ll be asked which SCM you want to use:

    Create a Project in Codeship

    I’m using Github, but it’s nice to see GitLab in there as well. Once you pick your SCM, paste in the clone URL.

    Examples:

    • git@github.com:<username>/<repository_name>.git
    • https://github.com/<username>/<repository_name>.git
    • https://github.com/codeship/<repository_name>

    Finally you can pick Codeship Pro or Basic – I picked Basic because it required the least amount of know-how. Not that it’s easy, but I don’t have to configure the server or anything annoying. In fact, Basic is so basic that you can just accept the default setup commands and go. Which is what I did.

    The Settings

    Once you’ve done all that, you’re on a new screen that tells you to push code. Of course, you can’t do that until you set up a couple more things. Like tell it where to push the code.

    Click on “Project Settings” and go to the General tab. You’ll need to get that SSH key and add it to your server’s ~/.ssh/authorized_keys to allow passwordless deployment. I strongly recommend that.

    Now you can click Deploy and add a Deployment Pipeline. Pick ‘master’ unless you’re using something else.

    Now you have to add a deployment to your pipeline. There are a lot of options here, but for my plan of an rsync, the choice is “Custom Script”. Since I’m pushing code to my DreamPress server, the rsync looks like this:

    # DreamPress
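    # -a keeps permissions and timestamps, -C skips version-control cruft,
    # -z compresses in transit; --delete removes server files gone from the repo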
    rsync -aCz -e "ssh" ~/clone/ wp_e413xh@mysite.dream.press:/home/wp_e413xh/mysite.dream.press/wp-content/themes/my-theme-name/ --delete
    

    You can customize your rsync commands however you want. I like mine to delete files I’ve removed.

    Deploy the Ships

    Everything is set up, so go back to your project page and push some code to Github.

    Magic happens.

    The code deploys.

    Huzzah.

    On the free version you only get 100 private builds and five private projects. Mine are public (so’s the Github repo, for that matter) so it doesn’t matter. The only downside is that only one build runs at a time, but since builds take less than five minutes, it’s perfectly acceptable for a free solution.

  • Review: Twitter vs Twitter

    Review: Twitter vs Twitter

    Like most people, I use Twitter. I don’t always use the official Twitter tools for that, though. For example, I rarely use the website itself as it’s slow and annoys me. I’ve almost always used Twitter apps on my computers, too.

    In July I decided to try using the official Twitter app again. I’d stopped when, at some point, they stopped working on it. Since they’ve picked it back up, I felt I should try it out. As a whole, I like it 90% but … well let me explain.

    Tweetbot

    This is my normal, go-to Twitter app. I’ve used it for a couple years, and it meets nearly all my needs for a Twitter app. I can have multiple accounts, which is a must-have for me, and it separates my @-replies from my likes/retweets. I really like that. It also makes those @-replies a different color in my timeline so they stand out.

    But… It only tracks likes/retweets while the app is open. So if I log off, get a million retweets, and log back in, I’ll never know. Also it lacks some basic Twitter features, like being able to report users, or seeing whether my other accounts have messages I need to pay attention to. I have to click to expand the other accounts to see what’s going on.

    Interestingly, all third-party apps, be they iOS or MacOS, seem to have this issue with notifications. They just can’t keep track if you’re not logged in. I’m guessing it’s a matter of the API and limiting calls. Most annoyingly, Twitterrific has recently just stopped showing me @-mentions from people I don’t follow.

    Twitter

    Currently it has a brain and does everything you can do on the web. Multiple accounts, where I can see at a glance which one has an alert or message, and it’s easy to switch between them. One click. And I can see all my likes and retweets, even the ones that arrived while I was away.

    Downside? Sometimes when I log back in, the Live Stream is messed up. Also the notifications and mentions and replies are all jumbled up, in the worst ways. I can’t just see replies or likes or mentions. There’s no indication in my main timeline that a tweet IS a mention. And the alerts that tell me I have something unread lump notifications and mentions together.

    That I can’t easily find my mentions, which I want to reply to, sucks.

    So Which Wins?

    Right now, Twitter’s app is winning for the simple reasons of easy account status, easy account switching, and actually being able to see all my likes/retweets. Which is important in some cases. It certainly sucks that I can’t easily identify my replies, and I can’t separate mentions from notifications like I can in the web app, but the reliability of the API is (currently) worth it.