Half-Elf on Tech

Thoughts From a Professional Lesbian

Author: Ipstenu (Mika Epstein)

  • Bundling – Not What We Wanted

    Bundling – Not What We Wanted

    There is a problem with bundling.

    We like to bundle things together, to say “If you buy X, get Y as well for less!” The simple problem is that our customers don’t always want X. We treat bundling like it solves our problems, but it doesn’t. It just makes for angry customers who have more than they need and paid more than they wanted for things they don’t use.

    Let’s think of it like a coffee shop for a moment.

    You go to Mooncoins and you want a latte and a gluten free donut. When you get there, you look for your $5 latte and see that you can’t buy it on its own anymore, but you can spend $10 for a latte and a vegan donut that isn’t gluten free. If you want the gluten free bundle, that’s another $15 and it comes with a muffin. You don’t want the muffin. You want the option to name your own bundle.

    Okay, so how does this relate to software?

    If you live in the US, you’ve probably heard of the Progressive insurance company. Yes, the Flo ones. Since they own Jacob’s Field in Cleveland, I hear a lot more of their commercials than I care to, but they have a weird deal with a ‘name your own price’ bundle where you pick what you want, and how much you want to pay for it. Within reason.

    This means we ask “What are you bundling?”

    I get waxed once a month or so and they offer ‘packages.’ There’s a ‘whole face’ package, and then separate services for lip, chin, and eyebrows. What I want is lip and chin, which they don’t offer as a pair, so instead of paying less for two services (which is what a package gets you), I have a choice of paying more for a package I don’t want (whole face) or more for the two services I do want (lip and chin).

    The company wants me to buy a package, which would cost quite a bit less than buying the lip, chin, and eyebrow services separately. But they don’t have a ‘pick two’ option.

    When you decide what to put in a bundle, you presume you know more about what the customer wants than the customer does. The flaw in this plan is simple: you don’t. You presume you know what works best, but you don’t.

    You may have an idea of what works best for the people you’ve run into, and some of them would look at the price difference between paying for two services and paying for a package, see that it works out to less than $50 a year, and go for it. But then you have someone who thinks $50 a year isn’t nothing, and would rather save it than pay for services they don’t really want.

    Bundling is meant to simplify options and let people pick what they need. What it more often does is force people to decide not just what they want and need, but what those things are worth to them in time and money.

    Maybe we should start rethinking what we bundle and instead consider how we bundle. Let the customers have options. Use features like “People who bought X also bought Y.” Add in discounts like “If you buy 3 products, get 5% off your entire purchase.” Offer example bundles as your current deals: “Most people buy these 3 together. Purchase them now and save 5%, or mix and match your own.”

    But start looking at how people purchase your products as a whole, and give them discounts not on the bundle you invent, but on the bundle they create. Guide them to what they need, not what you think they want.

  • Git Subtrees

    Git Subtrees

    I have a project in Hugo where I wanted the content to be editable by anyone but the theme and config to remain mine. In this way, anyone could add an article to a new site, but only I could publish. Sounds smart, right? The basic concept would be this:

    • A private repository, on my own server, where I maintained the source code (themes etc)
    • A public repository, on GitHub or GitLab, where I maintained the content

    Taking into consideration how Hugo stores data, I had to rethink how I set up the code. By default, Hugo has two main folders for your content: content and data. Those folders sit at the main (root) level of a Hugo install. This is normally fine, since I deploy via a post-deploy hook that pushes whatever I check in on master out to a temp folder and then runs a Hugo build on it. I’m still using this deploy method because it lets me push a commit without having to build locally first. Obviously there are pros and cons, but what I like is being able to edit my content, push, and have it work from my iPad.
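    For the curious, that deploy step can be sketched as a post-receive hook on the server. This is a minimal sketch with entirely hypothetical paths (repo location, web root), not my actual setup:

```shell
#!/bin/sh
# post-receive hook: check out whatever just landed on master into a scratch
# folder, build it with Hugo, and publish the result. All paths are examples.
set -e

GIT_DIR=/home/user/repos/site.git     # the bare repo receiving the push
WEB_ROOT=/home/user/public_html       # where the built site is served from
SCRATCH=$(mktemp -d)

# Export the latest master into the scratch folder.
git --git-dir="$GIT_DIR" --work-tree="$SCRATCH" checkout -f master

# Build from the scratch copy straight into the web root.
hugo --source "$SCRATCH" --destination "$WEB_ROOT"

rm -rf "$SCRATCH"
```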

    Now, keeping this setup, in order to split my repository I need to solve a few problems.

    Contain Content Collectively

    No matter what, I need one and only one location for my content. Two folders inside it is fine, but everything has to live within a single parent folder. Doing this is fairly straightforward.

    In the config.toml file, I set two defines:

    contentdir = "content/posts"
    datadir = "content/data"
    

    Then I moved the files in content to content/posts and moved data to content/data. I ran a quick local test to make sure it worked and, since it did, pushed that change live. Everything was fine. Perfect.
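    If you want to see that folder shuffle as git operations, here’s a sketch using a throwaway repository (the file names are made up). The only wrinkle is that git mv can’t move a directory into itself, so content makes a brief stop as posts:

```shell
#!/bin/sh
set -e
demo=$(mktemp -d) && cd "$demo"
git init -q
git config user.email demo@example.com && git config user.name demo

# Stand-ins for the original Hugo layout (hypothetical file names).
mkdir content data
echo 'a post' > content/hello.md
echo '{}' > data/site.json
git add . && git commit -qm 'original layout'

# content -> content/posts takes three steps, since git mv cannot
# move a directory inside itself.
git mv content posts
mkdir content
git mv posts content/
git mv data content/
git commit -qm 'nest posts and data under content/'
```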

    Putting Posts Publicly

    The second step was making a public repository ‘somewhere.’ The question of ‘where’ was fairly simple. You have a lot of options, but for me it boils down to GitLab or GitHub. While GitHub is the flavor du jour, GitLab lets you make a private repository for free, but both require users to log in with an account to edit or make issues. Pick whichever one you want. It doesn’t matter.

    What does matter is that I set it up with two folders: posts and data

    That’s right. I’m replicating the inside of my content folder. Why? Well that’s because of the next step.

    Serving Subs Simply

    This is actually the hardest part, and it led me to complain that every time I use Submodules in Git, I remember why I hate them. I really want to love Submodules. The idea is that you check out a specific version of another repository as a module, and now you have it. The problem is that updates are complicated. You have to update the Submodule separately, and if you work with a team and one person doesn’t, there’s a good chance they’ll push a reference to the old version of the Submodule, because the Submodule’s files aren’t version controlled in your own git repository, only a pointer to them is.

    It gets worse if you have to solve merge conflicts. Just run away.

    On the other hand, there’s a tool called Subtree, which two of my Twitter friends introduced me to after I tweeted my Submodule complaint. Subtree uses a merge trick to get the same result as a Submodule, only it actually stores the files in the main repository, and then merges your changes back up to its own repository. Subtrees are not a silver bullet, but in this case it was what I needed.

    Checking out the subtree is easy enough. You tell it where you want to store the repository (a folder named content) and you give it the location of your remote, the branch name, and voila:

    $ git subtree add --prefix content git@github.com:ipstenu/hugo-content.git master --squash
    git fetch git@github.com:ipstenu/hugo-content.git master
    From github.com:ipstenu/hugo-content
     * branch            master     -> FETCH_HEAD
    Added dir 'content'
    

    Since typing in the full path can get pretty annoying, it’s savvy to add the subtree as a remote:

    $ git remote add -f hugo-content git@github.com:ipstenu/hugo-content.git
    

    Which means the add command would be this:

    $ git subtree add --prefix content hugo-content master --squash
    

    Maintaining Merge Maneuverability

    Once we have all this in, we hit a new problem. The subtree is not synced by default.

    When a subproject is added, it is not automatically kept in sync with the upstream changes so you have to pull it in like this:

    $ git subtree pull --prefix content hugo-content master --squash
    

    When you have new code to add, run this:

    $ git subtree push --prefix content hugo-content master
    

    That makes the process for a new article a little extra weird but it does work.

    Documenting Data Distribution

    Here’s how I update in the real world:

    1. Edit my local copy of the content folder in the hugo-library repository
    2. Add and commit the changed content with a useful message
    3. Push the subtree
    4. Push the main repository

    Done.
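    If you’d like to rehearse that whole workflow without touching GitHub, you can stand up a throwaway ‘remote’ on disk and run the same subtree commands against it. Everything here (paths, file names, commit messages) is hypothetical scaffolding; only the git subtree invocations mirror the real workflow:

```shell
#!/bin/sh
set -e
work=$(mktemp -d) && cd "$work"

# A bare repo standing in for the public content repository.
git init -q --bare content-remote.git

# Seed it with one post so the subtree has something to pull.
git init -q seed && cd seed
git config user.email demo@example.com && git config user.name demo
mkdir posts && echo 'hello' > posts/first.md
git add . && git commit -qm 'first post'
git push -q ../content-remote.git HEAD:master
cd ..

# The main (private) site repository.
git init -q site && cd site
git config user.email demo@example.com && git config user.name demo
echo 'theme stuff' > config.toml
git add . && git commit -qm 'site skeleton'

# Graft the content repo in as a subtree at content/.
git remote add -f hugo-content ../content-remote.git
git subtree add --prefix content hugo-content master --squash

# Edit content locally, commit, then push the change back upstream.
echo 'more words' >> content/posts/first.md
git add . && git commit -qm 'update post'
git subtree push --prefix content hugo-content master
```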

    If someone else has a pull request, I would need to merge it (probably directly on GitHub) and then do the following:

    1. Pull from the subtree
    2. Push to the main repository

    My one weird caveat is that Coda can get confused when updating, as it doesn’t always remember which repository I want to be on, but since I do all of my pushes from the command line, that doesn’t bother me much.

  • Gmail: Handling Bad Emails

    Gmail: Handling Bad Emails

    No, not bad emails as in the ones you consider saving and posting for someone’s everlasting internet shame. Bad emails are the ones that go to the wrong place through no fault of your own. We’re talking about people using an email you’ve not used in a decade, or someone who can’t remember your name is spelled with an A and not an E, and so on. You know, typos.

    One of the things I did on my old email was set up a trash email box. That is, people could email not-me@domain.com or me@olddomain.com and they’d get an auto-reply telling them the email was no longer in service. This mattered most for an old domain I owned but didn’t use, which people I needed to talk to still thought was real. I could have forwarded it to myself, but after 10 years, I upgraded to the “Folks, seriously!” alert.

    Doing this on cPanel was pretty easy: make a custom alias that dumps to /dev/null and sends a reply. Doing it on Gmail was a little weirder and made me think about the situation.

    Canned Replies

    First you have to set up Canned Responses, which is a Lab (go to Gmail -> Settings -> Labs). You make a response the same way you make an email, only instead of sending it you save it, by clicking on the down arrow and saving as a Canned Response:

    Canned Response Save

    Once you have it saved, set up a filter so any email to @domain.com gets a reply with that Canned Response.

    Don’t Be Sneaky

    If you’re thinking “Aha! I can use this to be sneaky!” with the intent of auto-replying so people think you’re really reading their email, there’s a problem with that. The reply comes back from YOU+canned.response@example.com and no, there’s no really easy way around that. Someone did come up with a Google Script for it, but it’s not for the faint of heart.

    Now the question is, is that a bad thing? Is it bad for people to know they got a canned reply? No, not really. The +canned.response makes it obvious that it’s canned, but it’s also obvious to you, so you can filter those emails however you want. People who reply to a canned? Auto-trash ’em. Or block them.

    Filters

    Instead of the canned reply, though, you can also just discard the email. Either don’t set up the email (or its alias) at all, or, if you do, filter it out and dump it. The only reason I can see for making an alias for email you don’t want is if you plan to review it later, or you have a catch-all email address. If you do make an alias, be sure to filter the emails and mark them read so you don’t get distracted by them.

    Catch All

    There’s a slightly different approach to all this, though: the idea of a catch-all email. By default, G Suite sends all your misdirected emails to trash. Accidentally mailed bob@example.com instead of b0b@example.com because the numbers and letters look the same? Tough luck. Unless Bob was smart enough to set that up as an alias (which I tend to do), your email was lost. The alternative is to designate one user as a ‘catch all’ account that gets everything that doesn’t belong to an existing user.

    That catch-all can auto-reply to all emails, forward the important ones, and so on. If you’re a business, you should do this so you don’t lose any misdirected emails from customers (they can’t spell, after all), but remember to check that mailbox often, as it will also collect all the spam for all your accounts.

  • Review: Spark Love for Your Gmail

    Review: Spark Love for Your Gmail

    Moving my email to Google Apps has, thus far, been interesting. I don’t regret it, and consolidating multiple emails down to three was a good choice. The learning curve has been trickier: adding in email aliases so I can mail from all the accounts I use, and working within the limits of Gmail’s shitty filters so everything is funneled to the right place.

    As I mentioned before, I have a ton of aliases. Adding them in on the Google Admin back end (just renamed G Suite) is weird but easy enough. To be able to email from them, you have to also add them in via the normal Gmail web app. It’s tucked under Settings > Accounts, and under “Send mail as”, click Add another email address.

    But if you don’t want to use the web app (and I don’t), Gmail can be a bit of a turd. It doesn’t work great with the desktop Mail.app, and it works terribly with iOS’s mail. Gmail and Apple are just at odds with how email works. They both want to control your experience and redefine email in different ways. Frankly I prefer the Mac way, but that’s personal preference.

    What is a universal problem is that I needed a way to email from my aliases, and if you set up email as Google Mail in the iOS mail app … you can’t.

    Yes, you read that right. It is flat out impossible to set up email aliases for a Google mail account in the iOS mail app. If you want to use the iOS mail app with Google email and aliases, you have to set up Gmail as an IMAP account, and that’s sort of a shit show in the making. Gmail’s IMAP implementation is non-standard, to put it simply. Among other things, you can only have 15 IMAP connections per account. If I had the desktop app open along with my iPhone and iPad, weird shit happened.

    Now, there are solutions. You could use the Gmail app, but it sucks and doesn’t have an Apple Watch component. Also it’s ugly. Excuse me. It’s basic. You could also use Google’s Inbox app, but you have to use Inbox and the email filters aren’t as robust.

    This leads us to our final solution. Spark.

    This app was something I’d played with before, as it had email alerts on the Apple Watch, and I wanted to get pinged for certain work emails while updating all the DreamPress installs over at DreamHost. Sadly, the reason the app couldn’t meet that need is Gmail, again, which has no way to filter properly and send an alert only when an email meets specific criteria.

    What Spark does do is everything else. It has a Watch component, it syncs between my iPad and iPhone, it looks like an iOS app, it acts like a Google app, it pulls in the features people rave about Inbox, and it has email aliases that are simple to set up. Whew. The only thing it doesn’t do is show me a count for unread messages in my folders.

    I can live with that.

  • Optimizing Images

    Optimizing Images

    After running a regeneration of my images (due to changing things in my theme), my GTmetrix score dropped from an A to a D! When I looked into why, I saw it was telling me my images should be optimized.

    Best to get on that, eh?

    The easiest way is to install jpegoptim on the server:

    $ yum install jpegoptim
    

    And then to run a compression on my images:

    $ jpegoptim IMAGE.jpg
    

    Anyone fancy running that on a few thousand images? Hell no. We have a couple options here. One is to go into each folder and run this:

    $ jpegoptim *.jpeg *.jpg
    

    The other is to sit in the image root folder and run this:

    $ find . -type f -name '*.jp?g' -exec jpegoptim {} --strip-all \;
    

    I picked --strip-all since that removes all the image meta data. While I would never want to consider that on a photoblog, I needed to compress everything and, for some reason, unless I stripped that data I didn’t get smaller sizes. For this case, it wasn’t an issue.

    What about PNGs? Use optipng ($ yum install optipng) and run this:

    $ find . -type f -name '*.png' -exec optipng {} \;
    

    Going forward, I used Homebrew to install those tools locally and compress my images better, though I’m usually pretty good about remembering that part. Well. I thought I was.
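    Since I can never remember which tool goes with which extension, a tiny dispatcher helps. This one is a dry-run sketch: it only echoes the command it would run (so it works even where the tools aren’t installed); drop the echo to make it real:

```shell
#!/bin/sh
# Dry-run dispatcher: print the optimizer invocation for a given image.
# Remove the 'echo' once jpegoptim and optipng are installed.
optimize() {
  case "$1" in
    *.jpg|*.jpeg) echo "jpegoptim --strip-all $1" ;;
    *.png)        echo "optipng $1" ;;
    *)            echo "skipping $1" ;;
  esac
}

optimize header.jpg
optimize logo.png
optimize notes.txt
```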

  • Yoast SEO: Selective Stopwords

    Yoast SEO: Selective Stopwords

    Stopwords are those small words that should be removed from URLs: you know, ‘a’ and ‘and’ and ‘or’ and so on and so forth. They make for long, ungainly URLs, and really you should remove them.

    If you happen to use Yoast SEO for WordPress, and you want to disable stopwords, there’s a simple way about that. Go to SEO -> Advanced and disable the feature for stopwords.

    Disable stop-word cleanup, but why?

    If you want to kill it with fire and prevent everyone on your site from being able to activate them ever, you can toss this into an MU plugin.

    add_filter( 'wpseo_stopwords', '__return_empty_array' );
    remove_action( 'get_sample_permalink', 'wpseo_remove_stopwords_sample_permalink' );
    

    The first filter makes the stopwords kick back nothing, and the remove action stops the process from running. You probably only need the second one, but better safe than sorry, I always say.
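    If MU plugins are new to you: they’re plain PHP files dropped into wp-content/mu-plugins, which WordPress loads automatically with no activation step. A sketch of setting that up from the shell, with a file name I made up:

```shell
#!/bin/sh
set -e
cd "$(mktemp -d)"   # stand-in for your WordPress root

# WordPress doesn't create mu-plugins by default.
mkdir -p wp-content/mu-plugins

# Drop the two stopword overrides into a must-use plugin file.
cat > wp-content/mu-plugins/yoast-no-stopwords.php <<'PHP'
<?php
add_filter( 'wpseo_stopwords', '__return_empty_array' );
remove_action( 'get_sample_permalink', 'wpseo_remove_stopwords_sample_permalink' );
PHP
```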

    But … what if you want stop words removed, but you don’t want them removed on certain custom post types? Welcome to my world! I wanted to remove them from two post types only.

    Enter my frankencode:

    <?php
    
    /*
    Plugin Name: Yoast SEO Customizations
    Description: Some tweaks I have for Yoast SEO
    Version: 2.0
    */
    
    // Unless we're on a post or a post editing related page, shut up
    global $pagenow;
    
    $pagenow_array = array( 'post.php', 'edit.php', 'post-new.php' );
    if ( ! in_array( $pagenow, $pagenow_array, true ) ) {
        return;
    }
    
    // Since we are, we need to know exactly what we're on, and this is a hassle.
    global $typenow;
    
    // When editing pages, $typenow isn't set until later!
    if ( empty( $typenow ) ) {
        // Try to pick it up from the query string.
        if ( ! empty( $_GET['post'] ) ) {
            $post    = get_post( (int) $_GET['post'] );
            $typenow = $post->post_type;
        }
        // Try to pick it up from the post_type query arg.
        elseif ( ! empty( $_GET['post_type'] ) ) {
            $typenow = sanitize_key( $_GET['post_type'] );
        }
        // Try to pick it up from the quick edit AJAX post.
        elseif ( ! empty( $_POST['post_ID'] ) ) {
            $post    = get_post( (int) $_POST['post_ID'] );
            $typenow = $post->post_type;
        }
        else {
            $typenow = 'nopostfound';
        }
    }
    
    $typenow_array = array( 'post_type_shows', 'post_type_characters' );
    if ( ! in_array( $typenow, $typenow_array, true ) ) {
        return;
    }
    
    add_filter( 'wpseo_stopwords', '__return_empty_array' );
    remove_action( 'get_sample_permalink', 'wpseo_remove_stopwords_sample_permalink', 10 );
    

    There was something funny to this, by the way. Originally I didn’t have the $pagenow code. Didn’t need it. But when I left it out, Yoast SEO broke with a weird error. It refused to load any of the sub-screens for the admin settings!

    Cannot Load Yoast SEO Admin Pages

    After some backtracking of “Okay, was it working before…?” I determined it was the call to global $typenow; – a global that isn’t used at all in the Yoast SEO source code, as far as I could find. Still, by making my code bail early when it’s not on a page it should be on, I made the rest of the WP Admin faster, and that’s a win for everyone.