Half-Elf on Tech

Thoughts From a Professional Lesbian

Tag: automation

  • Zap a Daily Tweet

    Zap a Daily Tweet

    Last week I told you how I made a random post of the day. Well, now I want to Tweet that post once a day.

    Now there are a lot (a lot) of possibilities to handle something like that in WordPress, and a lot of plugins that purport to Tweet old posts. The problem with all of them was that they used WordPress.

    There's nothing wrong with WordPress

    Obviously. But at the same time, asking WP to do 'things' that aren't its business, like Tweeting random posts, is not a great idea. WordPress is the right tool for some jobs, but not all jobs, after all.

    What is WordPress' job is generating a random post and setting a tracker (a transient) to store it for a day. And it's also WordPress' job to output that data how I want, in a JSON format.

    The rest, we turn to a service. Zapier.

    A Service?

    Like many WordPressers, I like to roll my own whenever humanly possible. In this case, I could have added an OAuth library and scripted a cron job, but that puts a maintenance burden on me and could slow my site down. Since I have the JSON call, all I need is 'something' to do the following:

    1. Every day, at a specific time, do things
    2. Visit a specific URL and parse the JSON data
    3. Craft a Tweet based on the data in 2
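
    Steps 2 and 3 boil down to "fetch JSON, build a string," which can be sketched in shell. This is a minimal sketch with a made-up JSON shape (title and url fields); the real endpoint and field names depend on how you built your JSON output.

```shell
# Hypothetical JSON from the random-post endpoint; in real life you'd
# fetch it first, e.g.: json=$(curl -s "$YOUR_JSON_URL")
json='{"title":"My Random Post","url":"https://example.com/?p=123"}'

# Pull out the fields (python3 used here as a stand-in JSON parser)
title=$(printf '%s' "$json" | python3 -c 'import json,sys; print(json.load(sys.stdin)["title"])')
url=$(printf '%s' "$json" | python3 -c 'import json,sys; print(json.load(sys.stdin)["url"])')

# Craft the tweet text
echo "$title $url"
```

    Zapier does exactly this for you, minus the parsing code: the webhook step fetches, and the tweet step interpolates the fields.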

    I dithered and kvetched for days (Monday and Tuesday) before complaining to Otto on Tuesday night. He pointed out he'd written those scripts. On Wednesday, he and I bandied about ideas, and he said I should use IFTTT. Even with IFTTT's Maker code, though, the real tool I needed was one that lets me chain logic together.

    Zapier

    The concept of IFTTT is just "If This, Then That." If one thing is true, then do another. It's very simple logic. Too simple, because what I needed was "If this, then do that, and tell another that." There wasn't an easy way I could find to do it with IFTTT, so I went with the more complicated option.

    [Image: Example of what the flow looks like - the trigger is every day, the action is a GET, the final action is a tweet]

    Three steps. Looks like my little three-item list, doesn't it?

    The first step is obvious: set a specific time to run the zap. It's a schedule. The second step is just a webhook saying 'Get the data from this URL.' And the third step is aware!

    [Image: Example of the tweet, with placeholders for the post name and URL]

    Pretty nice. If you click on the 'add field' box in the message content (upper right), it knows how to grab the variables from the previous steps and insert them. Which is damn cool.

  • Updating Bower with Grunt

    Updating Bower with Grunt

    The goal of automation is to make the annoying stuff, the things I don’t want to have to remember, easier to do. Bower is useful for updating things. Grunt is useful for running a series of commands. Using them together makes life easier.

    In my little world, everything lives in a folder called ‘assets’ and it’s very simple.

    Add a Package to Bower and call it in Grunt

    First I have a .bowerrc file which very simply says this:

    {
       "directory": "vendor"
    }
    

    That tells Bower where to install things. So when I run bower install jquery-backstretch --save in my assets folder, it saves Backstretch to the vendor folder.

    In my gruntfile.js, I have this to pull in my backstretch arguments and the main file into one file, uncompressed, a folder level up:

    concat: {
        // Combine all the JS into one uncompressed file, one level up
        backstretch: {
            src: ['js/backstretch.args.js', 'vendor/jquery-backstretch/jquery.backstretch.min.js'],
            dest: '../js/backstretch.js',
        },
    },
    

    Just like magic.

    Tell Grunt to Update Bower

    But while Bower pulled in the packages, I don’t want to have to tell Bower ‘Hey, make sure everything’s up to date!’ every few days. I want to make sure I’m on the latest version of a branch most of the time, for security reasons at the very least. That means I have this in my bower.json file (the tilde in ~4.2.3 means ‘any 4.2.x release, starting at 4.2.3’):

      "dependencies": {
        "bourbon": "~4.2.3",
        "neat": "~1.7.2",
        "jquery-backstretch": "~2.0.4"
      }
    

    So if I run bower update, I get this:

    bower jquery-backstretch#~2.0.4 cached git://github.com/srobbin/jquery-backstretch.git#2.0.4
    bower jquery-backstretch#~2.0.4         validate 2.0.4 against git://github.com/srobbin/jquery-backstretch.git#~2.0.4
    bower bourbon#~4.2.3                      cached git://github.com/thoughtbot/bourbon.git#4.2.3
    bower bourbon#~4.2.3                    validate 4.2.3 against git://github.com/thoughtbot/bourbon.git#~4.2.3
    bower neat#~1.7.2                         cached git://github.com/thoughtbot/neat.git#1.7.2
    bower neat#~1.7.2                       validate 1.7.2 against git://github.com/thoughtbot/neat.git#~1.7.2
    bower jquery#~1.9.1                       cached git://github.com/jquery/jquery.git#1.9.1
    bower jquery#~1.9.1                     validate 1.9.1 against git://github.com/jquery/jquery.git#~1.9.1
    bower neat#~1.7.2                        install neat#1.7.2
    bower bourbon#~4.2.3                     install bourbon#4.2.3
    
    neat#1.7.2 vendor/neat
    └── bourbon#4.2.3
    

    Cool. But who wants to run that every day?

    Instead, I ran npm install grunt-bower-update --save-dev to install a new Grunt tool, Bower Update. With that code added to my gruntfile.js, every time I run my grunt update command, it first updates my libraries and then runs the processes.

    There is a downside to this. I use git to keep track of my work, so I can either track the vendor packages in my repo (which can make it a little large) or remember to install the packages myself. There are other ways around this, like using grunt-bower-task to set things up to install if not found and update if they are found. I went with including them in my repos, which makes the git pull a bit large (it added about 6,000 files), but since I properly delete my assets folder when I deploy from git, it won’t impact the size of my server’s package.
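
    If you go the other way and keep the packages out of the repo instead, the ignore rule is one line (this assumes the vendor directory named in my .bowerrc):

```
# .gitignore in the assets folder -- don't track Bower's downloads
vendor/
```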

    Register a New Bower Package

    Randomly, when I initially tried to install backstretch, I forgot it was named ‘jquery-backstretch’ and did this:

    $ bower install backstretch --save
    bower                        ENOTFOUND Package backstretch not found
    

    Obviously the right fix is to use the right repo name (and Bower has a great search tool to help me do that). But what if I did want to package it up as backstretch? Or if I wanted to add my own repo? Well, I would have to register that package first. And that’s pretty easy:

    bower register backstretch git://github.com/srobbin/jquery-backstretch.git
    

    Your Tricks?

    Do you have Bower tricks?

  • Bower To The Master

    Bower To The Master

    I recently mastered using Grunt to handle automation.

    And then I was handed some Bower code by Carrie Dils. I’m up for a challenge, I muttered under my breath. I already have Node and NPM and Git, so this shouldn’t be too terrible.

    Turns out I didn’t need to change a damn thing!

    First off, Carrie and I are on the same wavelength, having named our files nearly the same and separated them the same. Second, I had been incredibly brilliant and put all of my code in separate files (my.css, my-config.php, etc etc). Third, I had documented everything that I had changed in all of my files in a my-readme.txt file.

    But if I’m using Grunt, what am I going to get out of Bower?

    Bower is a ‘front end’ package manager.

    To install packages, I make a folder for my work, go there on the command line, and type this:

    $ bower install jquery

    That would install jQuery into my folder. It’s dependency aware as well, so if I install Bourbon, it will include Neat. This is much the same as Grunt, which can install its plugins and dependencies, but where Grunt is for installing Node modules, Bower is for JS and CSS and HTML as well.

    Grunt is for running tasks. Bower is for managing components. They’re friends.

    Bower lets me set up all the required components for my site (jquery for example). Grunt lets me compress, join, minify, and automate the deployment of those components.

    I tell Bower to get the files and what versions they can be. I tell Grunt to combine all my mini JS files into one, compress them, and put them in another location. That means I tell Bower to bring in jQuery, but it puts it in a development folder. Grunt takes that and copies it to the js folder.
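
    What that Grunt step boils down to can be sketched in plain shell, in a scratch directory with illustrative file names (Grunt adds the configuration, minifying, and watching on top of this):

```shell
# Scratch directory standing in for the assets/ and deploy folders
work=$(mktemp -d)
mkdir -p "$work/assets/js" "$work/js"

# Two "mini" JS files, as Bower or my own code might provide
echo 'var a = 1;' > "$work/assets/js/one.js"
echo 'var b = 2;' > "$work/assets/js/two.js"

# The combine step: join them into one file in the deploy folder
cat "$work/assets/js/"*.js > "$work/js/site.js"
cat "$work/js/site.js"   # prints both lines, now one file
```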

    Personally, I take it a step further and, when I use Git to push my code, I tell it to delete the development folder off the server. I also do as Chase Adams does, and I don’t version control my dev packages. I may define jQuery’s version, but I don’t worry about capturing that in my repositories.

    You don’t have to use Grunt. You could use Gulp. I had a sticker for Grunt on my laptop from a friend, so I tried it first and found I liked it.

    Taking all this a step further, there are tools like Yeoman that will let you kickstart a project by saying ‘yo’ and telling it what kind of project you want to make. Yes, there’s a WordPress project called YeoPress.

    The point of all this is that automation is the queen of development. Don’t do manually what you can safely, reliably, and responsibly automate. Like the Queen on the chessboard, strike out in all directions and control the board. Use the tools to repeat the hard work, to keep dependencies up to date, and to automate the annoying work.

  • Deploying With Git

    Deploying With Git

    I’ve been banging my head on this for a while. It really did take me a year and reading lots of things to begin to understand that I was totally wrong. As with many things, I have to sit down and use them for a while to understand what I’m doing wrong, and what I need to learn. I finally had my git breakthrough. It’s very possible (no, likely) that I got some of this wrong, but I feel like I now understand more about git and how it should be used, and that made me more confident in what I’m doing with it.

    Speaking as a non-developer (hey, sometimes I am!), I just want a command line upgrade for my stuff. This code also lacks a WordPress-esque click-to-upgrade, so I have to do a four-step tango of download, unpack, copy, delete in order to upgrade. (By the way, more software should have one-click upgrades like that; it would make life easier for everyone. I do know that the backend support is non-trivial, so I would love to see a third party act as a deployment hub, much like GitHub is a repository hub.) The more steps I have, the more apt I am to make an error. So in the interests of reducing my errors and my overhead, I wanted to find a faster and safer way to deploy. (My previous job was all about deployment. We had a lot of complicated scripts to take our code, compile it, compress it, move it to a staging site, and then email that it was ready. From there, we had more scripts to ‘move to test’ and ‘move to prod’, which made sense.)

    I already think that automating and simplifying deployment is good, and all I want to do is get one version, the ‘good’ version, of the code and be able to easily update it. One or two lines is best. Simple, reliable, and easy to use. That’s what I want.

    Recently, Ryan Hellyer pointed out git archive, which he claims is faster than clone. I’d believe it if I could get it to work. When I tried using HTTPS, I got this: fatal: Operation not supported by protocol. So I tried using SSH and got Could not resolve hostname… instead. Basically, I had all these problems. Turns out GitHub turned off git archive --remote, so I’m dead in the water there for any code hosted there, which is most of my code.

    I kicked various permutations of this around for a couple afternoons before finally throwing my hands up, yet again, and looking into something else, including Capistrano, which Mark Jaquith uses in WP Stack. It’s something I’m personally interested in for work-related reasons. Capistrano is a Ruby app, and vulnerability fears aside, it’s not very user friendly. At my old job, we used Ant a lot to deploy, though there don’t seem to be Ant tasks for Git yet. The problem with both of those is that they require you to pull down the whole hunk ‘o code, and I’m trying to avoid that in this use case. Keep it simple, stupid. Adding more layers of code and complication onto a project that doesn’t need it is bad.

    Finally I went back to git and re-read how the whole distributed deployment works. I know how to clone a repository, which essentially gets me ‘trunk.’ And I know that a pull does a fetch followed by a merge, in case I’d done any edits, and it saves my edits. Hence merge, and why I dig it for dev. At length it occurred to me that what I wanted was to check out the git repo without downloading the code at first. Well I know how to do that:

    $ git clone --no-hardlinks --no-checkout https://github.com/wp-cli/wp-cli.git wp-cli
    Cloning into 'wp-cli'...
    remote: Counting objects: 10464, done.
    remote: Compressing objects: 100% (3896/3896), done.
    remote: Total 10464 (delta 6635), reused 10265 (delta 6471)
    Receiving objects: 100% (10464/10464), 1.20 MiB | 1.04 MiB/s, done.
    Resolving deltas: 100% (6635/6635), done.
    

    That brings down just a .git folder, which is small. And from there, I know how to get a list of tags:

    $ git tag -l
    v0.3.0
    [...]
    v0.8.0
    v0.9.0
    

    And now I can check out version 8!

    $ git checkout v0.8.0
    Note: checking out 'v0.8.0'.
    
    You are in 'detached HEAD' state. You can look around, make experimental
    changes and commit them, and you can discard any commits you make in this
    state without impacting any branches by performing another checkout.
    
    If you want to create a new branch to retain commits you create, you may
    do so (now or later) by using -b with the checkout command again. Example:
    
      git checkout -b new_branch_name
    
    HEAD is now at 8acc57d... set version to 0.8.0
    

    Well damn it, that was simple. But I don’t want to be in a detached HEAD state, as that makes it a little weird to update. I mean, I could do it with a switch back to master, a pull, and a checkout again, but then I thought about local branches. Even though I’m never making changes to core code (ever), let’s be smart.

    [Image: Linus Torvalds flipping Nvidia the bird]
    One codebase I use has the master branch as its current version, which is cool. Then there’s a 1.4.5 branch where they’re working on everything new, so when a new version comes out, I can git pull and be done. (In this moment, I kind of started to get how you should be using git. In SVN, trunk is where you develop and you check into tags (for WordPress at least) to push finished versions. In git, you make your own branch, develop there, and merge back into master when you’re ready to release. Commence head-desking.)

    One conundrum was that there are tags and branches, and people use them as they see fit. While some of the code I use defines branches so I can check out a branch, others just use tags, which are tree-ish. Thankfully, you can make your own branch off a tag, which is what I did.
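
    That branch-off-a-tag trick is easy to rehearse in a throwaway repo before touching a real one (everything below is a self-contained demo, not one of the repos from this post):

```shell
# Build a tiny repo with one tagged release
tmp=$(mktemp -d)
cd "$tmp"
git init -q demo && cd demo
git -c user.email=me@example.com -c user.name=me commit -q --allow-empty -m "release 1.0.0"
git tag v1.0.0

# Make a local branch based on the tag -- no detached HEAD
git checkout -q -b mysite-v1.0.0 v1.0.0
git rev-parse --abbrev-ref HEAD   # prints: mysite-v1.0.0
```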

    I tried it again with a non-GitHub slice of code: MediaWiki. (MediaWiki, compared to wp-cli, is huge, and took a while to run on my laptop. It was a lot faster on my server. 419,236 objects vs 10,464. I’m just saying someone needs to rethink the whole ‘Clone is faster!’ argument, since it’s slow now, or slow later, when downloading large files. Large files is large.) Now we have a new issue. MediaWiki’s .git folder is 228.96 MiB… Interestingly, my MediaWiki install is about 155 MiB in and of itself, and disk space is cheap. If it’s not, you’ve got the wrong host. Still, it’s a drawback and I’m not really fond of it. Running a repack makes it a little smaller. Running garbage collection made it way smaller, but it’s not recommended. This, however, is recommended:

    git repack -a -d --depth=1 --window=1

    It doesn’t make it super small, but hey, it worked.

    Speaking of worked, since the whole process worked twice, I decided to move one of my installs (after making a backup!) over to this new workflow. This was a little odd, but for Mediawiki it went like this:

    git clone --no-hardlinks --no-checkout https://gerrit.wikimedia.org/r/p/mediawiki/core.git wiki2
    mv wiki2/.git wiki/
    rmdir wiki2
    cd wiki
    git reset --hard HEAD
    

    Now we’ve cloned the repo, moved the .git folder into the existing install, and reset HEAD, and I’m ready to set up my install to use the latest tag. This time I’m going to make a branch (mysite-1.20.3) based on the tag (1.20.3):

    git checkout -b mysite-1.20.3 1.20.3
    

    And this works great.

    The drawback to pulling a specific tag is that when I want to update to a new tag (1.20.4, let’s say), I have to update everything and then check out the new tag in order to pull down the files. Now, unlike svn, I’m not making a full copy of my base code with every branch or tag; it’s all handled by lightweight refs, so there’s no harm keeping these older versions around. If I want to delete one, it’s a simple git branch -D mysite-1.20.3 call and I’m done. No code changes (save themes and .htaccess), no merging needed. And if there’s a problem, I can switch back really fast to the old version with git checkout mysite-1.20.3. The annoyance is that I just want to stay on the 1.20 branch, don’t I? Update the minors as they come, just like the WP minor-release updater only updates changed files.

    Thus, I asked myself if there was a better way and, in the case of MediaWiki, there is! In the world of doing_it_right(), MediaWiki has branches and tags (so does WP, if you look at trac), and they use branches called ‘REL’. If you’re not sure what branches your repo uses, type git remote show origin and it will list everything. There I see REL1_20, and since I’m using version 1.20.3 here, I surmised that I can actually do this instead:

    git checkout -b mysite-REL1_20 origin/REL1_20 
    

    This checks out my branch and says “This branch follows along with REL1_20,” so when I want to update my branch it’s two commands:

    git fetch --all
    git pull
    

    The fetch downloads the changesets and the pull applies them. It looks like this in the real world (where I’m using REL1_21, since I wanted to test some functionality on the alpha version):

    $ git fetch --all
    Fetching origin
    remote: Counting objects: 30, done
    remote: Finding sources: 100% (14/14)
    remote: Getting sizes: 100% (17/17)
    remote: Total 14 (delta 10), reused 12 (delta 10)
    Unpacking objects: 100% (14/14), done.
    From https://gerrit.wikimedia.org/r/p/mediawiki/core
       61a26ee..fb1220d  REL1_21    -> origin/REL1_21
       80347b9..431bb0a  master     -> origin/master
    $ git pull
    Updating 61a26ee..fb1220d
    Fast-forward
     includes/actions/HistoryAction.php | 2 +-
     1 file changed, 1 insertion(+), 1 deletion(-)
    

    This doesn’t work on all repos, as not everyone follows the same code practices. One repo I use only has tags, for example. Still, it’s enough to get me fumbling through to success in a way that doesn’t terrify me, since it’s easy to flip back and forth between versions.
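
    The fetch-then-pull cycle is also easy to rehearse locally before trusting it on a live site. Here’s a self-contained demo with two throwaway repos standing in for the remote and my server:

```shell
# Two throwaway repos: "origin" plays the remote, "site" plays my install
tmp=$(mktemp -d)
git init -q "$tmp/origin"
git -C "$tmp/origin" -c user.email=me@example.com -c user.name=me commit -q --allow-empty -m "v1"
git clone -q "$tmp/origin" "$tmp/site"

# A new changeset lands upstream...
git -C "$tmp/origin" -c user.email=me@example.com -c user.name=me commit -q --allow-empty -m "v2"

# ...and the two-command update brings it down
git -C "$tmp/site" fetch -q --all
git -C "$tmp/site" pull -q
git -C "$tmp/site" log -1 --pretty=%s   # prints: v2
```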

    Fine. I’m sold. git’s becoming a badass. The only thing left is to protect myself with .htaccess:

     # SVN and GIT protection
     RewriteRule ^(.*/)?(\.svn|\.git)/ - [F,L]
     ErrorDocument 403 "Access Forbidden"
    

    Now no one can look at my svn or git files.

    And to figure out how to get unrelated instances of git in subfolders to all update (MediaWiki lets you install extensions via git, but then you don’t have a fast or easy way to update them):

    #!/bin/sh
    # Run `git pull` in every directory one level down.
    # The subshell means a failed cd can't strand us in the wrong folder.
    for i in ./*/; do
            ( cd "$i" && git pull )
    done
    

    And I call that via ./git.sh, which lives in /wiki/extensions and works great.

    From here on out, if I wanted to script things, it’s pretty trivial, since it’s a series of simple if/else checks, and I’m off to the races. I still wish every app had a WordPress-esque updater (and plugin installer, hello!), but I feel confident now that I can use git to make my own updates faster.