Half-Elf on Tech

Thoughts From a Professional Lesbian

Author: Ipstenu (Mika Epstein)

  • Personal Version Control With Git

    Personal Version Control With Git

    Git Relationships

    One thing I am personally bad at is version control. Oh, don’t get me wrong, I’m decent at it for code at work, but I still have a tendency to cowboy code. Bad me.

    Part of why is that I don’t want my stuff to be public. At the same time, I don’t want to pay GitHub while I’m learning, and I want a GUI. Why not overcomplicate my life! I’ll add on one more: I want a website where I can look at my changes all pretty like. Last time I did this via SVN, I hated it, never used it, and walked away. This time I decided to use Git, and once I started using it via the command line, I realized how freaking awesome this was. And since I’m using the CLI, I ran git config --global color.ui true to get pretty colors.

    I should note that I use Coda2 by Panic and I love it. But. It doesn’t have an easy way to properly branch and tag in SVN or Git, which is something I’m used to having. And my master plan is this: master will have the ‘live’ real code. The branches are releases. When a branch is ready to go live, I merge it into master.

    So let’s get started.

    Stage One: Install Git

    I followed Git’s official directions for installing Git on my server, so once you’ve got things like that up and running, it’ll be time for the webpage.

    Cart comes after horse, though. I had a hell of a time getting my own SVN crap up and running, but git was way easier. There was far less to mess with during the install, in part because I’d done this dance before. With Git, I don’t need to mess with svnserve or daemons, since my server isn’t ‘the server’ where my code canonically lives; it’s just another repository/clone. It breaks my head sometimes, but this is a case where it’s going to be easy. Install any Git dependencies (on CentOS that meant running yum -y install zlib-devel openssl-devel cpio expat-devel gettext-devel) and then install from the command line:

    cd /usr/local/src
    wget http://git-core.googlecode.com/files/git-1.8.2.2.tar.gz
    tar xvzf git-1.8.2.2.tar.gz
    rm git-1.8.2.2.tar.gz
    cd git-1.8.2.2
    
    ./configure
    make
    make install
    

    Done, and now I’m on 1.8.2.2. Upgrading is as easy as doing the wget again with the new version number.

    Stage Two: Make Repositories

    At this point, the official git directions say you should make a git account, but after some pondering, I decided to put my repos in /home/myuser/gitrepos/ so I can easily add my repositories to my local computer. Everything is locked down via RSA keys anyway, so I can securely connect from my computers, and everything is basic and simple and safe. If I want to make more repositories for other accounts, I can do it on a one-by-one basis. Right now I have no need for group stuff; this is a me repository, after all. If I did, I’d use git’s directions for making a git user instead.

    I made projects for my domains (ipstenu and jfo) and then one for my server (mothra). I hooked into them with Coda2, which is my app of choice, and I tested that I could get and push files. I’ll come back to this in a second.
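    For reference, making one of those repositories looks roughly like this. It’s a minimal sketch; the paths and the ipstenu.git name are just my hypothetical layout, so adjust for yours:

    # On the server: create a bare repository to push to
    mkdir -p /home/myuser/gitrepos/ipstenu.git
    cd /home/myuser/gitrepos/ipstenu.git
    git init --bare

    # On my computer: clone it over SSH (the RSA keys handle the login)
    git clone ssh://myuser@example.com/home/myuser/gitrepos/ipstenu.git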

    Stage Three: Webify it!

    Next up was webifying things. I want to do this, even though I’m putting it behind a password, because I’m visual and being able to see what I’ve got going on is helpful to me. I went with GitList because it looked nice. The drawback to these things is that if I used a shared repository (say /home/gituser/gitrepos/ for example) then I’d have to web-alias to the gituser instead of having it hosted as ‘Ipstenu’ … which is okay. If I was really making a full blown shared group, I’d probably buy gitstenu.org and point it there.

    My GitList

    Awesomesauce! It works! Except for this error: ‘Unnamed repository; edit this file “description” to name the repository.’ That’s easily fixed on the server by editing the description file in each repository.
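    For example (the wording is whatever you want; the path assumes my hypothetical layout from above):

    echo "Ipstenu's obscure shell scripts" > /home/myuser/gitrepos/ipstenu.git/description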

    Those initial repositories are for my non-public files, actually. That’s where I keep copies of the obscure shell scripts I run, stuff that isn’t web accessible. After I got those set up, I made repositories for my themes and my mu-plugins. This gave me folders like ipstenu-mu-plugins and jfogenesis-wp, because I named the theme ‘jfogenesis’ for WordPress, MediaWiki, and ZenPhoto. Still, this let me progress to …

    Stage Four: Control

    The whole reason I did all the rest of that was so I could do this, which seems redundant, but bear with me.

    1. I check out all the code locally, edit it, and push it back up to the repos on my server
    2. On the webserver, I check out the code with a simple git clone to folder2 (trust me)
    3. I switch folder and folder2 (you know how to move things)
    4. I check out the new code with a git pull. If I want to check out a specific version, it’s git checkout -b mysite-REL_1.0 origin/REL_1.0 (it’s worth mentioning that I made a branch for everything at the start as REL_1.0, because reasons). There’s a sketch of the whole dance after this list.
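    Spelled out as commands, that first deploy looks something like this. It’s a minimal sketch with hypothetical paths (folder is the live directory and example.com stands in for my server):

    # Clone the repo next to the live directory
    git clone ssh://myuser@example.com/home/myuser/gitrepos/ipstenu.git folder2
    # Swap the old directory out and the clone in
    mv folder folder-old
    mv folder2 folder
    # From here on, updating is a pull (or a checkout of a specific release)
    cd folder
    git pull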

    And that’s it.

    Stage Five: Updates

    Here’s my current process.

    Locally, I start work on my new version, so I make a new branch: git checkout -b REL_2.0

    Edit my files and add and remove them as needed:

    git rm <filename>
    git add .
    

    Check them in when I’m done with the files: git commit -m "Editing foo and bar, replacing baz."

    When I’m ready for it to be tested, I push it: git push origin REL_2.0

    Everything tests well on the dev server (just pretend there is one)? Okay! Let’s merge!

    git checkout master
    git merge REL_2.0
    git push origin master

    Summary

    Since this is just me running stuff, it’s pretty easy to keep track of everything. Having the website makes it easy for me to see what I last did and what’s changed:

    Diff Screen

    Again, yes, I put the git website behind password protection. I may change this later, if I determine I never put anything secret up there. But right now, this is good for me. After a couple days of using it, I liked it a lot. Enough that, unlike my foray into SVN, I kept using it. It was easy for me to keep my code organized and up to date. If I had to update ‘live’ I could still use branches easily and roll things back painlessly. That’s pretty darn cool.

  • DreamHost Logo ala CSS

    This was done by Clee, my coworker. I tweaked it, prefacing the IDs with dh to make it more portable, and then slapped it into a shortcode so I could embed it here:

    The code is as follows (you can see his original code at codepen):

    <div id="dhbackground">
      <div id="dhmoon">
        <div id="dheclipse">
          <div id="dhchevron">
          </div>
        </div>
      </div>
    </div>
    
    @dhbg: rgba(27, 53, 100, 1);
    @size: 240px;
    @corner: (0.16 * @size);
    @offset: (@corner - 6px);
    @chevron: (0.36 * @size);
    
    div#dhbackground {
      background-color: @dhbg;
      width: @size;
      height: @size;
      border-radius: @corner;
      margin: auto;
    }
    
    div#dhmoon {
      background-color: #fff;
      width: @size - (2 * @corner);
      height: @size - (2 * @corner);
      position: relative;
      top: @corner;
      margin: auto auto;
      border-radius: 50%;
    }
    
    div#dheclipse {
      background-color: @dhbg;
      position: relative;
      width: @size - (2 * @corner);
      height: @size - (2 * @corner);
      top: -@offset;
      left: -@offset;
      border-radius: 50%;
    }
    
    div#dhchevron {
      background-color: rgba(51, 104, 156, 1);
      position: relative;
      width: @chevron;
      height: @chevron;
      left: @offset;
      top: @offset;
      border-radius: @corner 0px (2 * @corner) 0px;
    }
    

    Now, actually, that’s not CSS, it’s LESS, which I can’t inline apparently (or maybe I just don’t know how). Either way, it’s a funky cool CSS trick!
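    If you do want plain CSS to inline somewhere, one option is to compile the LESS first with lessc (the compiler from the LESS project, assuming you have it installed) and paste the output instead. The filenames here are made up:

    lessc dreamhost.less > dreamhost.css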

  • Jetpack Koolaid

    Jetpack Koolaid

    Unicorn with a jetpack
    Credit: Nivole Lorenz 2010. Pencil sketch on copy paper.
    Sometimes I draw unicorns. Sometimes they have jetpacks.
    WordPress Jetpack gets a lot of grief, and for an understandable reason. From some perspectives, it does everything we hate about plugins. But there are reasons and methods behind the madness. I’m going to hit them from my perspective as best I can, without ever considering the ‘Automattic sucks!’ arguments I’ve heard. I’m reasonably sure Automattic is neither trying to be a dick nor evil (egotistical maybe, but not evil). If you go in assuming there has to be a reason for all this, even if you don’t like it, you can understand it a little better.

    Why one big plugin and not 20+ separate ones?

    Actually I’m going to come back to this one, but there’s a reason, so hang on.

    Why do I have to connect to WordPress.com?

    This is a better place to start. You have to connect to WordPress.com because they’re providing a service. Not everything runs on your server, so in order for the modules to work, you have to let your server talk to WordPress.com. That one makes sense to just about everyone, I hope.

    Less obvious is exactly how this benefits you. I’ll give you a really annoying example: Twitter’s API. You may not know, but Twitter throttles API usage based on IP address. So if you’re on a shared server, and everyone uses Twitter on their blogs, you may get your API access cut off, and no Twitter updates show on your site. Bummer! On the other hand, WordPress.com has a gimme from Twitter, letting them post as much as they want (probably not unlimited, but enough for us). This is also true of Highlander (aka the comments plugin), which transmits data between multiple hosts instead of you having to set up OAuth yourself; if you’ve ever tried that, you’ll see why Jetpack Comments are way easier.

    But why do I have to connect to use any of the modules?

    This usually comes up when someone only wants to use one feature. Let’s pick the contact form, which doesn’t need to communicate with WordPress.com to run. The best reason I’ve heard is ‘it’ll make it easier if you decide to turn on the other features later.’ I tossed this around for a while, considering the users I work with every day, and I’ve finally agreed that for the common user, it’s better to have to do one setup, once, and be done.

    After years of doing free support, and now doing this for a living, I know the common user doesn’t have the experience to understand why, if I need to connect to WordPress.com for the stats, I’d also need it for a mobile theme. The problem comes up when the user wants to start with the items that don’t need to connect. Springing the connection on them later is an uphill battle I wish I didn’t have to fight. So yes, it’s sensible for the average user. As Helen said, Jetpack is for users, not developers.

    Anyway, if you know for sure you (or your client) will never want to connect to .com, then you can use the new development mode (as of version 2.2.1) and add define( 'JETPACK_DEV_DEBUG', true); to your wp-config.php file. Done.

    If the .org repository doesn’t let people host marketplace plugins, what’s up with VaultPress?

    VaultPress is selling a service, not a plugin. It’s hairsplitting, but look at Akismet (which also could be a pay-to-use product). It’s ‘free’ but you’re encouraged to pay. If they went pay-only, which I could easily see them doing if they started over, they would be a perfect candidate for Jetpack. Where VaultPress hits the ‘Hey, wait…’ button is when you remember that there is a separate plugin for it. So this is like if they let you sign up for Akismet within Jetpack… Oh, wait, Akismet’s links are in the Jetpack menu now. Still, of all the questionable things Jetpack does, this is actually the only one that really makes me Spock the Eyebrow (yes, that is what the favicon is), because it’s just a little off.

    Kool-Aid

    Why the auto-activate?

    Users won’t know otherwise. If you don’t turn things on, they’ll never see it. I don’t like it, especially when I’m using a plugin that got folded in (see CSS editor or Grunion), but they learned the Grunion lesson! When the CSS editor joined Jetpack and I upgraded, it turned off the old editor. That was smart, and takes away my User-Ipstenu complaint of auto activation. Remember! This is a user plugin. Not a developer one. Calm yourself.

    Why is it one big plugin?

    This needs some history.

    A few years ago, the concept of ‘Canonical Plugins’ (or Core Plugins depending on if you asked a core contributor or anyone else) came up, and it was an idea for stuff that is (or used to be) in core, but wasn’t used all the time. Examples of this would be the content importer plugins which are used once or twice in a site’s lifetime. To quote the original poll and announcement:

    Canonical plugins would be plugins that are community developed (multiple developers, not just one person) and address the most popular functionality requests with superlative execution.

    That was 2009, and here it’s 2013 and we don’t really have any yet. Jetpack certainly isn’t one, though in many ways it hits those ‘popular functionality’ feelers dead center. In a way it is a canonical plugin, but it also clearly illustrates why some of the most popular plugins would be very difficult to pull off without the infrastructure that Automattic already has. So while I wouldn’t call Jetpack a ‘Core Plugin’ (it’s not community developed), it’s sort of a great example of what a core plugin suite would look like and the issues with it.

    Now, why is it a ton of plugins in one? Well, why not? A lot of people hate installing six or seven plugins because “it makes their site slower” (not really), and the way Jetpack does it is remarkably elegant, in that you can turn off the parts you don’t use. The problem I have with Jetpack is its size. It’s big. It’s almost the size of WordPress and at 4.2 megs, it’s slow to install. I find that it’s way easier for me to upgrade via wp-cli, but not everyone has that option. (That said, I don’t have a problem upgrading on my smaller sites when they don’t get a ton of traffic. Upgrading on my busiest site while it’s busy is always stupid and I know it.) The size problem is also a hassle because WordPress doesn’t (yet?) do incremental updates for plugins. When you have a series of upgrades and then security fixes on a large plugin, it’s annoying.
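    For reference, that wp-cli upgrade is a one-liner run from the site’s directory (assuming wp-cli is installed on your server):

    wp plugin update jetpack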

    More likely is the idea that these plugins can share APIs and features if you lump them together, making one big plugin smaller than twenty-odd separate ones.

    What is this dev mode of which you speak?

    Oh it’s neat. Put define( 'JETPACK_DEV_DEBUG', true); into your wp-config.php file and here’s what happens:

    Very Mobile Home

    1. Everything defaults to OFF
    2. You can only activate the following:
      • Carousel
      • Sharing
      • Gravatar Hovercards
      • Contact Form
      • Shortcodes
      • Custom CSS
      • Mobile Theme
      • Extra Sidebar Widgets
      • Infinite Scroll

    The only ones missing that surprised me were LaTeX (I guess it phones home to parse…) and the new Tiled Galleries. Why is that cool? Well, now you don’t need to connect to WordPress.com to run those things!

    Are you drinking the kool-aid?

    Oh. Probably. I honestly like Jetpack. I hate having to set it up for clients (I end up in an Incognito Window, creating a new WP.com account for them, and that whole hassle), but once it’s done, it’s really worthwhile. It does everything I need and while there are parts I don’t need, I’ll live with it. That said, having it set up for people like my father means there’s less I have to worry about with finding plugins for him. Most of what he needs is right there in Jetpack.

    I hate Jetpack, I’m never going to use it!

    Okay. Don’t use it.

    I’m not defending its use for all cases. If Jetpack doesn’t fit what you need, don’t use it! That’s totally fine. I just hate reinventing wheels. There are always alternatives to what Jetpack’s got (Contact Form 7, Google Analytics, and so on), so you can use any tool you like. There are pros and cons with everything, and it’s up to you to decide where your own break point is.

  • World Time Event Shortcode

    World Time Event Shortcode

    I had a need. A need for timezones. I happened to post about an event happening at 2pm ET, under the mistaken assumption that people knew how to convert timezones, or would at least go to a website for it. Instead, after a weekend full of emails, and me snapping at people for being undereducated (don’t schools teach this stuff anymore?), I linked them to a page.

    Then I thought “Well heck, they won’t click that. Why not embed it?” Alas, TimeAndDate.com doesn’t let you embed, but worldtimebuddy.com does. With a hat tip to Rarst for the link, I pulled this shortcode together:

    // World Time Buddy Event Time
    // Usage: [eventtime time="14:00" length="1" date="2/20/2013" tz="ET"]
    function eventtime_func( $atts ) {
    	extract( shortcode_atts( array(
    		'time' => '14:00',
    		'length' => '1',
    		'date' => '2/20/2013',
    		'tz' => 'ET',
    	), $atts ) );
    	
    	if ( $tz == 'ET' ) $timezone = '5128581';
    	elseif ( $tz == 'PT' ) $timezone = '5368361';
    	else $timezone = '5128581'; // Fall back to ET if the timezone isn't recognized.

    	return '<span class="wtb-ew-v1" style="width: 560px; display:inline-block;padding-bottom:5px;"><script src="http://www.worldtimebuddy.com/event_widget.js?h='.$timezone.'&amp;md='.$date.'&amp;mt='.$time.'&amp;ml='.$length.'&amp;sts=0&amp;sln=0&amp;wt=ew-lt"></script><i><a target="_blank" href="http://www.worldtimebuddy.com/">Time converter</a> at worldtimebuddy.com</i><noscript><a href="http://www.worldtimebuddy.com/">Time converter</a> at worldtimebuddy.com</noscript><script>window[wtb_event_widgets.pop()].init()</script></span>';
    }
    add_shortcode( 'eventtime', 'eventtime_func' );
    

    I can’t host this as a plugin in the repo because it has a powered-by link, and while I could remove it, I won’t. If the link were put in by the script itself and not my return, it’d be fine for the repo, but since I’m only going to use this a few times, I’m leaving it be.

  • Deploying With Git

    Deploying With Git

    I’ve been banging my head on this for a while. It really did take me a year and reading lots of things to begin to understand that I was totally wrong. As with many things, I have to sit down and use them for a while to understand what I’m doing wrong and what I need to learn. I finally had my git breakthrough. It’s very possible (no, likely) that I got some of this wrong, but I feel like I now understand more about git and how it should be used, and that makes me more confident in what I’m doing with it.

    Speaking as a non-developer (hey, sometimes I am!), I just want a command line upgrade for my stuff. This code also lacks a WordPress-esque click-to-upgrade, so I have to do a four-step tango of download, unpack, copy, delete in order to upgrade. (By the way, more software should have one-click upgrades like that; it would make life easier for everyone. I do know that the backend support is non-trivial, so I would love to see a third party act as a deployment hub, much like GitHub is a repository hub.) The more steps I have, the more apt I am to make an error. So in the interests of reducing my errors and my overhead, I wanted to find a faster and safer way to deploy. (My previous job was all about deployment. We had a lot of complicated scripts to take our code, compile it, compress it, move it to a staging site, and then email that it was ready. From there, we had more scripts to ‘move to test’ and ‘move to prod’, which made sense.)

    I already think that automating and simplifying deployment is good, and all I want to do is get one version, the ‘good’ version, of the code and be able to easily update it. One or two lines is best. Simple, reliable, and easy to use. That’s what I want.

    Recently, Ryan Hellyer pointed out git archive, which he claims is faster than clone. I’d believe it if I could get it to work. When I tried using HTTPS, I got this: fatal: Operation not supported by protocol. So I tried using SSH and got Could not resolve hostname… instead. Basically, I had all these problems. Turns out GitHub turned off git archive --remote, so I’m dead in the water there for any code hosted there, which is most of my code.
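    For reference, this is the sort of invocation that no longer works against GitHub (the URL here is illustrative); it exports a tarball of the tree without any .git history, which is why it’s appealing:

    git archive --remote=git://example.com/project.git --format=tar HEAD | tar -x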

    I kicked various permutations of this around for a couple of afternoons before finally throwing my hands up, yet again, and looking into something else, including Capistrano, which Mark Jaquith uses in WP Stack. It’s something I’m personally interested in for work-related reasons. Capistrano is a Ruby app, and vulnerability fears aside, it’s not very user friendly. At my old job we used Ant a lot to deploy, though there don’t seem to be Ant tasks for Git yet. The problem with both of those is that they require you to pull down the whole hunk ‘o code, and I’m trying to avoid that in this use case. Keep it simple, stupid. Adding more layers of code and complication onto a project that doesn’t need it is bad.

    Finally I went back to git and re-read how the whole distributed deployment works. I know how to clone a repository, which essentially gets me ‘trunk.’ And I know that a pull does a fetch followed by a merge, in case I’d done any edits, and it preserves my edits. Hence merge, and why I dig it for dev. At length it occurred to me that what I wanted was to check out the git repo without downloading the code at first. Well, I know how to do that:

    $ git clone --no-hardlinks --no-checkout https://github.com/wp-cli/wp-cli.git wp-cli
    Cloning into 'wp-cli'...
    remote: Counting objects: 10464, done.
    remote: Compressing objects: 100% (3896/3896), done.
    remote: Total 10464 (delta 6635), reused 10265 (delta 6471)
    Receiving objects: 100% (10464/10464), 1.20 MiB | 1.04 MiB/s, done.
    Resolving deltas: 100% (6635/6635), done.
    

    That brings down just a .git folder, which is small. And from there, I know how to get a list of tags:

    $ git tag -l
    v0.3.0
    [...]
    v0.8.0
    v0.9.0
    

    And now I can check out version 0.8.0!

    $ git checkout v0.8.0
    Note: checking out 'v0.8.0'.
    
    You are in 'detached HEAD' state. You can look around, make experimental
    changes and commit them, and you can discard any commits you make in this
    state without impacting any branches by performing another checkout.
    
    If you want to create a new branch to retain commits you create, you may
    do so (now or later) by using -b with the checkout command again. Example:
    
      git checkout -b new_branch_name
    
    HEAD is now at 8acc57d... set version to 0.8.0
    

    Well damn it, that was simple. But I don’t want to be in a detached HEAD state, as that makes it a little weird to update. I mean, I could do it with a switch back to master, a pull, and a checkout again, but then I thought about local branches. Even though I’m never making changes to core code (ever), let’s be smart.

    Linus Torvalds flipping Nvidia the bird
    One codebase I use has the master branch as their current version, which is cool. Then there’s a 1.4.5 branch where they’re working on everything new, so when a new version comes out, I can git pull and be done. (In this moment, I kind of started to get how you should be using git. In SVN, trunk is where you develop, and you check finished versions into tags (for WordPress at least). In git, you make your own branch, develop there, and merge back into master when you’re ready to release. Commence head-desking.)

    One conundrum was that there are tags and branches, and people use them as they see fit. While some of the code I use defines branches so I can check out a branch, others just use tags, which are tree-ish. Thankfully, you can make your own branch off a tag, which is what I did.

    I tried it again with a non-GitHub slice of code: MediaWiki. (MediaWiki, compared to wp-cli, is huge, and took a while to run on my laptop. It was a lot faster on my server: 419,236 objects vs 10,464. I’m just saying someone needs to rethink the whole ‘Clone is faster!’ argument, since it’s slow now, or slow later, when downloading large files. Large files is large.) Now we have a new issue. MediaWiki’s .git folder is 228.96 MiB… Interestingly, my MediaWiki install is about 155 MiB in and of itself, and diskspace is cheap. If it’s not, you’ve got the wrong host. Still, it’s a drawback and I’m not really fond of it. Running a repack makes it a little smaller. Running garbage collection made it way smaller, but it’s not recommended. This, however, is recommended:

    git repack -a -d --depth=1 --window=1

    It doesn’t make it super small, but hey, it worked.

    Speaking of worked, since the whole process worked twice, I decided to move one of my installs (after making a backup!) over to this new workflow. This was a little odd, but for MediaWiki it went like this:

    git clone --no-hardlinks --no-checkout https://gerrit.wikimedia.org/r/p/mediawiki/core.git wiki2
    mv wiki2/.git wiki/
    rmdir wiki2
    cd wiki
    git reset --hard HEAD
    

    Now we’re cloning the repo, moving our files, and resetting where HEAD is, and I’m ready to set up my install to use the latest tag. This time I’m going to make a branch (mysite-1.20.3) based on the tag (1.20.3):

    git checkout -b mysite-1.20.3 1.20.3
    

    And this works great.

    The drawback to pulling a specific tag is that when I want to update to a new tag (1.20.4, let’s say), I have to fetch everything and then check out the new tag in order to pull down the files. Now, unlike SVN, I’m not making a full copy of my base code with every branch or tag; it’s all handled inside the .git folder, so there’s no harm keeping these older versions around. If I want to delete one, it’s a simple git branch -D mysite-1.20.3 call and I’m done. No code changes (save themes and .htaccess), no merging needed. And if there’s a problem, I can switch back really fast to the old version with git checkout mysite-1.20.3. The annoyance is that I just want to stay on the 1.20 branch, don’t I? Update the minors as they come, just like the WP minor-release updater only updates changed files.
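    That tag-hopping update looks something like this, as a sketch (assuming upstream has published a 1.20.4 tag):

    git fetch --all --tags
    git checkout -b mysite-1.20.4 1.20.4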

    Thus, I asked myself if there was a better way and, in the case of MediaWiki, there is! In the world of doing_it_right(), MediaWiki has both branches and tags (so does WP, if you look at trac), and they use branches named ‘REL’. If you’re not sure what branches your repo uses, type git remote show origin and it will list everything. There I see REL1_20, and since I’m using version 1.20.3 here, I surmised that I can actually do this instead:

    git checkout -b mysite-REL1_20 origin/REL1_20 
    

    This checks out my branch and says “This branch follows along with REL1_20.” so when I want to update my branch it’s two commands:

    git fetch --all
    git pull
    

    The fetch downloads the changesets and the pull applies them. It looks like this in the real world (where I’m using REL1_21, since I wanted to test some functionality on the alpha version):

    $ git fetch --all
    Fetching origin
    remote: Counting objects: 30, done
    remote: Finding sources: 100% (14/14)
    remote: Getting sizes: 100% (17/17)
    remote: Total 14 (delta 10), reused 12 (delta 10)
    Unpacking objects: 100% (14/14), done.
    From https://gerrit.wikimedia.org/r/p/mediawiki/core
       61a26ee..fb1220d  REL1_21    -> origin/REL1_21
       80347b9..431bb0a  master     -> origin/master
    $ git pull
    Updating 61a26ee..fb1220d
    Fast-forward
     includes/actions/HistoryAction.php | 2 +-
     1 file changed, 1 insertion(+), 1 deletion(-)
    

    This doesn’t work on all repos, as not everyone follows the same code practices; one repo I use only has tags, for example. Still, it’s enough to get me fumbling through to success in a way that doesn’t terrify me, since it’s easy to flip back and forth between versions.

    Fine. I’m sold. git’s becoming a badass. The only thing left is to protect myself with .htaccess:

     # SVN and GIT protection
     RewriteRule ^(.*/)?(\.svn|\.git)/ - [F,L]
     ErrorDocument 403 "Access Forbidden"
    

    Now no one can look at my svn or git files.

    And then I had to figure out how to get unrelated instances of git in subfolders to all update (MediaWiki lets you install extensions via git, but then you don’t have a fast/easy way to update them all…):

    #!/bin/sh
    # Loop over each extension checkout in this folder and pull the latest changes.
    for i in `find ./ -maxdepth 1 -mindepth 1 -type d`; do
            cd "$i"
            git pull
            cd ../
    done
    

    And I call that via ./git.sh which lives in /wiki/extensions and works great.

    From here on out, if I wanted to script things, it’s pretty trivial, since it’s a series of simple if/else checks, and I’m off to the races. I still wish every app had a WordPress-esque updater (and plugin installer, hello!), but I feel confident now that I can use git to get to where my own updates are faster.

  • Two Factor Authentication

    Two Factor Authentication

    This is something that Tony Perez and Sam “Otto” Wood both recommend, so you know I have to look at it seriously!

    I think I need to point out that I’m willing to accept that I’m wrong about things. After all, I can’t know everything, and I am well aware of that. But one of the things I work hard to do is learn, adapt, grow, and get better at all this. The whole reason I started talking about tech on this site was that I was trying to understand cloud hosting back in August of 2010. (A lot of tech posts were ported over from Ipstenu.org after the fact.)

    The point is I do this site because I want to learn, and when I learn, even if I don’t understand all of a thing, I want to share what I’ve learned specifically because I know people will come and correct me. Next to answering people’s questions, this is the fastest way I know of to really understand things.

    So.

    I didn’t mention Two Factor Authentication in my security post. Using it certainly would have mitigated the brute-force attack, though not the DDoS implications of it, and that remains why I am a fan of ModSecurity. That doesn’t mean I didn’t just add another tool to my arsenal, or that I’m not willing to try something out.

    I am now using Two Factor Authentication.

    Two-factor authentication (aka multi-factor authentication, or TFA, T-FA, or 2FA) is a way to verify your authenticity by providing two (or more) of the following factors:

    1. Something the user has – aka a possession factor
    2. Something the user knows – aka a knowledge factor
    3. Something the user is – aka an inherence factor

    For most of us, we authenticate only via knowledge – that would be your standard username and password. You “know” your password, thus you pass the knowledge factor. A PIN (like for your bank card) is the same thing. This is simple, it’s easy, and most of us can remember a password.

    Something you have is easy to explain if you’ve ever worked for a company and had an RSA ID or a keyfob with a randomly generated string. That’s the possession factor at work. In fact, your bank card (again!) is one of these too! It’s something else, something physical, that you must have to prove you are actually you.

    Inherence factors are things like biometrics, so a fingerprint or retina scan. That’s all you need to know about that. Arguably it’s something you have, but it’s a part of you, something you always have with you, so it’s inherent or innate to your very person. Latin. You’re welcome.

    It’s pretty obvious that a strong password only goes so far. If I can’t log into my laptop without a USB keyfob, then my laptop, and by extension my site, is that much more secure. This is better than the picture-and-keyphrase scheme a lot of banks use right now, but it’s also harder. It’s very easy for a company to have you pick a photo, a sentence, and a password, and make you verify them when you log in. But to instead make sure you have a specific device with you that verifies who you are, and that you’re you in this very second?

    How, exactly, they work depends on which method you’re using. There are myriad different possession factors you could use, and how each one works is a little different. But we like multiple factors because if you needed (say) my retina scan, a password, and a titanium ring to log in, plus another person with those same three items, then I’ve just described the plot of Charlie’s Angels: Full Throttle. I’ve also described a pretty tough nut to crack if you’re not Drew Barrymore.

    The issue with these methods is they’re not (yet) practical for the common man, and that’s really a large part of why I don’t like TFA very much.

    The knowledge factor is the easiest to hack. We’ve seen that. That’s the whole reason we want to use two or more factors to authenticate; I’m not arguing that. The possession factor is the easiest to break (lose your keyfob or be out of cell phone range), and unless there’s some backup to let me in when I don’t have the second factor, I’m SOL in a lot of ways. Of course, once you have a backup method, then that’s vulnerable too. The inherence factor is the least reliable so far and the hardest to implement correctly. There’s a whole Mythbusters episode on how easy it is to make a fake fingerprint. It’s not that this is easy to hack; it’s that it’s hard to protect.

    Okay, so what should we do?

    The Google Authenticator Plugin for WordPress comes recommended by my man Otto and I know I’m not Google’s biggest fan, but this is one instance where I think they did it right.

    The plugin uses open source code for Google Authenticator, which is not something Google really invented so much as perfected. In fact, my old keyfob at work did the same thing.

    Here’s how it works. The site you visit generates a string of characters called your Secret Key. This key can be a string (like hE337tusCFxE) or a QR code embedding all the information from your site (like the site name and so on). You enter the data into the app on your phone, and the app uses the secret string plus the date and current time to generate another random-looking number string, which you use when you log into the site.

    It’s like a password that always changes, and since your phone and your (say) blog both have clocks running, they know what time it is, do the math at login, and off you go. So yes, this will work if you’ve got no cell reception. But no, it won’t work if you’ve lost your phone (which remains an issue for me). Since each site has a unique key and time is always changing, the code is never the same twice. No two users or sites will have the same key either. There’s more math to it, and you can read what Otto commented about it.
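    If you want to see the time-based part in action from the command line, oathtool (from the oath-toolkit package) generates the same kind of code the phone app does. A minimal sketch, with a made-up base32 secret rather than a real one:

    oathtool --totp --base32 "JBSWY3DPEHPK3PXP"

    Run it twice, thirty seconds apart, and you get two different codes. That’s the whole point.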

    Now to log in to my blog I need the username and password, plus a random number I can only get at if I have my cellphone and know the passcode there too. In my case, if I lose my phone, I can’t get into my site. This is, most of the time, okay. If I’m on a strange computer, I need the phone anyway to get the password out of 1Password, and I tend not to log on when I’m not on my own computer or my iPad (which requires the use of an app password, less secure all around, but needed).

    To me, it’s not risk versus reliability, or even risk versus vulnerability. It’s risk versus risk. So far, the risk of losing my phone is less than the risk of what happens if I lose my website. After all, my website is my life.