Half-Elf on Tech

Thoughts From a Professional Lesbian

Category: How To

  • Personal Version Control With Git

    Git Relationships

    One thing I am personally bad at is version control. Oh, don’t get me wrong, I’m decent at it for code at work, but I still have a tendency to cowboy code. Bad me.

    Part of why is that I don’t want to have my stuff public. At the same time, I don’t want to pay Github while I’m learning, and I want a GUI. Why not overcomplicate my life! I’ll add on one more, I want a website where I can look at my changes all pretty like. Last time I did this via SVN, hated it, never used it, and walked away. This time I decided to use Git, and once I started using it via the command line, I realized how freaking awesome this was. And since I’m using CLI, I ran git config --global color.ui true to get pretty colors.

    I should note that I use Coda2 by Panic and I love it. But. It doesn’t have an easy way to properly branch and tag in SVN or Git, which is something I’m used to having. And my master plan is this. Master will have the ‘live’ real code. The branches are releases. When a branch is ready to go live, I merge it with master.

    So let’s get started.

    Stage One: Install Git

    I followed Git’s official directions for installing Git on my server, so once you’ve got things like that up and running, it’ll be time for the webpage.

    Cart comes after horse, though. I had a hell of a time getting my own SVN crap up and running, but Git was way easier. There was far less to mess with during the install, since I’d already done most of it before. With Git, I don’t need to mess with svnserve or daemons, since my server isn’t really ‘the server’ where my code is stored; it’s just another repository/clone. It breaks my head sometimes, but this is a case where it’s going to be easy. Install any Git dependencies (on CentOS that means I ran yum -y install zlib-devel openssl-devel cpio expat-devel gettext-devel) and then install from the command line:

    cd /usr/local/src
    wget http://git-core.googlecode.com/files/git-1.8.2.2.tar.gz
    tar xvzf git-1.8.2.2.tar.gz
    rm git-1.8.2.2.tar.gz
    cd git-1.8.2.2
    
    ./configure
    make
    make install
    

    Done, and now I’m on 1.8.2.2. Upgrading is as easy as doing the wget again.

    Stage Two: Make Repositories

    At this point, the official git directions say you should make a git account, but after some pondering, I decided to put my repos in /home/myuser/gitrepos/ so I can easily add my repositories to my local computer. Everything is locked down via RSA keys anyway, so I can securely connect from my computers and everything is basic and simple and safe. If I want to make more repositories for other accounts, I can do it on a one-by-one basis. Right now I have no need for group stuff; this is a me repository, after all. If I did, I’d use git’s directions for making a git user instead.

    I made projects for my domains (ipstenu and jfo) and then one for my server (mothra). I hooked into them with Coda2, which is my app of choice, and I tested that I could get and push files. I’ll come back to this in a second.
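
    Coda2 does the hooking-up for me, but from the plain command line the same setup is roughly this (the hostname is a stand-in for my server, and the repo name is just one of mine):

    # On the server: one bare repository per project
    mkdir -p /home/myuser/gitrepos
    git init --bare /home/myuser/gitrepos/ipstenu.git

    # On my local machine: clone it over SSH (the RSA keys handle the auth)
    git clone ssh://myuser@example.com/home/myuser/gitrepos/ipstenu.git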

    Stage Three: Webify it!

    Next up was webifying things. I want to do this, even though I’m putting it behind a password, because I’m visual and being able to see what I’ve got going on is helpful to me. I went with GitList because it looked nice. The drawback to these things is that if I used a shared repository (say /home/gituser/gitrepos/ for example) then I’d have to web-alias to the gituser instead of having it hosted as ‘Ipstenu’ … which is okay. If I was really making a full blown shared group, I’d probably buy gitstenu.org and point it there.

    My Gitlist

    Awesomesauce! It works! Except for this error Unnamed repository; edit this file ‘description’ to name the repository. That’s easily fixed on the server by editing the description file.
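
    Something like this does it (the repo path and the wording are just my setup):

    echo "Ipstenu site code" > /home/myuser/gitrepos/ipstenu.git/description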

    Those initial repositories are for my non-public files, actually. That’s where I keep copies of the obscure shell scripts I run, stuff that isn’t web accessible. After I got those set up, I made repositories for my themes and my mu-plugins. This gave me folders like ipstenu-mu-plugins and jfogenesis-wp because I named the theme ‘jfogenesis’ for WordPress, MediaWiki, and ZenPhoto. Still this let me progress to …

    Stage Four: Control

    The whole reason I did all the rest of that was so I could do this, which seems redundant, but bear with me.

    1. I check out all the code locally, edit it, and push it back up to the repos on my server
    2. On the webserver, I check out the code with a simple git clone to folder2 (trust me)
    3. I switch folder and folder2 (you know how to move things)
    4. I check out the new code with a git pull. If I want a specific version, it’s git checkout -b mysite-REL_1.0 origin/REL_1.0 (It’s worth mentioning that I made a branch for everything at the start as REL_1.0 because reasons.)

    And that’s it.
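
    Spelled out as commands on the webserver, that swap looks more or less like this (the folder names are placeholders for wherever the site actually lives):

    # 1 & 2: clone the repo next to the live folder
    git clone ssh://myuser@example.com/home/myuser/gitrepos/ipstenu.git folder2

    # 3: swap the live folder and the fresh clone
    mv folder folder-old
    mv folder2 folder

    # 4: from here on, updating is a pull (or a checkout of a specific release branch)
    cd folder
    git pull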

    Stage Five: Updates

    Here’s my current process.

    Locally, I start work on my new version so I make a new branch: git checkout -b REL_2.0

    Edit my files and add and remove them as needed:

    git rm <filename>
    git add .
    

    Check them in when I’m done with the files: git commit -m "Editing foo and bar, replacing baz."

    When I’m ready with it to be tested, I push it: git push origin REL_2.0

    Everything tests well on the dev server? (Just pretend there is one.) Okay! Let’s merge!

    git checkout master
    git merge REL_2.0
    git push origin master
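
    If I wanted an actual tag to mark the release too (the thing Coda2 won’t do for me), it would be something along these lines; the tag name is just my own convention:

    git tag -a v2.0 -m "Release 2.0"
    git push origin v2.0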

    Summary

    Since this is just me running stuff, it’s pretty easy to keep track of everything. Having the website makes it easy for me to see what I last did and what’s changed:

    Diff Screen

    Again, yes, I put the git website behind password protection. I may change this later, if I determine I never put anything secret up there. But right now, this is good for me. After a couple days of using it, I liked it a lot. Enough that, unlike my foray into SVN, I kept using it. It was easy for me to keep my code organized and up to date. If I had to update ‘live’ I could still use branches easily and roll things back painlessly. That’s pretty darn cool.

  • DreamHost Logo ala CSS

    This was done by Clee, my coworker. I tweaked it, prefacing things with dh to make it more portable and then slapping it into a shortcode so I could embed it here:

    The code is as follows (you can see his original code at codepen):

    <div id="dhbackground">
      <div id="dhmoon">
        <div id="dheclipse">
          <div id="dhchevron">
          </div>
        </div>
      </div>
    </div>
    
    @dhbg: rgba(27, 53, 100, 1);
    @size: 240px;
    @corner: (0.16 * @size);
    @offset: (@corner - 6px);
    @chevron: (0.36 * @size);
    
    div#dhbackground {
      background-color: @dhbg;
      width: @size;
      height: @size;
      border-radius: @corner;
      margin: auto;
    }
    
    div#dhmoon {
      background-color: #fff;
      width: @size - (2 * @corner);
      height: @size - (2 * @corner);
      position: relative;
      top: @corner;
      margin: auto auto;
      border-radius: 50%;
    }
    
    div#dheclipse {
      background-color: @dhbg;
      position: relative;
      width: @size - (2 * @corner);
      height: @size - (2 * @corner);
      top: -@offset;
      left: -@offset;
      border-radius: 50%;
    }
    
    div#dhchevron {
      background-color: rgba(51, 104, 156, 1);
      position: relative;
      width: @chevron;
      height: @chevron;
      left: @offset;
      top: @offset;
      border-radius: @corner 0px (2 * @corner) 0px;
    }
    

    Now actually that’s not CSS, it’s LESS, which I can’t inline apparently (or maybe I just don’t know how). Either way, it’s a funky cool CSS trick!
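
    If I ever want it as plain CSS that can be inlined, running it through the lessc compiler (from the LESS command line tools) would do it; the file names here are just examples:

    lessc dhlogo.less dhlogo.css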

  • World Time Event Shortcode

    I had a need. A need for timezones. I happened to post about an event happening at 2pm ET, under the mistaken assumption that people knew how to convert timezones, or at least go to a website for it. Instead, after a weekend full of emails and me snapping at them for being undereducated (don’t schools teach this stuff anymore?), I linked them to a page.

    Then I thought “Well heck, they won’t click that. Why not embed it?” Alas, TimeAndDate.com doesn’t let you embed, but worldtimebuddy.com does. With a hat tip to Rarst for the link, I pulled this shortcode out:

    // World Time Buddy Event Time
    // Usage: [eventtime time="14:00" length="1" date="2/20/2013" tz="ET"]
    function eventtime_func( $atts ) {
    	extract( shortcode_atts( array(
    		'time' => '14:00',
    		'length' => '1',
    		'date' => '2/20/2013',
    		'tz' => 'ET',
    	), $atts ) );
    	
    	// World Time Buddy location IDs; default to Eastern if the tz isn't one we know
    	$timezone = '5128581';
    	if ( $tz == 'ET') $timezone = '5128581';
    	if ( $tz == 'PT') $timezone = '5368361';

    	return '<span class="wtb-ew-v1" style="width: 560px; display:inline-block;padding-bottom:5px;"><script src="http://www.worldtimebuddy.com/event_widget.js?h='.$timezone.'&amp;md='.$date.'&amp;mt='.$time.'&amp;ml='.$length.'&amp;sts=0&amp;sln=0&amp;wt=ew-lt"></script><i><a target="_blank" href="http://www.worldtimebuddy.com/">Time converter</a> at worldtimebuddy.com</i><noscript><a href="http://www.worldtimebuddy.com/">Time converter</a> at worldtimebuddy.com</noscript><script>window[wtb_event_widgets.pop()].init()</script></span>';
    }
    add_shortcode( 'eventtime', 'eventtime_func' );
    

    I can’t make this a plugin because it has a powered-by link, and while I could remove it, I won’t. If the link was put in by the script itself and not my return, it’d be fine for the repo, but since I’m only going to use this a few times, I’m leaving it be.

  • Deploying With Git

    I’ve been banging my head on this for a while. It really did take me a year and reading lots of things to begin to understand that I was totally wrong. As with many things, I have to sit down and use them for a while to understand what I’m doing wrong, and what I need to learn. I finally had my git breakthrough. It’s very possible (no, likely) that I got some of this wrong, but I feel like I now understand more about git and how it should be used, and that made me more confident in what I’m doing with it.

    Speaking as a non-developer (hey, sometimes I am!), I just want a command line upgrade for my stuff. The code I’m deploying also lacks a WordPress-esque click-to-upgrade, so I have to do a four-step tango of download, unpack, copy, delete in order to upgrade. (By the way, more software should have one-click upgrades like that; it would make life easier for everyone. I do know that the backend support is non-trivial, so I would love to see a third party act as a deployment hub, much like GitHub is a repository hub.) The more steps I have, the more apt I am to make an error. So in the interests of reducing my errors and my overhead, I wanted to find a faster and safer way to deploy. (My previous job was all about deployment. We had a lot of complicated scripts to take our code, compile it, compress it, move it to a staging site, and then email that it was ready. From there, we had more scripts to ‘move to test’ and ‘move to prod’, which made sense.)

    I already think that automating and simplifying deployment is good, and all I want to do is get one version, the ‘good’ version, of the code and be able to easily update it. One or two lines is best. Simple, reliable, and easy to use. That’s what I want.

    Recently, Ryan Hellyer pointed out git archive, which he claims is faster than clone. I’d believe it if I could get it to work. When I tried using HTTPS, I got fatal: Operation not supported by protocol. So I tried using SSH and got Could not resolve hostname… instead. Basically I had all these problems. Turns out GitHub turned off git archive --remote, so I’m dead in the water there for any code hosted there, which is most of my code.

    I kicked various permutations of this around for a couple afternoons before finally throwing my hands up, yet again, and looking into something else, including Capistrano, which Mark Jaquith uses in WP Stack. It’s something I’m personally interested in for work related reasons. Capistrano is a Ruby app, and vulnerability fears aside, it’s not very user friendly. At my old job, we used ant a lot to deploy, though there don’t seem to be ant tasks yet for Git. The problem with both of those is that they require you to pull down the whole hunk ‘o code and I’m trying to avoid that in this use case. Keep it simple, stupid. Adding more layers of code and complication onto a project that doesn’t need it is bad.

    Finally I went back to git and re-read how the whole distributed deployment works. I know how to clone a repository, which essentially gets me ‘trunk.’ And I know that a pull does a fetch followed by a merge, in case I’d done any edits, and it saves my edits. Hence merge, and why I dig it for dev. At length it occurred to me that what I wanted was to check out the git repo without downloading the code at first. Well I know how to do that:

    $ git clone --no-hardlinks --no-checkout https://github.com/wp-cli/wp-cli.git wp-cli
    Cloning into 'wp-cli'...
    remote: Counting objects: 10464, done.
    remote: Compressing objects: 100% (3896/3896), done.
    remote: Total 10464 (delta 6635), reused 10265 (delta 6471)
    Receiving objects: 100% (10464/10464), 1.20 MiB | 1.04 MiB/s, done.
    Resolving deltas: 100% (6635/6635), done.
    

    That brings down just a .git folder, which is small. And from there, I know how to get a list of tags:

    $ git tag -l
    v0.3.0
    [...]
    v0.8.0
    v0.9.0
    

    And now I can check out version 0.8!

    $ git checkout v0.8.0
    Note: checking out 'v0.8.0'.
    
    You are in 'detached HEAD' state. You can look around, make experimental
    changes and commit them, and you can discard any commits you make in this
    state without impacting any branches by performing another checkout.
    
    If you want to create a new branch to retain commits you create, you may
    do so (now or later) by using -b with the checkout command again. Example:
    
      git checkout -b new_branch_name
    
    HEAD is now at 8acc57d... set version to 0.8.0
    

    Well damn it, that was simple. But I don’t want to be in a detached head state, as that means it’s a little weird to update. I mean, I could do it with a switch back to master, a pull, and a checkout again, but then I thought about local branches. Even though I’m never making changes to core code (ever), let’s be smart.

    Linus Torvalds flipping Nvidia the bird
    One codebase I use has the master branch as its current version, which is cool. Then there’s a 1.4.5 branch where they’re working on everything new, so when a new version comes out, I can git pull and be done. (In that moment, I kind of started to get how you should be using git. In SVN, trunk is where you develop and you check into tags (for WordPress at least) to push finished versions. In git, you make your own branch, develop there, and merge back into master when you’re ready to release. Commence head desking.)

    One conundrum was that there are tags and branches, and people use them as they see fit. While some of the code I use defines branches so I can check out a branch, others just use tags, which are treeish. Thankfully, you can make your own branch off a tag, which is what I did.
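
    Making that branch off a tag is one line; the tag here is from the wp-cli example above, and the branch name is just what I’d call mine:

    git checkout -b mysite-v0.8.0 v0.8.0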

    I tried it again with a non-GitHub slice of code: MediaWiki. (MediaWiki, compared to wp-cli, is huge, and took a while to run on my laptop. It was a lot faster on my server. 419,236 objects vs 10,464. I’m just saying someone needs to rethink the whole ‘Clone is faster!’ argument, since it’s slow now, or slow later, when downloading large files. Large files is large.) Now we have a new issue. MediaWiki’s .git folder is 228.96 MiB… Interestingly, my MediaWiki install is about 155 MiB in and of itself, and disk space is cheap. If it’s not, you’ve got the wrong host. Still, it’s a drawback and I’m not really fond of it. Running repack makes it a little smaller. Running garbage collection makes it way smaller, but it’s not recommended. This, however, is recommended:

    git repack -a -d --depth=1 --window=1

    It doesn’t make it super small, but hey, it worked.

    Speaking of worked, since the whole process worked twice, I decided to move one of my installs (after making a backup!) over to this new workflow. This was a little odd, but for MediaWiki it went like this:

    git clone --no-hardlinks --no-checkout https://gerrit.wikimedia.org/r/p/mediawiki/core.git wiki2
    mv wiki2/.git wiki/
    rmdir wiki2
    cd wiki
    git reset --hard HEAD
    

    Now we’re cloning the repo, moving the .git folder into the existing install, and resetting the working copy to match HEAD, and I’m ready to set up my install to use the latest tag. This time I’m going to make a branch (mysite-1.20.3) based on the tag (1.20.3):

    git checkout -b mysite-1.20.3 1.20.3
    

    And this works great.

    The drawback to pulling a specific tag is that when I want to update to a new tag (1.20.4 let’s say), I have to update everything and then checkout the new tag in order to pull down the files. Now, unlike svn, I’m not making a full copy of my base code with every branch or tag, it’s all handled by head files, so there’s no harm keeping these older versions. If I want to delete them, it’s a simple git branch -D mysite-1.20.3 call and I’m done. No code changes (save themes and .htaccess), no merging needed. And if there’s a problem, I can switch back really fast to the old version with git checkout mysite-1.20.3. The annoyance is that I just want to stay on the 1.20 branch, don’t I? Update the minors as they come, just like the WP minor-release updater only updates changed files.
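
    In practice, hopping from tag to tag looks something like this (with 1.20.4 as the hypothetical next release):

    git fetch --all
    git checkout -b mysite-1.20.4 1.20.4

    # once the new version checks out fine, the old branch can go
    git branch -D mysite-1.20.3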

    Thus, I asked myself if there was a better way and, in the case of MediaWiki, there is! In the world of doing_it_right(), MediaWiki has branches and tags (so does WP, if you look at trac), and they use branches called ‘REL’. If you’re not sure what branches your repo uses, type git remote show origin and it will list everything. There I see REL1_20, and since I’m using version 1.20.3 here, I surmised that I can actually do this instead:

    git checkout -b mysite-REL1_20 origin/REL1_20 
    

    This checks out my branch and says “This branch follows along with REL1_20.” so when I want to update my branch it’s two commands:

    git fetch --all
    git pull
    

    The fetch downloads the changesets and the pull applies them. It looks like this in the real world (where I’m using REL1_21, since I wanted to test some functionality on the alpha version):

    $ git fetch --all
    Fetching origin
    remote: Counting objects: 30, done
    remote: Finding sources: 100% (14/14)
    remote: Getting sizes: 100% (17/17)
    remote: Total 14 (delta 10), reused 12 (delta 10)
    Unpacking objects: 100% (14/14), done.
    From https://gerrit.wikimedia.org/r/p/mediawiki/core
       61a26ee..fb1220d  REL1_21    -> origin/REL1_21
       80347b9..431bb0a  master     -> origin/master
    $ git pull
    Updating 61a26ee..fb1220d
    Fast-forward
     includes/actions/HistoryAction.php | 2 +-
     1 file changed, 1 insertion(+), 1 deletion(-)
    

    This doesn’t work on all the repos, as not everyone follows the same code practices. Like one repo I use only uses tags. Still, it’s enough to get me fumbling through to success in a way that doesn’t terrify me, since it’s easy to flip back and forth between versions.

    Fine. I’m sold. git’s becoming a badass. The only thing left is to protect myself with .htaccess:

     # SVN and GIT protection
     RewriteRule ^(.*/)?(\.svn|\.git)/ - [F,L]
     ErrorDocument 403 "Access Forbidden"
    

    Now no one can look at my svn or git files.

    The last piece was figuring out how to get unrelated instances of git in subfolders to all update (MediaWiki lets you install extensions via git, but then you don’t have a fast/easy way to update them…):

    #!/bin/sh
     
    # Update every git checkout that lives one folder down (one per extension)
    for i in `find ./ -maxdepth 1 -mindepth 1 -type d`; do
            cd "$i" || continue
            git pull
            cd ..
    done
    

    And I call that via ./git.sh which lives in /wiki/extensions and works great.

    From here out, if I wanted to script things, it’s pretty trivial, since it’s a series of simple if/else checks, and I’m off to the races. I still wish every app had a WordPress-esque updater (and plugin installer, hello!) but I feel confident now that I can use git to get to where my own updates are faster.
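
    As a rough sketch of what such a script might look like (the path is a stand-in for wherever an install actually lives):

    #!/bin/sh
    # Hypothetical updater: fetch, then only pull if the tracked branch actually moved
    cd /home/myuser/public_html/wiki || exit 1
    git fetch --all
    if [ "$(git rev-parse HEAD)" != "$(git rev-parse @{u})" ]; then
            git pull
    else
            echo "Already up to date."
    fi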

  • WordPress Login Protection With .htaccess

    The Brute Force attack on WordPress and other CMS apps is getting worse and worse right now. Some people don’t have ModSecurity (or can’t get it to work on their server), and asked me if there was anything to be done. Yes, you can still protect your site from those brute forcers via .htaccess.

    To be honest, I worried about posting this, since the last thing I want to do is to inspire hackers to do even more inventive things, but I can’t think of another way to get the word out. I want to say this: If your site is being hammered hard by these attacks, CONTACT YOUR WEBHOST RIGHT NOW. The best protections will be done on their end, because using WP to try and stop this still means people are hammering WP.

    So okay, what can you do with .htaccess? Well, taking into account the ages-old .htaccess block that has been posted up at WordPress.org, which blocks comments made by people without a referrer, I made the logical extension!

    ### Blocking Spammers Section ###
    
    # Stop protected folders from being narked. Also helps with spammers
    ErrorDocument 401 /401.html
    
    # Stop spam attack logins and comments
    <IfModule mod_rewrite.c>
    	RewriteEngine On
    	RewriteCond %{REQUEST_METHOD} POST
    	RewriteCond %{REQUEST_URI} .(wp-comments-post|wp-login)\.php*
    	RewriteCond %{HTTP_REFERER} !.*(ipstenu.org|halfelf.org).* [OR]
    	RewriteCond %{HTTP_USER_AGENT} ^$
    	RewriteRule (.*) http://%{REMOTE_ADDR}/$ [R=301,L]
    </IfModule>
    

    So the rule does the following:

    1. Detects when a POST is being made
    2. Checks whether the POST is going to wp-comments-post.php or wp-login.php
    3. Checks whether the referrer is not one of your domains, or the user agent is empty
    4. Sends the spam-bot BACK to its originating server’s IP address.

    The reason my referrer line has (ipstenu.org|halfelf.org) is because … well I have multiple domains on my multisite! If you don’t, you can just have it be one domain, but this lets me have one line instead of eight (yes, eight, you heard me). Not all of them are .org, but if they were, that line would look like this:

    RewriteCond %{HTTP_REFERER} !.*(ipstenu|halfelf)\.org.* [OR]
    

    Only make variable what you have to, eh?

    Edit: If you’re using Jetpack for comments, you should also add in jetpack.wordpress.com as a referrer.

    Above that, though, you saw the 401 rule, right? That takes any authentication errors, which these attacks can abuse, out of the hands of WP. However, for this to work best, you have to actually make a 401.html file.

    Here’s mine:

    <html>
    <head>
    <title>401 Error - Authentication Failed</title>
    </head>
    
    <body>
    
    <h1>401 - Authentication Failed</h1>
    
        <p>Through a series of highly sophisticated and complex algorithms, this system has determined that you are not presently authorized to use this system function. It could be that you simply mistyped a password, or, it could be that you are some sort of interplanetary alien-being that has no hands and, thus, cannot type. If I were a gambler, I would bet that a cat (an orange tabby named Sierra or Harley) somehow jumped onto your keyboard and forgot some of the more important pointers from those typing lessons you paid for. Based on the actual error encountered, I would guess that the feline in question simply forgot to place one or both paws on the appropriate home keys before starting. Then again, I suppose it could have been a keyboard error caused by some form of cosmic radiation; this would fit nicely with my interplanetary alien-being theory. If you think this might be the cause, perhaps you could create some sort of underground bunker to help shield yourself from it. I don't know that it will work, but, you will probably feel better if you try something.</p>
    
    </body>
    </html>
    

    There are more WordPress-specific tips and tricks on the WordPress Codex Brute Force Attacks page, which we Forum Volunteers have been working on all morning. If you want to do even more, then you’ll want to password protect your wp-login.php page, or perhaps whitelist it so only your IPs have access. Personally, I’m not doing that (I have ModSec working) and I’m not using any plugins. While the plugins are great, they still require WordPress to process something, which means that while they can, and will, prevent people from getting in, they won’t stop the traffic on your server.
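
    If you do go the password or whitelist route, a minimal .htaccess sketch looks something like this; the .htpasswd path and the IP address are placeholders for your own:

    # Password protect wp-login.php (make the .htpasswd file with the htpasswd command)
    <Files wp-login.php>
    AuthType Basic
    AuthName "Login Restricted"
    AuthUserFile /home/myuser/.htpasswd
    Require valid-user
    </Files>

    # Or whitelist it to your own IP instead (203.0.113.42 is an example address):
    # <Files wp-login.php>
    # Order Deny,Allow
    # Deny from all
    # Allow from 203.0.113.42
    # </Files>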

    I wish I knew more about nginx, but if someone can translate all those things to nginx, please post a link and share it! We need to know!

    As a last gasp, CloudFlare and Sucuri CloudProxy both claim to stop this before it hits you. I don’t use either, as you may know, so I can’t speak for them, but if your host is unable to do anything, and you can’t make anything work, you may as well try them.

  • WordPress Login Protection with ModSecurity

    No ModSec? Check out WordPress Login Protection With .htaccess

    If you’re on Liquid Web servers, this was already done for you. If you’re not, you should still be able to use this code on your own ModSecurity instance. Since this is, in my opinion, a way better method to block people than via a plugin, I thought it would be a good idea to share it here. With these rules, you won’t have quite as many HTTP requests making it to WordPress.

    WordPress is a popular publishing platform which is known for its robust features, numerous templates, and large support community. Unfortunately, due to such popularity, WordPress is also constantly subject to attempts at exploiting vulnerabilities. Ensuring WordPress and any associated plugins are installed with the most current versions is an important means of securing your site. However, ModSecurity provides a significant amount of further security by providing an application firewall.

    ModSecurity (also known as “modsec”) has proven itself useful in a variety of situations, and again this is true in assisting with WordPress brute force attempts resulting in a Denial of Service (DoS) attack. While a number of WordPress plugins exist to prevent such attacks, custom modsec rules can prevent such attacks for all WordPress installations on a server. Modsec immediately filters incoming HTTP requests, which assists against taxing server resources.

    These rules will block access for the offending IP address for 5 minutes after 10 failed login attempts within a 3-minute window. They have already been added to the custom rules for Liquid Web’s ServerSecure service. For customers without ServerSecure, they can be added to your custom modsec rules: edit your custom modsec user rules file and append the rules provided below. For cPanel servers, this file is likely located in /usr/local/apache/conf/

    SecAction phase:1,nolog,pass,initcol:ip=%{REMOTE_ADDR},initcol:user=%{REMOTE_ADDR},id:5000134
    <LocationMatch "/wp-login.php">
        # Setup brute force detection.
    
        # React if block flag has been set.
        SecRule user:bf_block "@gt 0" "deny,status:401,log,id:5000135,msg:'ip address blocked for 5 minutes, more than 10 login attempts in 3 minutes.'"
    
        # Setup Tracking.  On a successful login, a 302 redirect is performed, a 200 indicates login failed.
        SecRule RESPONSE_STATUS "^302" "phase:5,t:none,nolog,pass,setvar:ip.bf_counter=0,id:5000136"
        SecRule RESPONSE_STATUS "^200" "phase:5,chain,t:none,nolog,pass,setvar:ip.bf_counter=+1,deprecatevar:ip.bf_counter=1/180,id:5000137"
        SecRule ip:bf_counter "@gt 10" "t:none,setvar:user.bf_block=1,expirevar:user.bf_block=300,setvar:ip.bf_counter=0"
    </LocationMatch>
    

    Source: Liquid Web, Frameloss, and MNX Solutions

    Logically, someone can extend this code to any file, like bb-login.php or Special:UserLogin, depending on where they’re being hacked.
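
    A hypothetical variant for bbPress, for example, would just change the location being matched; the SecRule lines inside stay the same, though each would need its own unique id number on your server:

    <LocationMatch "/bb-login.php">
        # ... the same brute force rules as above, with new id: values ...
    </LocationMatch>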

    ETA: Rarst asked if I’d have to use wildcards with LocationMatch since WP is often in a subfolder. I read the Apache doc on LocationMatch and it says that it’s using regex, so it should just look for ‘/wp-login.php’ anywhere in the URL. If I wanted to only match example.com/wp-login.php, then I’d anchor it as ^/wp-login\.php instead (the path LocationMatch sees starts with the slash). If I got that wrong, please let me know!