Half-Elf on Tech

Thoughts From a Professional Lesbian

Author: Ipstenu (Mika Epstein)

  • Hotlinking is Evil (And So Is Google)


    We all know hotlinking is a bad thing. Hotlinking uses up someone else’s bandwidth, which costs them money. It takes away from any profit they might make on ads, because you’re not going to their site. It removes their credit from images. So why did Google decide to hotlink when they made their faster image search?

    This is what the new image search looks like:

    Faster Image Search

    I’ll admit, that looks pretty nifty. It’s a fast way to see images. But it’s also a fast way to lose attribution. Here’s what just the new image box looks like.

    Close Up

    This image now loads ‘seemingly’ locally. It’s totally a part of Google now, though; there’s no reference back to how the source site looks (it used to be an overlay). In fact, most people will just see the image, copy it if they want, and move on to the next site. No one has any reason to dig deeper and visit the image’s page.

    By contrast, the thumbnail images you see on Google, if you view source, look like this: https://encrypted-tbn3.gstatic.com/images?q=tbn:ANd9GcQPNtMLkk8rwj3lLv6a2kEQ8_eo6BuiUZYn3N5z3cbMu6rVPo3Xkw If you go to gstatic.com itself, all you get is a 404 error page, but it’s pretty easy to find out that this is where Google saves all their static content, including images. These thumbnails are moderate to low quality, and if that were all Google did (show small, iffy thumbnails and redirect people to the real site), that would be great. Instead, they now actively hotlink from you. Oh yes, that full image you saw in my screenshot was linked directly from the owner’s media file.

    The first thing I did after noticing this was to add the following to my robots.txt:

    User-agent: Googlebot-Image
    Disallow: /
    

    Those directions are right from Google, who doesn’t even pitch you a reason why you wouldn’t want to do this. Normally they’ll tell you ‘You can, we’d rather you didn’t because of XYZ, but here it is anyway.’ This time, it’s a straight-up ‘Here’s how.’ I find that rather telling.
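
    While I was blocking things, it’s worth remembering you can block plain old hotlinking of your images too, no Google required. Here’s a minimal .htaccess sketch, assuming Apache with mod_rewrite and with example.com standing in for your own domain:

    RewriteEngine On
    # Let empty referers (direct requests, feed readers) and your own domain through.
    RewriteCond %{HTTP_REFERER} !^$
    RewriteCond %{HTTP_REFERER} !^https?://(www\.)?example\.com/ [NC]
    # Everyone else gets a 403 when they try to embed your images.
    RewriteRule \.(gif|jpe?g|png)$ - [F,NC,L]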

    Naturally I went on to read why Google thought this was a good idea.

    The following points are all reasons Google thinks this is better.

    We now display detailed information about the image (the metadata) right underneath the image in the search results, instead of redirecting users to a separate landing page.

    The first part of this, the detailed information, is great. Having the metadata right there, without redirecting to a separate page like they used to (with the data off to the side where no one read it), is an improvement. Thank you for that.

    We’re featuring some key information much more prominently next to the image: the title of the page hosting the image, the domain name it comes from, and the image size.

    Again, this is great. I think the data should be more visible than it is, especially the ‘This image may be copyright protected’ stuff. Considering Google won’t allow you to use ads if you use copyright-protected material (which they claim I do here, by the way), they really have a higher standard to live up to when it comes to informing people of the stick by which they are measured.

    The domain name is now clickable, and we also added a new button to visit the page the image is hosted on. This means that there are now four clickable targets to the source page instead of just two. In our tests, we’ve seen a net increase in the average click-through rate to the hosting website.

    I can see this being true. Again, the links should be more obvious, and really they should link not to the image directly but to the contextual page in all cases. Traffic is important, and if you send people to the bare image file, where they don’t see the ads, you’re causing the site owner to lose money. So the idea behind this part is really nice, and I’m for it; it just needs some kick-back improvements. Google should give people a good reason to go to the parent site. And this next item is where they fail…

    The source page will no longer load up in an iframe in the background of the image detail view. This speeds up the experience for users, reduces the load on the source website’s servers, and improves the accuracy of webmaster metrics such as pageviews. As usual, image search query data is available in Top Search Queries in Webmaster Tools.

    And now we hit the problem. While this is true (it will be both faster and use less of my bandwidth while decreasing load), it’s still showing my image right off my servers! Worse? It’s showing the full-sized image from my server, which means if I have a 4MB photo (and I do), they’ll be pulling all 4MB down, and the reader can just right-click and save. They never need to touch my site.

    As Bill and Ted would say, Bogus.

    Go back to how Google shows thumbnails. They have their own, lower-res version. I regularly post other people’s images on a site, and when I do, I purposefully keep a lower-resolution version on my site and link back to them for the full-quality one. Why? Because it’s their image. They did the work, they made it, and I should honor them, respect them, and be a good net-denizen. Google’s failing at that.

    For me, Google’s image search has always been a little questionable. Now it’s outright evil.

  • Backwards Settings API


    The biggest problem with documentation is that if you don’t think the same way the doc was written, the examples will frustrate you to the point where you want to cry. If you’re not techy at all, directions to move WordPress to a subfolder will piss you off. If you’re a basic webmaster, who can edit posts and maybe FTP, they’re scary but doable. It’s all a measure of leveling up and seeing things in a way you understand.

    Recently I was banging around with the Settings API in WordPress to fix some debug errors in a plugin. It took me about five hours to figure out what it was I was doing wrong, and how to fix it. The actual fixing? About half an hour, including the time it took to make fresh coffee.

    What was my problem? I was looking at code from a different angle. If you start ‘right’ it’s really easy to follow tutorials, but when you start from an already functioning plugin and want to correct it, it’s a nightmare. Reverse engineering things isn’t easy. You’re used to looking at things in one way, and changing that is a headache. What I had was working, up until you turned on debug, which is why I got away with it for so long, and why I hated having to change it.

    But I did. I had a plugin admin page that let you enter two text strings, an access key and a secret key, which the rest of the code used to do its magic. Because of that, I couldn’t just be lazy and use a basic register_setting() call like this:

    register_setting( 'myplugin-retain-settings', 'myplugin-retain');
    

    That’s easy: you put it in, you call it with settings_fields('myplugin-retain-settings');, and you’re off to the races. But I have two strings, and if you call settings_fields() twice, you’ll find all sorts of fun errors.
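
    For the single-setting case, the form side of that is about as simple as it gets. Here’s a rough sketch (same option names as above, nothing fancy):

    <form method="post" action="options.php">
        <?php settings_fields( 'myplugin-retain-settings' ); ?>
        <input type="text" name="myplugin-retain" value="<?php echo esc_attr( get_option( 'myplugin-retain' ) ); ?>" class="regular-text" />
        <?php submit_button(); ?>
    </form>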

    To explain, let me show you what I did wrong, and what right is.

    Doing It Wrong

    <form method="post" action="options.php">
    <input type="hidden" name="action" value="update" />
    <?php wp_nonce_field('update-options'); ?>
    <input type="hidden" name="page_options" value="myplugin-accesskey,myplugin-secretkey" />
    
    <table class="form-table">
        <tbody>
            <tr valign="top"><th colspan="2"><h3><?php _e('MyPlugin Settings', myplugin); ?></h3></th></tr>
            <tr valign="top">
                <th scope="row"><label for="myplugin-accesskey"><?php _e('Access Key', myplugin); ?></label></th>
                <td><input type="text" name="myplugin-accesskey" value="<?php echo get_option('myplugin-accesskey'); ?>" class="regular-text"/></td>
            </tr>
    
            <tr valign="top">
                <th scope="row"><label for="myplugin-secretkey"><?php _e('Secret Key', myplugin); ?></label></th>
                <td><input type="text" name="myplugin-secretkey" value="<?php echo get_option('myplugin-secretkey'); ?>" class="regular-text"/></td>
            </tr>
    </tbody>
    </table>
    
    <p class="submit"><input class='button-primary' type='Submit' name='update' value='<?php _e("Update Options", dreamobjects); ?>' id='submitbutton' /></p>
    </form>
    

    Doing It Right

    <form method="post" action="options.php">
    	<?php
            settings_fields( 'myplugin-keypair-settings' );
            do_settings_sections( 'myplugin-keypair_page' );
            submit_button();
    	?>
    </form>
    

    Making Wrong Right

    As you can tell, there’s a huge difference between the two, and one is a lot easier to look at than the other.

    First you have to set up registering settings. Since I already had an admin action for my settings page, I wanted to just add to that:

    function add_settings_page() {
            load_plugin_textdomain('myplugin', MYPLUG::getPath() . 'i18n', 'i18n');
            add_menu_page(__('MyPlugin Settings', 'myplugin'), __('MyPlugin', 'myplugin'), 'manage_options', 'myplugin-menu', array('MYPLUG', 'settings_page'), plugins_url('myplugin/images/myplugin-color.png'));
    }
    

    I added this in (I’m using an array callback to isolate my plugin functions in a class, so I don’t have to worry as much about namespace clashes):

            add_action('admin_init', array('MYPLUG', 'add_register_settings'));
    

    That was the easy part. The hard part is converting all my table information into settings. In another new function, I want to add a settings section, just as you would for a one-off, but in this case I need to make a group. Since I named my settings group myplugin-keypair-settings and my settings page myplugin-keypair_page, that’s the first important information I need. But backwards.

    First I have to add my section using add_settings_section():

    add_settings_section( 'myplugin-keypair_id', 'MyPlugin Settings', 'myplugkeypair_callback', 'myplugin-keypair_page' );
    

    Once you do that, you want to register the setting, just like normal:

    register_setting( 'myplugin-keypair-settings','myplugin-key');
    

    The complete function

    function add_register_settings() {
        // Keypair settings: one section, two registered settings, two fields.
        add_settings_section( 'myplugin-keypair_id', 'MyPlugin Settings', 'myplugkeypair_callback', 'myplugin-keypair_page' );

        register_setting( 'myplugin-keypair-settings', 'myplugin-key' );
        add_settings_field( 'myplugin-key_id', 'Access Key', 'myplugkey_callback', 'myplugin-keypair_page', 'myplugin-keypair_id' );

        register_setting( 'myplugin-keypair-settings', 'myplugin-secretkey' );
        add_settings_field( 'myplugin-secretkey_id', 'Secret Key', 'myplugsecretkey_callback', 'myplugin-keypair_page', 'myplugin-keypair_id' );

        // Section callback: prints the intro text above the fields.
        function myplugkeypair_callback() {
            echo '<p>' . __( "Once you've configured your keypair here, you'll be able to use the features of this plugin.", 'myplugin' ) . '</p>';
        }
        // Field callbacks: print the inputs, pre-filled from the saved options.
        function myplugkey_callback() {
            echo '<input type="text" name="myplugin-key" value="' . esc_attr( get_option( 'myplugin-key' ) ) . '" class="regular-text"/>';
        }
        function myplugsecretkey_callback() {
            echo '<input type="text" name="myplugin-secretkey" value="' . esc_attr( get_option( 'myplugin-secretkey' ) ) . '" class="regular-text"/>';
        }
    }
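
    One more thing worth knowing while you’re in here: register_setting() takes an optional third argument, a sanitize callback, which runs on the value before it’s saved. A quick sketch (the callback name here is just mine, not something the API dictates):

    register_setting( 'myplugin-keypair-settings', 'myplugin-key', 'myplug_sanitize_key' );

    // Hypothetical sanitizer: strip tags and stray whitespace before the key is saved.
    function myplug_sanitize_key( $input ) {
        return sanitize_text_field( trim( $input ) );
    }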
    

    When I got around to the massive amount of design I put into things, I ended up making a file called lib/settings.php where I stored everything related to the settings. For me, that’s easier to manage than one massive file with lots of different calls. It’s easier to debug and edit too, without panicking that I broke something.

    I don’t know if it was so much that I think backwards, or that the API is backwards, but whatever it was, I had a massive brain-block about all this for about a day. I really have to thank Kailey and Pippin for pointing me at examples that ‘clicked’ in a way that let me suddenly figure it all out.

  • Anti-Social Competition


    A lot has been said already about how stupid Twitter is to bite the hand that tweeted them into fame. People are all on about how Facebook’s draconian actions will hurt them. Now Instagram is in on the restriction game. There are business models for actions like this, and we’ve seen them time and again. People think the only way to keep their user base (i.e. their revenue) is to stop users from integrating with other tools.

    They’re wrong.

    Look, none of us use a product because they limit us, or because they force us to. While the monetary loss and software hassle of switching to a PC would hurt me, the reason I use a Mac is not because they make it impossible to switch, but because they make me not want to. It’s a part of a psychological gambit, making it easy to do what I want, and if I really wanted, easy to walk away. But what neither Apple nor Microsoft does is attempt to lock me in to their way forever.

    Now some of you might argue that’s not true, but look at the US phone system. AT&T and Verizon and all the other traditional companies lock us into their systems. We can’t leave without paying exorbitant fees. With Apple and Microsoft, the setup fee was my choice, and I don’t pay ongoing prices to use their service, though I can in some cases.

    When I see things like Instagram and Twitter having a slap-fight, to the point that Twitter decided to remove Instagram’s embedding in Twitter, I wanted to kick them both. Twitter is going to hurt itself more and more by biting the hands that feed them (which we already knew about when they decided people making their own Twitter tools was bad). Instagram is following the trend, and that doesn’t help at all. What they’re doing is generating anti-social behavior, which is to say that they’re making it hard to be social.

    These are social media outlets, and it’s almost to the point where they’re saying ‘You can drive our car, but it only uses gasoline from these vendors.’ We would cry foul and sic the lawyers on them for that. In fact, we did. Remember when Microsoft made it near impossible to run other browsers by tightly integrating IE with their OS? Look at how well that worked out. Sidebar: I don’t think Apple limiting the default browser on iOS devices is the same thing. Unlike Microsoft, they own both the hardware and the software, so it’s more like saying ‘You can’t put a Fiat engine in your Mini Cooper.’ I do think they should allow it, but it’s not the same as the gasoline analogy. Hair splitting, I know. Don’t think I like that I can’t set Chrome as my default browser on my iPad; it really annoys me.

    One of the driving points I love about open source is that we all work together to make things better. With a few notable exceptions, we really try hard to be cooperative, because we know that one group, alone, can’t do everything. This is why I’m often an evangelist for people to contribute, I know that we need to work together in all things. My chosen flag is, right now, Open Source.

    For some reason this is lost on people when they start looking to monetize their products. And it’s not just products like Twitter that do this stupid thing. Right away, everyone’s an enemy. Recently, Carrie Dils ran into this when her presentation at a WordPress Meetup was rejected. As Carrie correctly pointed out, she’s not your competition. (I’m very confident this will be addressed in the meetup world, and I know this is not the kind of behavior anyone encourages or endorses.) Except in a way, she totally is my competition. Just not in the way that guy seemed to think of it.

    That bizarre situation points out the absurdity in all this. The world is not a zero-sum game. You’ll never have all the money, all the products, all the clients, or all the people. This doesn’t mean you shouldn’t aspire to have as many as you can manage, but it means you don’t need to attack the other guy. Having a rival, having competition, is good. Every other forum moderator is better than I am at something. Whether it’s quilting or something as weird as Microsoft servers, the crux of the issue is that competition can be a good thing, and the way to ‘win’ is not to smear the other guy or block them from sharing your client base, but to offer what the other guy doesn’t have.

    Look back at Twitter and Instagram. Twitter is for sharing 140 characters of words. Instagram is for sharing retro photos. So what does Instagram have to gain by blocking people from being able to show photos in Twitter? Well, there is a practical point here, and it’s one I tout: own your own data. After all, I don’t allow hotlinking of my images on other sites, specifically because I want people to come here for content (and it’s bandwidth theft, which I hate), but also because I don’t like it when people steal my content without asking and present it as their own. A large part of owning my own data is also owning where it lives. So while I use Instagram and Twitter (and Facebook), anything of merit that isn’t just casual chatter ends up on one of my own sites.

    Unlike Instagram, I will happily embed small versions of my content (excerpts) on any social media site I care to use. Facebook, Twitter, and Google Plus all allow me to put a link, or a link and a phrase, that shows a teaser of my blog content. This drives traffic back to me, which increases my presence, and nets me what I want. Instagram could have done this, and permitted embedding like that on a small scale (click to see bigger, click to leave a comment) on Twitter and anywhere else, which would probably help them. (Wouldn’t it be cool if we could code our own sites to let Twitter embed some media from them too? Sadly, they won’t let us, because some people would use it for Goatse.cx (DO NOT VISIT).) Instead, they put up a wall to make people click a link and go through. This is why WordPress lets you embed media on your site from other sources. They get it.

    I wish those other guys did. I just want to play in the park with everyone.

  • Kick PageSpeed Up A Notch

    If you’re using Apache and PHP 5.3 on your DreamHost domain, you have the magical power to enable Google PageSpeed. Just go and edit your domain and make sure you check the box for “Page Speed Optimization”:

    PageSpeed Option

    But what does that even mean, I hear you ask?

    PageSpeed is Google’s way to speed up the web (yeah, that was redundant), and it serves as a way for your server to do the work of caching and compressing, taking the load off your webapps. Like WordPress. Anyone can install this on their Apache server, and it’s free from Google Developers: PageSpeed Mod. Since you’re on DreamHost, you lucky ducky you, we did it for you. Now you can sit back and relax.

    The first thing to notice when you turn on PageSpeed is that it minifies your webpage. That means it takes your pretty formatted source code and gets rid of the extra spaces you don’t use. This is done by the PageSpeed filter “collapse_whitespace.” Another filter we use is “insert_ga,” which is how we’re magically able to insert your Google Analytics for you from your panel. That filter automatically inserts your GA code on every page on your domain. That’s right! No more plugins!
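
    If you were running mod_pagespeed on a server you manage yourself instead of flipping the panel switch, turning those two filters on would look something like this in your .htaccess (the UA-XXXXXXXX-1 bit is a placeholder for your own Analytics ID):

    <IfModule pagespeed_module>
        ModPagespeed on
        ModPagespeedEnableFilters collapse_whitespace,insert_ga
        ModPagespeedAnalyticsID "UA-XXXXXXXX-1"
    </IfModule>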

    If you’re like me, you may start to wonder what other filters you should use, and that depends entirely on what you want to remove. I knew I wanted to remove code comments like the following:

    <!-- #site-navigation -->
    

    That’s easy! There’s a filter called “remove_comments,” so I can just use that. They have a whole mess of filters listed in the Filter Documentation, and reading through it took a while. At the bottom of each one, they talk about how risky that filter is. Taking that into account, I went ahead and added some low- and some high-risk filters, since I know what I’m using.

    The magic sauce to add all this is just to edit your .htaccess and put in the following near the top:

    <IfModule pagespeed_module>
        ModPagespeed on
        ModPagespeedEnableFilters remove_comments,rewrite_javascript,rewrite_css,rewrite_images
        ModPagespeedEnableFilters elide_attributes,defer_javascript,move_css_to_head
        ModPagespeedJpegRecompressionQuality -1
    </IfModule>
    

    Really, that’s it.

    The ones I picked are:

    • remove_comments – Remove HTML comments (low risk)
    • rewrite_javascript – minifies JS (med. to high risk, depending on your site)
    • rewrite_css – parses linked and inline CSS, rewrites the images found and minifies the CSS (med. risk)
    • rewrite_images – compresses and optimizes images (med. risk)
    • elide_attributes – removes attributes with default values from tags (med. risk)
    • defer_javascript – combines JS and puts it at the end of your file (high risk AND experimental!)
    • move_css_to_head – combines CSS and moves it to the head of your file (low risk)

    Now keep in mind, not all of the features will work. While DreamHost is on a pretty cutting-edge version of PageSpeed, Google is constantly innovating and improving over there. The best thing about these changes is that, if you do it right, you can speed your site up more than any plugin could do for you. And that? Is pretty cool right there.

  • CentOS and PHP 5.4


    I finally got around to PHP 5.4.

    Alas this meant reinstalling certain things, like ImageMagick and APC.

    This also brought up the question of PageSpeed, which I keep toying with. I use it at work, but since this server’s on CentOS with EasyApache, there’s no ‘easy’ way to install PageSpeed yet (not even a yum install will work), so it’s all manual work plus fiddling. I don’t mind installing ImageMagick and APC, but Google’s own ‘install from source’ directions aren’t really optimized for CentOS, even though they say they are, and I was nervous about the matter. Well… I did it anyway. It’s at the bottom.

    The only reason I had to do this all over is that I moved to a new major version of PHP. If I’d stayed on 5.3 and upped to 5.3.21, it wouldn’t have mattered. But this changed a lot of things, and thus, a reinstall.

    ImageMagick

    I started using ImageMagick shortly after starting with DreamHost, since my co-worker Shredder was working on the ‘Have WP support ImageMagick’ project. It was weird, since I remembered using it before, and then everyone moved to GD. I used to run a photo gallery with Gallery2, and it had a way to point your install to ImageMagick. Naturally I assumed I still had it on my server, since I used to (in 2008). Well, since 2008, I’ve moved servers. Twice. And now it’s no longer installed by default.

    Well. Let’s do one of the weirder installs.

    First you install these to get your dependencies:

    yum install ImageMagick
    yum install ImageMagick-devel
    

    Then you remove them, because nine times out of ten, the yum packages are old:

    yum remove ImageMagick
    yum remove ImageMagick-devel
    

    This also cleans out any old copies you may have, so it’s okay.

    Now we install the latest and greatest ImageMagick from the ImageMagick site:

    cd ~/tmp/
    wget http://imagemagick.mirrorcatalogs.com/ImageMagick-6.8.1-10.tar.gz
    tar zxf ImageMagick-6.8.1-10.tar.gz
    cd ImageMagick-6.8.1-10
    ./configure --with-perl=/usr/bin/perl
    make
    make install
    

    Next we install the -devel again, but this time we tell it where from:

    rpm -i --nodeps http://www.imagemagick.org/download/linux/CentOS/x86_64/ImageMagick-devel-6.8.1-10.x86_64.rpm
    

    Finally we can install the PHP stuff. Since I’m on PHP 5.4, I have to use imagick-3.1.0RC2 – Normally I’m not up for RCs on my live server, but this is a case where if I want PHP 5.4, I have to. By the way, next time you complain that your webhost is behind on PHP, this is probably why. If they told you ‘To get PHP 5.4, I have to install Release Candidate products, so your website will run on stuff that’s still being tested,’ a lot of you would rethink the prospect.

    cd ~/tmp/
    wget http://pecl.php.net/get/imagick-3.1.0RC2.tgz
    tar zxf imagick-3.1.0RC2.tgz
    cd imagick-3.1.0RC2
    phpize
    ./configure
    make
    make install
    

    Next, edit your php.ini to add this:

    extension=imagick.so
    

    Restart httpd (service httpd restart) and make sure PHP is okay (php -v), and you should be done! I had to totally uninstall and start over to make it work, since I wasn’t starting from clean.
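
    A quick sanity check that the extension actually loaded, straight from the command line, looks something like this:

    php -m | grep imagick
    php -r 'var_dump( extension_loaded("imagick") );'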

    Speaking of clean, cleanup is:

    yum remove ImageMagick-devel
    rm -rf ~/tmp/ImageMagick-6.8.1-10*
    rm -rf ~/tmp/imagick-3.1.0RC2*
    

    APC

    I love APC. I can use it for so many things, and I’m just more comfortable with it than XCache. Part of it stems from a feeling that if PHP built it, it’s more likely to work. Also it’s friendly with my brand of PHP, and after 15 years, I’m disinclined to change. I like DSO, even if it makes WP a bit odd.

    Get the latest version and install:

    cd ~/tmp/
    wget http://pecl.php.net/get/APC-3.1.14.tgz
    tar -xzf APC-3.1.14.tgz
    cd APC-3.1.14
    phpize
    ./configure
    make
    make install
    

    Add this to your php.ini:

    extension = "apc.so"
    

    Restart httpd again, clean up that folder, and then one more…

    mod_pagespeed

    I hate Google. Well, no I don’t, but I don’t trust them any more than I do Microsoft, and it’s really nothing personal, but I have issues with them. Now, I use PageSpeed at work, so I’m more comfortable than I was, and first I tried Google’s installer. The RPM won’t work, so I tried to install from source, but it got shirty with me, fast, and I thought “Why isn’t this as easy as the other two were!?” I mean, APC was stupid easy, and even easier than that would be yum install pagespeed, right?

    Thankfully for my sanity, someone else already figured this out for me: Jordan Cooks. I’m reproducing his Installing mod_pagespeed on a cPanel/WHM server notes for myself. (By the way, I keep a copy of this article saved to Dropbox, since invariably I will half-ass this and break my site.) The prerequisite was to have mod_deflate, which I do.
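
    If you’re not sure whether mod_deflate is loaded, checking takes one line (assuming httpd is in your path; apachectl -M works just as well):

    httpd -M 2>&1 | grep deflate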

    The commands are crazy simple:

    cd /usr/local/src
    mkdir mod_pagespeed
    cd mod_pagespeed
    wget https://dl-ssl.google.com/dl/linux/direct/mod-pagespeed-beta_current_x86_64.rpm
    rpm2cpio mod-pagespeed-beta_current_x86_64.rpm | cpio -idmv
    cp usr/lib64/httpd/modules/mod_pagespeed.so /usr/local/apache/modules/
    chmod 755 /usr/local/apache/modules/mod_pagespeed.so
    mkdir -p /var/mod_pagespeed/cache
    chown nobody:nobody /var/mod_pagespeed/*
    

    Once you do this, you have to edit the config file, and this is where I differ from Jordan’s directions. He just copied his version over to /usr/local/apache/conf/pagespeed.conf, but I had an older version from a ‘Let’s try Google’s way…’ attempt and someone else’s directions, so I made a backup and then took out the ModPagespeedGeneratedFilePrefix line, since I know that’s deprecated. I also added in a line to tell it to ignore wp-admin.

    Here’s my pagespeed.conf (edited):

    LoadModule pagespeed_module modules/mod_pagespeed.so
    
    	# Only attempt to load mod_deflate if it hasn't been loaded already.
    <IfModule !mod_deflate.c>
    	LoadModule deflate_module modules/mod_deflate.so
    </IfModule>
    
    <IfModule pagespeed_module>
    	ModPagespeed on
    
    	AddOutputFilterByType MOD_PAGESPEED_OUTPUT_FILTER text/html
    
    	ModPagespeedFileCachePath "/var/mod_pagespeed/cache/"
    
        ModPagespeedEnableFilters rewrite_javascript,rewrite_css
        ModPagespeedEnableFilters collapse_whitespace,elide_attributes
        ModPagespeedEnableFilters rewrite_images
        ModPagespeedEnableFilters remove_comments
    
    	ModPagespeedFileCacheSizeKb 102400
    	ModPagespeedFileCacheCleanIntervalMs 3600000
    	
    	# Bound the number of images that can be rewritten at any one time; this
    	# avoids overloading the CPU. Set this to 0 to remove the bound.
    	#
    	# ModPagespeedImageMaxRewritesAtOnce 8
    
    	<Location /mod_pagespeed_beacon>
    		SetHandler mod_pagespeed_beacon
    	</Location>
    
    	<Location /mod_pagespeed_statistics>
    		Order allow,deny
    		# You may insert other "Allow from" lines to add hosts you want to
    		# allow to look at generated statistics. Another possibility is
    		# to comment out the "Order" and "Allow" options from the config
    		# file, to allow any client that can reach your server to examine
    		# statistics. This might be appropriate in an experimental setup or
    		# if the Apache server is protected by a reverse proxy that will
    		# filter URLs in some fashion.
    		Allow from localhost
    		Allow from 127.0.0.1
    		SetHandler mod_pagespeed_statistics
    	</Location>
    
    	ModPagespeedMessageBufferSize 100000	
    	ModPagespeedDisallow */wp-admin/*
    	ModPagespeedXHeaderValue "Powered By mod_pagespeed"
    
    	<Location /mod_pagespeed_message>
    		Allow from localhost
    		Allow from 127.0.0.1
    		SetHandler mod_pagespeed_message
    	</Location>
    	<Location /mod_pagespeed_referer_statistics>
    		Allow from localhost
    		Allow from 127.0.0.1
    		SetHandler mod_pagespeed_referer_statistics
    	</Location>
    </IfModule>
    

    To tell Apache to run this, edit /usr/local/apache/conf/includes/pre_main_global.conf and add:

    Include conf/pagespeed.conf

    Note: We put this Include here, rather than in httpd.conf itself, because EasyApache rebuilds httpd.conf and will eat your changes.

    Finally, you rebuild the Apache config, restart Apache, and test your headers to see goodness! My test was a success.
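
    For the record, the test is nothing fancier than a header request from the command line; curl does the trick (that’s my domain, swap in yours):

    curl -I https://ipstenu.org/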

    HTTP/1.1 200 OK
    Date: Mon, 21 Jan 2013 03:12:13 GMT
    Server: Apache
    X-Powered-By: PHP/5.4.10
    Set-Cookie: PHPSESSID=f4bcdae48a1e5d5c5e8868cfef35593a; path=/
    Cache-Control: max-age=0, no-cache
    Pragma: no-cache
    X-Pingback: https://ipstenu.org/xmlrpc.php
    X-Mod-Pagespeed: Powered By mod_pagespeed
    Vary: Accept-Encoding
    Content-Length: 30864
    Content-Type: text/html; charset=UTF-8
    

    For those wondering why I’m ignoring wp-admin, well … sometimes, on some servers, in some setups, if you don’t do this, you can’t use the new media uploader. It appears that PageSpeed is compressing the already compressed JS files, and changing their names, which makes things go stupid. By adding in the following, I can avoid that:

    	ModPagespeedDisallow */wp-admin/*
    

    Besides, why do I need to cache admin things anyway, I ask you?

    So there you are! Welcome to PHP 5.4!

  • Don’t Tread On Me


    Even the non-techs have been hearing about Do Not Track lately. The basic idea is that letting advertisers track you is annoying, frustrating, and something a lot of us just don’t want; moreover, we don’t want random websites doing the same thing! Imagine if you went into Starbucks and they followed you around everywhere else you went that day. Starbucks.com could do that, and I personally find it invasive. (I’m not the only one. My friend Remkus goes even further than I do.)

    This is, in part, what that stupid EU law was trying to tackle.

    There are a lot of ways to block that sort of tracking, but the latest is to use Do Not Track (DNT). Turning on DNT in your browser puts an extra header in your web page requests that says “Don’t track my behavior!” Now, the only real downside is that both your browser and the site you’re on have to agree to these rules for it to work, but with Microsoft in the mix, turning DNT on by default for Windows 8, I think we’re on the right track. If you go to IE’s Test Drive demo of DNT, you can see the status of your current browser, and all the others.
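
    Under the hood, DNT is just one extra request header: DNT: 1. If you wanted your own site to honor it, the check is a couple of lines of PHP; here’s a rough sketch (what you do once you’ve detected it is up to you):

    <?php
    // Browsers with DNT enabled send a "DNT: 1" header; PHP exposes it as HTTP_DNT.
    $do_not_track = isset( $_SERVER['HTTP_DNT'] ) && '1' === $_SERVER['HTTP_DNT'];

    if ( $do_not_track ) {
        // Skip your analytics/tracking snippet here.
    }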

    Interestingly, Chrome doesn’t have this yet, and when it does, it will default to allowing tracking. Safari does that too. It’s weird for me to be saying ‘Microsoft has it right,’ but I suspect it comes down to how advertising works. Microsoft really doesn’t need to advertise except to improve their image. Everyone knows Microsoft, and they know Office, IE, and Windows. Apple’s still a small percentage, and Google was a techy thing for so long that I think that’s why their first social network failed. Because Microsoft has such a percentage of non-tech users (i.e. everyone), and because of their bad rep, the best thing they can do to improve everything is to start protecting the users more.

    Of course, we all know that being tracked is a function of being online, or even of being in a store. Physical stores have long watched where people linger to figure out how to better arrange the shelves, and they ask for your zip code when you shop to understand who buys what. This is all a part of marketing. The problem with online, though, is that the more I search for something, the more I see it in my ads (Google). Why is this a problem? Let’s say I research a MiFi device, find the one I want, and buy it. For the next four months, I’ll get ads for MiFis.

    I should explain: while I have no problem with people tracking me for analytics (I rely on them myself; you can’t understand your visitors without data), it’s what they’re doing with that data that pisses me off. Getting my info to make a better product for me is great. Getting my info to sell to people is not. And that’s why I’m for Do Not Track. Or at least ‘ask to track.’ It goes back to the store. If I go to Office Depot, they ask me for my zip code or phone number, and I can decline. They use that to track me, and if I don’t want them to know that I drove 80 miles to get something, I don’t have to tell them. Online, I should have that same option.

    Sadly, the steam behind Do Not Track is running out. Ten months after everyone agreed this needed to happen, nothing’s happened, and that’s problematic. Why did we all go dark over SOPA? Because, at some level, we all believed that the Internet is changing things for the better. And yet, we all promised to have Do Not Track up by the end of 2012, and that sure didn’t happen. Then again, we’re merrily Thelma and Louise-ing right off a fiscal cliff too, so this really isn’t a surprise.

    I’m actually against ad-blocking software, and yet we’re at the point where I’ve installed it on Chrome and I’m starting to block people. Mind you, I go the other way with it: I only block certain sites (generally I’ve taken to blocking the ones with annoying ‘overlay’ ads) because, again, I get that people need these metrics to make things work, and I too make money off ads.

    In fact, this is yet another reason I use Project Wonderful for my ads. They have a very simple policy:

    Specific tracking of user interactions that don’t involve clicks is not allowed, including view-through tracking, key-modifier tracking, and mouse-location tracking.

    So please, allow ads on my sites. I promise I don’t track you with ads. I do have Google and Jetpack tracking your visits, but that’s just for me to measure how things work on the site, and I will never sell or otherwise use your personal information for my own gain.