Half-Elf on Tech

Thoughts From a Professional Lesbian

Tag: essay

  • On the EU Cookie Law

    On the EU Cookie Law

    ETA: Please check out Trac #19622 – There will be a new way to do this in WP 3.4

    I’m going to be bold and tell you that the new EU law, which goes into effect in the UK on May 25th, is going to be impossible to track and enforce, and it’s being handled backwards. Besides all that, though, it’s actually a pretty good idea.

    Most of us outside the EU have no real idea what’s going on, so here’s a short recap. As of May 25th, a change to the EU law will require businesses to request permission from visitors to their websites before they can store information about their identity, history and preferences via third-party cookies. You can read the full details in the proposal or in Directive 2002/22/EC on universal service and users’ rights relating to electronic communications networks and services. Those links are full of legalese.

    Now, I do want to point out that this only affects people who live in the EU. Arguably, it also only affects people who host sites in the EU, and you may be able to skirt around it by hosting in the US or Canada, but that’s a lawyer conversation. Basically, if you live in the EU and have a website that acts as a business, you’re kind of screwed. If you just have a blog with 100% personally controlled content and cookies that only come from your domain, you’re fine. The cookies, including the kind WordPress drops on your site, are not the kind they’re talking about. If your cookie is only tracking information used on your site (login information, recent comments, etc), you’re fine. If the cookie comes from someone else (like Google Analytics or Project Wonderful), then you need to explicitly tell the visitor and obtain their consent.
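    In WordPress terms, the practical upshot is that the third-party snippet has to wait for an opt-in. Here’s a minimal sketch of that idea, assuming a hypothetical ‘eu_cookie_consent’ cookie set by whatever consent notice you show; it’s an illustration of the gating, not a compliance tool.

```php
<?php
/*
 * Minimal sketch: only print the third-party tracking snippet once the
 * visitor has opted in. The cookie name 'eu_cookie_consent' and the
 * consent notice that sets it are hypothetical.
 */
add_action( 'wp_head', 'helf_maybe_print_tracker' );

function helf_maybe_print_tracker() {
	// First-party cookies (logins, recent comments) are fine either way;
	// this gate only covers the third-party tracker.
	if ( empty( $_COOKIE['eu_cookie_consent'] ) ) {
		return; // No recorded consent, no Google Analytics.
	}
	?>
	<script>
		// The usual asynchronous Google Analytics snippet would go here.
	</script>
	<?php
}
```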

    This is done for a pretty good reason, when you get down to it. When you go to a restaurant and pay with your credit card, you trust that neither the credit card company nor the restaurant is going to turn around and give your personal information to some other company to use for its own purposes. Legally, they have to ask you for permission to use your info, and that’s why they sometimes ask for your zip code when you’re checking out at a store (and also why you’re totally allowed to say ‘no’ when they ask). Third-party cookies, that is, those set by someone other than the domain you’re visiting, should also be ‘agreed’ to. The EU argues that just visiting a site with Google Ads does not constitute consent.

    Item #66 in the directive:

    Third parties may wish to store information on the equipment of a user, or gain access to information already stored, for a number of purposes, ranging from the legitimate (such as certain types of cookies) to those involving unwarranted intrusion into the private sphere (such as spyware or viruses). It is therefore of paramount importance that users be provided with clear and comprehensive information when engaging in any activity which could result in such storage or gaining of access. The methods of providing information and offering the right to refuse should be as user-friendly as possible. Exceptions to the obligation to provide information and offer the right to refuse should be limited to those situations where the technical storage or access is strictly necessary for the legitimate purpose of enabling the use of a specific service explicitly requested by the subscriber or user. Where it is technically possible and effective, in accordance with the relevant provisions of Directive 95/46/EC, the user’s consent to processing may be expressed by using the appropriate settings of a browser or other application. The enforcement of these requirements should be made more effective by way of enhanced powers granted to the relevant national authorities.

    That’s a pretty hefty thing to get through, but it clearly spells out that third-party cookies are what they’re on about. And in that, they’re right. There should be transparency to all this. We should know when we’re being tracked around the internet. But they’re wrong in making this the sole responsibility of the website owners. This is not to say that, as a website owner, I’m not responsible for the cookies my site puts down. And it’s not to say that, as a website owner, I shouldn’t tell people how the cookies and personal information I collect are used on my site. But to say that the ‘solution’ is for me to alert you with “Hi, the EU says I have to tell you about cookies and make sure you’re okay with them on your computer,” or not to use things like Google Ads, Facebook Like buttons, or Twitter integration, is unenlightened.

    The issue is not that I, as a website owner, am using third-party services, and it’s not even that I’m using those services in a ‘hidden’ way (I use Google Analytics on this site, which you can’t easily tell unless you look at my source code). The issue is that those services are using cookies to track you between sites. But at the end of the day, it’s easier to go after you than it is to sort out how to go after them.

    Arguably, this is also being done to protect the website owners. If a visitor agrees to have the cookies, then you’re no longer on the hook if they complain. But how are they going to (1) verify that (a) you did ask first and (b) the visitor did consent, and (2) enforce this at all? The only way this can be enforced is if someone (or a program) goes to every single website hosted in the EU, or owned by someone who lives in the EU, and checks them for cookies set without explicit consent. This could be automated, and emails could be automagically sent out to the site owners, who would in turn have to look at their software and ads and deduce what’s making the cookies. Already, the UK has said they know companies won’t meet the May 25th deadline and don’t plan to enforce the law yet.

    Let’s say they decide they will enforce the law. How can they verify that a cookie for your site is on someone’s computer? WordPress saves cookies under names like wordpress_verylonghashkey, where the hash key is specific to your install. Now, they do show up as ‘from’ the website domain.tld, but they can be forged. The easiest way is to copy cookies from one computer to another (I just did that when I moved everything from my old desktop to the new laptop). Another way is to take the information I have in my cookie and tweak it to apply to someone else’s site. That way requires a lot more savvy, and more information than I’m providing here, obviously, and it’s incredibly hard, but it can be done.
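    For the curious, that hash isn’t secret magic; it’s derived from your site URL. Roughly, paraphrased from WordPress core (the exact code lives in wp-includes/default-constants.php and varies by version), the names come together like this:

```php
<?php
// Rough paraphrase of how WordPress names its cookies; not the exact core code.
$siteurl = get_site_option( 'siteurl' );

if ( ! defined( 'COOKIEHASH' ) ) {
	define( 'COOKIEHASH', md5( $siteurl ) ); // the 'verylonghashkey' part
}

define( 'AUTH_COOKIE',      'wordpress_' . COOKIEHASH );
define( 'LOGGED_IN_COOKIE', 'wordpress_logged_in_' . COOKIEHASH );
```

    Which is also part of why copying the cookies from one machine to another ‘just works’: the names and contents are the same wherever the browser happens to live.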

    If they only rely on cookies that show up when your site is visited, they have to come up with a way to verify that it’s your site that put down the cookies and the visitor agreed to have the cookies put down. They have yet to explain how they’re going to be checking sites, which means you, as a site owner, still have no idea exactly what is and is not illegal to do. Sort of hard to protect yourself against an unexplained law, and it’s worse when you remember that “ignorance of the law is no excuse.” That should cut both ways. Ignorance in creating the law is no excuse.

    There’s already a way for users to stop cookies from being stored on their computers. Every browser out there has a way to turn off cookies. Most have a way to say ‘Don’t allow third-party cookies.’ If that’s not enough, Don’t Track Us has plugins for most browsers that block trackers.

    To make this work, the EU needs to explain how they’re going to determine if you’re in violation of the law, and how they will enforce it. They also need to take this to the streets and tell the third-party cookie makers to stop. There are ways third-party tools could work around this; one would be to tell people when they log in to Facebook, “We reserve the right to use your login credentials and other account information stored in cookies on other sites.” After all, the cookie belongs to Facebook! Or we could just not use cookies at all for that sort of thing. But that has to change at the source of the matter, the third party, and many of them don’t tell people that their cookies are used in such a way.

    As it stands, this law won’t be enforceable, it won’t be understandable, and it will cause more hassle for the wrong people without protecting anyone at all. It’s still a great idea, but it’s just not going to work this way. All they’ve done is make a law to tell people that their hot coffee is, indeed, hot.

  • Learning From Failure

    Learning From Failure

    The term criticality accident refers to an unintended, runaway nuclear chain reaction.  It lets loose a surge of radiation that can kill people.  This is what happened at Chernobyl, Three Mile Island, Fukushima, and many other places.  To date, twenty-two criticality accidents have occurred outside nuclear reactors (some resulting in deaths), but thus far, none have resulted in explosions.

    When we look at the death of Louis Slotin, we think ‘God, how did we not know that was dangerous?’  When we regard the Trinity Test, we think ‘How did we not know we were unleashing hell on earth?’  The fact is that we cannot see the future, and we cannot predict how far we will go.  Therefore we certainly cannot see when we are too far gone before, indeed, we have gone too far.  We cannot divine and magically know the unknown, though that doesn’t mean we’re in complete ignorance of the possibility.

    There is always a possibility that things can go terribly wrong, in the worst way possible. In a nuclear power plant (NPP), obviously something like a meltdown is up there at the top of the ‘worst outcome’ list. Did the scientists know there was a possibility this could happen? Of course they did. Did they know that it might leak into the ocean, pollute the land, and kill those 50 people who are still working at the plant, and who are all expected to die of radiation poisoning? Again, of course they did.

    Before any NPP is built, they go over the risks and mitigate them as best they can. They review the known risks, and solve as many as possible. But there will be a point where someone will correctly state “We’ve thought up ways to solve every problem we can come up with. Now we need a plan to handle situations where the unexpected arises.” Oh yes, they have a plan for this sort of thing too, but it’s probably really basic.

    I don’t work in NPPs; I work for a bank. We sit around and discuss things like ‘If the city of Chicago is destroyed tomorrow, how would we make sure that everyone can get to their money?’ Given that your money, like mine, is pretty much virtual and stored on computers, we do that via data integrity: make sure that our data is all safe, secure, and backed up in multiple places. We have multiple data centers across the state, protecting your money. What about the software? It’s written to talk to those data centers. How do we compensate if one of them vanishes? The problem with those meetings is that people want to know specifics. And I always point out, ‘Give me a specific situation example, and I will give you specific steps. But since every situation is different…’ Because the answer to ‘What do we do if our Chicago servers vanish?’ is ‘Route everything to this other location.’ See how that’s really basic?

    The problem with all this is that we can only plan for what we can imagine, and we can’t imagine past our abilities. Should we have seen the possibility of someone flying a plane into the World Trade Center? Of course! We should have always thought ‘Hey, this nice big skyscraper sure is an easy target for someone really pissed off!’ But the probability of that happening was so low we didn’t come up with plans for how to handle it. A criticality incident happens at the point when we realize what we should have known all along, but couldn’t possibly have known, because we are not omniscient. We are not perfect and we cannot know everything.

    In the case of a nuclear power plant, when all hell breaks loose, people die. Even today, we know that the radiation being leaked out is bad for us, for the environment, the water and the animals, but we don’t know how bad. We cannot possibly know. We can guess and infer and hypothesize. But we do not know. And the only way to know is to experiment. If that doesn’t scare the pants off you, realizing that all innovation comes from an experiment that could kill us all, well, then you’re probably not aware of the Large Hadron Collider and how we all joked about how it would open up a black hole and kill us all.

    Innovation takes risk. It takes huge risks. The people who take the risks, like Louis Slotin, know that things can happen. They know that irradiating themselves to kingdom come ends with a slow and painful death, and not becoming Dr. Manhattan. We won’t become Spider-Man or any sort of godlike superhero. We. Will. Die. And we know it. And we do not stop. We cannot stop. The only way to get better, to make it safer, is to keep trying and keep dying.

    Not to be too heavy handed, but with our code, it’s the same thing. We cannot see where ‘too far’ is in our code, where the danger lies, until it hits us in the face. We will destroy our programs over and over, we will crash our servers and infuriate our customers, but we will pick up the pieces and learn and make it better the next time. This is human nature; this is human spirit and endeavor. We cannot fear failure, even if it brings death. For most of us, the worst it can bring is being fired, and really, that’s not that common. I’ve found that if you step up and accept responsibility for your actions, you get chastised, warned, and you keep your job.

    When everything goes bad, it’s easy to point a finger and blame people. That’s what people do. They complain that the programmers suck and didn’t test enough, that the testers didn’t do their job, that everyone is terrible and did this just to piss them off. They rarely stop and ask ‘What did I do?’ They rarely say thank you, and they rarely learn from the experience of failure. Thankfully, our failures will not end in death. Money loss, certainly, and a great inconvenience to everything in your life, but you learn from this far better than you can learn from anything else.

    Learning from extreme failure is not easy. It’s hard to get past that initial moment of absolute terror. It’s harder still to train the end users (clients, readers, whatever) that this is okay. This is normal and it happens to everyone, everywhere, everything. But if we cannot learn from failure, we’ll never have the courage to create again.

    Get messy. Make mistakes.

  • Software Freedoms

    Software Freedoms

    Like a million other posts, I’m starting this with a warning: I Am Not A Lawyer.  Sure, my mom is, but that qualifies me for a cup of coffee, if I have the cash.  Personally, I support open data and open code because I think they make things better, but there are a lot of weird issues when you try to pair up software licenses, explain what ‘freedom’ means, and work out where it’s applicable. For the record, I am not getting into the ‘is a plugin/theme derivative software or not’ debate. I will wiggle my toe and point out that it is a point of contention.

    I’m presuming you are already familiar with the idea of what GPL is. If not, read the GPL FAQ.

    Why are WordPress and Drupal GPL anyway?

    The people who built WordPress took an existing app (b2) and forked it.  Forking happens when developers take a legally acquired copy of some code and make a new program out of it.  Of the myriad caveats in forking, you have to remember that the fork must be a legal copy of the code.  In order to create WordPress, Matt et al. were legally obligated to make WordPress GPL.  No one argues that.  The only way to change a license from GPL is to get everyone who has ever committed any code to the project to agree to this, and you know, that’s like saying you’re going to get everyone in your house to agree to what pizza to order.

    WordPress and Drupal are GPL because they must be.  There is no other option.

    So why is this a problem?

    GPL poses a problem because of interpretations of what ‘derivative works’ are.  It’s very clear cut that if you take or use WordPress’ or Drupal’s code, you are taking code built on GPL, which means you must keep your code GPL.  The definition of ‘code’ is a bit squidgy.  A generally accepted rule of thumb is that if your code can exist, 100%, without WordPress or Drupal’s support, then it’s not a derivative.  By their very nature, plugins and modules are seen as derivative.  Both Drupal and WordPress have long since stated that this is, indeed, the case.

    Themes, modules and plugins are GPL because they must be.  There is no other option.

    Except…

    Except there is.  Only the code that relies on the GPL code has to be GPL.  Your theme’s CSS and your images can actually be non-GPL (though WordPress won’t host your theme on their site if they aren’t).  Also, if you have code that lives on your own server, and people use a plugin to help the app talk to that code, only the code that sits on WordPress or Drupal has to be GPL.  Your server’s code?  No problem; it can be as proprietary as you want!  Akismet, a product made by Automattic (who ‘makes’ WordPress, in a really broad interpretation), works like this.  So does Google Analytics (most certainly not owned by WordPress), and there are many plugins to integrate WordPress and Google.  This is generally done via APIs (Application Programming Interfaces), and those are totally kosher to keep as proprietary as you want.
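    To make that split concrete, here’s a minimal sketch of the Akismet-style arrangement described above. The function name and the endpoint api.example.com are made up; wp_remote_post() and wp_remote_retrieve_body() are the standard WordPress HTTP API calls. The half that lives on WordPress inherits the GPL; whatever runs behind the endpoint is the owner’s business.

```php
<?php
// Sketch of an API-style plugin. Only this calling shim sits on top of
// WordPress, so only this shim has to be GPL.
function helf_check_comment_remotely( $comment_text ) {
	$response = wp_remote_post( 'https://api.example.com/check', array(
		'body' => array( 'comment' => $comment_text ),
	) );

	if ( is_wp_error( $response ) ) {
		return false; // Fail open if the proprietary service is unreachable.
	}

	// The code running at api.example.com never touches WordPress at all;
	// it can be as proprietary as its owner likes.
	return 'spam' === trim( wp_remote_retrieve_body( $response ) );
}
```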

    Themes, modules and plugins are GPL where they need to be, and proprietary (if you want) where they don’t.

    So what is GPL protecting?

    As we often carol, the GPL is about freedom.  And “free software” is a matter of liberty, not price: to understand the concept, you should think of “free” as in “free speech,” not as in “free beer.”  Freedom is a tetchy subject, misunderstood by most of us.  For example, freedom of speech does not mean you get to run around saying what you want wherever you want.  Free software is a matter of the users’ freedom to run, copy, distribute, study, change and improve the software.  This is pretty much the opposite of what you’re used to with iOS, Microsoft and Adobe.  Free software may still cost you money, but once you buy the software, you can do what you want with it.  Your freedom, as a user, is protected.

    WordPress’s adherence to GPL is for the user, not the developer.

    What’s so free about this anyway?

    The term ‘free’ is just a bad one to use in general. Remember, freedom of speech, as it’s so often used in inaccurate Internet debates, does not mean you can say whatever you want. ‘Free speech’ means ‘You have the right to say what you want, but I have the right to kick you out of MY house if I don’t like it.’ So what are these GPL freedoms anyway? In the GPL license you have four freedoms: (1) to run the software, (2) to have the source code, (3) to distribute the software, and (4) to distribute your modifications to the software. Really they should be ‘rights’ and not ‘freedoms’ if you want to nit-pick, and I tend to think of the freedom of source code as similar to data freedom. The freedoms of open-whatever are for the people who use the whatever, not those who come up with it.

    Software freedoms are for the user to become the developer.

    So if GPL is for the users, what protects the developer?

    Not much, and this is where people get pissed off.  If anyone can buy my software and give it away for free (or for pay), why would I even consider releasing something GPL?  The question, as Otto puts it, really should be ‘What exactly are you selling in the first place?’ What are we selling when we sell software?  I work on software for a living, and I never sell my code.  I’m hired to write it, certainly, and I do (not as often as I’d like).  Most of what I do is design.  It’s part math and part art.  My contract doesn’t allow me to keep ownership of my art, which sucks, but if I were a painter, I’d sell the painting and lose the ownership anyway, so what’s the difference?  That painting can get sold and resold millions of times for billions of dollars.  And most artists die starving.

    Software Freedom doesn’t stop people from being dicks (though they should).

    So what good is the GPL to the developer trying to make a buck?  

    It’s not.  But that’s not the point.  GPL isn’t about the guy who wrote the code; it’s about the guy who gets the code (again, legally) and says “You know, this is great, but it should make milkshakes too!” and writes that. GPL is all about the guy who uses the code and the next guy who takes the code and improves on it. If you have an open community where everyone has the privilege and right to use, view, share and edit the code, then you have the ability to let your code grow organically. If you want to watch some staid, tie-wearing, Dilbert PHB lose his mind, try explaining the shenanigans of Open Source development. “Develop at the pace of ingenuity” versus “Develop at the pace of your whining users.”

    Software Freedom isn’t about making money, it’s about making the next thing.

    Why would I want to use GPL?

    If you use WordPress, you use the GPL because you have to. I prefer the Apache licenses myself, but the purpose of using any software freedom license is, at its Communist best, a way to make software all around the world better for everyone. You stop people from reinventing the wheel if you show them how to make the axle in the first place! Did you know that Ford and Toyota independently came up with a way to make your brakes charge your hybrid battery? They later opened up and shared their tech with each other, only to find out how similar their designs already were! Just imagine how much faster we could have had the new technology if they’d collaborated earlier on. With an open-source/free license, my code is there for anyone to say “You know, this would work better…” And they have! And I’ve made my code better thanks to them.

    I use ‘free software’ open source licensing to make my software better.

  • Obfuscation Obtuseness

    Obfuscation Obtuseness

    I’m not getting into the ethics of free vs pay and theft or any of that. Comments on that matter will be deleted on sight.

    Earlier in the year I remarked on how right-click protection doesn’t work.   Right-click disabling is a form of obfuscation protection.   By hiding the normal methods to perform actions, you are ‘protected.’  You’re not.  I told you so then, and I tell you now.  If it’s on the internet, it can and will get ‘stolen.’

    Pretty recently I got into a ‘tiff’ with the guy who wrote a WordPress plugin to disable right click.  A user had a problem and bitched (generically).  I told him that the plugin worked well for what it was, but that disabling right click doesn’t really do much.  The plugin author accused me of trolling (incorrect; I was attempting to start a conversation with the user to sort out why he needed the plugin, and to then help determine a solution – which is my volunteer job).  Of course, that meant I wanted to see how this guy’s product worked.  I went and grabbed WP Protect and determined that you can block right-click, image drag and text selection.  He doesn’t block Ctrl-A, though, so I could easily select all and move on. (Actually you CAN block Ctrl-A, per Disable ctrl + n and other ctrl + key combinations in JavaScript, but remember that on a Mac it’s Apple-A, and you have to take all that into consideration.)
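    To show how shallow this kind of ‘protection’ is, here’s a rough sketch of the general approach (this is not WP Protect’s actual code): print a bit of JavaScript that swallows the obvious events, and nothing more.

```php
<?php
// Sketch of the obfuscation approach, NOT the actual WP Protect code:
// hook wp_head and print JavaScript that cancels the obvious events.
add_action( 'wp_head', 'helf_print_blocking_script' );

function helf_print_blocking_script() {
	?>
	<script>
	// Swallow right-click, text selection, and image dragging.
	document.addEventListener( 'contextmenu', function ( e ) { e.preventDefault(); } );
	document.addEventListener( 'selectstart', function ( e ) { e.preventDefault(); } );
	document.addEventListener( 'dragstart',   function ( e ) { e.preventDefault(); } );
	// Keyboard shortcuts, View Source, 'Save Page As', curl, and simply
	// turning JavaScript off all sail right past this.
	</script>
	<?php
}
```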

    My point remains valid: the technical code for doing these things is not complete, doesn’t work for all situations, and puts a burden on you.  And it doesn’t work!  If you print up a newspaper, it’s easy for people to copy your work.  We have a copy machine, a scanner, and scissors.  If you send out a DVD, we can rip it at home and pass it around to our friends (which is legal, actually, so long as you’re not selling it as your own work – see mix tapes, yo).  Why is stopping copying bad for your users?  If I want to send someone a link to your article, there are two things I want to do.  First, I copy your URL.  Second, I want to copy your title (and maybe an excerpt to illustrate a point).  By killing right click, you made it a royal pain to SHARE your work.  And if you’re online, you want people to share.  The same goes for DVDs, mix tapes, etc.  Sharing is how we tell people “I really like this!”  You’ve shot that down and will lose customers.

    Apropos of all this, the New York Times decided they needed to charge for media.   This makes sense, as they charge for a newspaper.  I have no objection to paying for media (traditional or not) at all.   What did The NYT do?  They put up a paywall.  You now have to pay to get in. Kind of.

    Come March 28, you’ll only be able to read 20 articles per month for free. After that, you’ll need a digital subscription, which costs $3.75 per week for Web and mobile phone access, $5 per week for Web and tablet access and $8.75 per week for access on the web, phones, tablets, TimesReader and the Chrome Web app. Print subscribers get all this stuff at no extra charge. (New York Times Paywall: A Small Change That Seems Big – By Jared Newman, PCWorld Mar 17, 2011 12:00 PM)

    And then there’s a catch.   If you go to their site via Google, Facebook or Twitter, it’s ‘free’.

    Essentially, the New York Times doesn’t want to charge you for its content. It wants to charge you for the delivery mechanism, whether it’s through the Website, the iPhone app, the tablet app or the TimesReader software. That’s the best approach, because content is abundant on the Internet. An elegant tablet app is worth more than the individual stories within.

    The problem for the Times, and the reason a lot of people should shrug off the paywall, is that people don’t necessarily need major media gatekeepers to provide the delivery mechanism. A recent study by Pew Internet found that 75 percent of people who find news online get it through e-mail or social networks. (Here’s a fitting anecdote: a friend alerted me to the Times’ paywall announcement by e-mail.) (New York Times Paywall: A Small Change That Seems Big – By Jared Newman, PCWorld Mar 17, 2011 12:00 PM)

    Not to mention that you can overcome the technical aspects of their wall pretty easily.  The Times built their paywall with some simple Javascript, which can be tweaked pretty fast.  And it cost them $40 million. (New Media Barbarians Breach New York Times Paywall in Hours – By Erik Sherman | March 22, 2011)

    What the NYT is learning pretty quickly is that hiding your content, or putting up barriers, isn’t effective.  Nor will their plan succeed, for the same reason we still have people out there who can crack your DVDs and DVD players.  If you build it, we can unbuild it and share it, and there you go.  If you don’t want people to read your articles for free, stop putting them online.  That’s it.  If you email me an article, I can email it to my friends (copy & paste), after all.

    I hate to say it, but once it’s out there, it’s done.  Hiding it doesn’t help, because you either chase off the people who can’t find it (and who would have paid) or you make the people who steal it smarter (and they wouldn’t have paid anyway).  We need a culture shift to make this all ‘profitable’ as well as something the end users are willing to pay for.  We’re not there yet.  I’m not entirely sure we’re all that different from where we were when I was in high school, though.  The only major change is how easy it is to find the information to steal.  That’s why I think the problem isn’t the technology, but the mentality.

    It’s hard to see a future where we can run a business like a newspaper, make it profitable, and convince people to pay for it.  Yes, part of this is our own fault for providing information for free all this time, but the other part is the abject denial that this was an issue until it was too late.  I certainly cannot advocate cable TV’s method of charging, but perhaps a restructuring of sites would work, so you can only see domain foo.com if your ISP pays a fee to foo.com (and you can access it by telling your ISP you want to ‘see’ foo.com as well).  Lifting the model wholesale strikes me as a terrible idea, but the idea of charging the provider, not the end user, makes some sense.

    After all, basic network TV is ‘free’ in the US, isn’t it? Well, that’s another post.  The point here is that the methods we’re using now to stop ‘theft’ aren’t working.  So how do you protect your intellectual property and attract readers?

  • WordPress MultiSite – New Dashboards

    WordPress MultiSite – New Dashboards

    Back in the WordPress MU and the recent WordPress Multisite 3.0.x days, we had something called a ‘Dashboard Blog.’ This was the ‘main’ site of your install, and ostensibly was the default blog to sign users up to and control them from. This was also where you, the admin, had the Super Admin menu. So what were those things for and why were they moved? After all, a lot of people will tell you they worked just fine.

    The simplest answer is that it’s considered good design to separate the ‘user’ interface from the ‘admin’ interface. That’s why, when a regular user with the lowest role possible logs in to a regular (non-MultiSite) WordPress install, they see a very limited site. They see a dashboard, their profile, and that’s it. You want to keep the subscribers out of your meat and potatoes. Pursuant to that, there are plugins like WP Hide Dashboard that kick users to just their profile. I love that plugin, because it hides the man behind the curtain. If the Dashboard of WordPress is not a part of your desired experience (and really, it only is for the people who run the site), then you keep Dorothy, Toto, the Scarecrow, the Tin Man and the Cowardly Lion out, Ruby Slippers or not.

    When WordPress 3.0 came out, it was a bit of a chimera. We’ve got all sorts of weird parts where we call things blogs instead of sites, and from the back end, it’s really confusing. The sad thing is we cannot declare fiat, fix it all, and move on, because that would break backwards compatibility. Did you know WordPress is backwards compatible, nearly all the way to the start of WordPress 1? (17 Reasons WordPress is a Better CMS than Drupal – Mike Schinkel, Dec 1st, 2010) In order to be able to upgrade from WordPress MU (which was a fork – i.e. a totally separate version – of WordPress), the fold-in of MU to regular WordPress was a lot of work and duplication. There are some things I’m sure the devs would have chosen to do differently in a perfect world, but they decided the headache for them was worth it because it was beneficial to the users. For that alone, I laud them and owe them beers and coffee.

    One of the many drawbacks of that mentality is that the users are very much used to getting what they ‘want.’ The users think ‘This worked before, it will always work, therefore it’s cool to do it now.’ Take (not a random example) the issue with the /blog/ folder in the main site of any subfolder install. (Switching to WordPress MultiSite Breaks Links – Mika Epstein, 14 July, 2010) Back in the 3.0 days, we had a work-around to fix this, but that was a ‘bug.’ We were all taking advantage of a flaw in the system, and that flaw was plugged (mostly) in 3.1. Of course, fixing the flaw meant breaking things, and those people who were not up to speed on the dev channels (which in this instance included me) went ‘Hey, what the hell!?’ We were angry, we were upset, and then Ron told me that it was a bug and I backed down.

    A lot of people are still annoyed by this, and while there is still a buggy workaround, it’s not something I would generally suggest be used for my clients (myself, yes). Then again, the original tweak wasn’t something I considered using for clients, since I was always aware that WordPress’s stated intent was to make that /blog/ slug customizable. And I hope they do.

    What does this have to do with the new dashboards? It’s another change WordPress implemented to ‘fix’ things people didn’t see as broken. The people are wrong.

    Now don’t get all het up, thinking I’m drinking the WordPress Kool-Aid. There’s a vast difference between believing ‘WordPress is always right, WordPress can do no wrong’ and accepting that what WordPress did was done for a good, understandable reason. In software development, I’ve learned to distance myself from the all-too-personal feelings of investment in my product. Many times, the product needs to be designed in a certain way to work better for the majority of people, and many times, I am not that person. Look at JetPack. This is a fantastic plugin for people moving off WordPress.com and onto self-hosted WordPress. It has absolutely no meaning to me, and I won’t be using it. But it’s great for the target audience. I accept that I am not that audience, and I look at the product with as unbiased an eye as is possible.

    I have to look at the Network Admin and User Dashboard the same way.

    The Network Admin was moved from a Super-Admin sidebar menu to its own section, in order to provide a clearer delineation between the Site Admin (in charge of one site) and the Network Admin (in charge of all sites). (Network Admin – Trac Ticket) (Network Admin – WordPress MustUse Tutorials, October 21, 2010) This is a basic, normal, everyday bit of separation in my work life. For one app I use, I even have a totally separate ‘Admin App’ to use when I want to control the whole network, versus just one part of it. It’s done for security, but also to kick our brains over and go ‘Hey, moron, you’re in the Network Admin section!’ Our brains need that kick, and it lessens the human errors. In doing this, we also found that plugin management was separated. Per-site admins see only the non-network-activated plugins. The Network Admin has to go to the Network Admin section to see the network-activated plugins and the must-use plugins, though many plugins needed to be recoded to handle this move, as sketched below. (Adding a menu to the new network admin – WordPress Must Use Tutorials, November 30, 2010) While this is annoying and takes a little time to get used to, this is good, sound UI/UX. It’s called “Separation of Duties” in the buzzwords game, and it’s really a blessing.
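    For plugin authors, the practical bit of that recoding is small but real: menus meant for the whole network now hang off the Network Admin’s own hooks. A rough sketch, using the standard network_admin_menu hook (the page itself is hypothetical):

```php
<?php
// Sketch: register a settings page in the Network Admin rather than the
// per-site admin. 'network_admin_menu' fires only for the network screens.
add_action( 'network_admin_menu', 'helf_add_network_page' );

function helf_add_network_page() {
	add_menu_page(
		'My Network Settings',     // page title
		'My Network Settings',     // menu title
		'manage_network_options',  // a capability only network admins have
		'helf-network-settings',   // menu slug (hypothetical)
		'helf_render_network_page' // callback that prints the page
	);
}

function helf_render_network_page() {
	echo '<div class="wrap"><h2>My Network Settings</h2></div>';
}
```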

    Once they moved the Network Admin, the devs took a shot at getting rid of the Dashboard Blog. (Personal Dashboard – trac ticket) Once you’ve moved the super admins off to their own section, there’s no need to sign users up to a main blog. I assume this was originally done because you had to hook them in somewhere with 3.0, to make them a ‘user.’ Well, now WordPress.org Multisite behaves like WordPress.com. You sign up for a blog, but unless you get assigned a role on the blog, you’re not a ‘member’ of the blog. And you know… that’s sensible. You have no real role as a pseudo-subscriber. Nor do you need one.

    As I pointed out, part of the goal with moving the menus to Network Admin is that the whole ‘Dashboard Blog’ concept was a massive annoyance to everyone code-wise and UI wise. Having to say “Oh yeah, the main site is the master site and it’s where I control the universe” is logistically unsound. Much like you cannot in-line edit posts, you should not be mixing up Admin and User areas. So to further that separation, your users are not assigned to any site when they register. I find I need to repeat, a lot, that in most cases, this has no effect on usability. It doesn’t affect my BuddyPress site at all, because the users are the users. They just don’t have blog access. They can comment, which is all they need to do for me, and they’re happy. If they need to make posts, I can add them if I want to. But now I have security, knowing they can’t accidentally get in and poke around.

    Like it or not, it’s not going away. And most of us won’t need it to come back. I do know that some people do need it, and are struggling to find a way to auto-assign users a role on their main site at ID creation, so if you know of a fix for 3.1, please share it!
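    I can’t promise this is the fix, but one possible starting point is to hook user creation and add the new account to the main site yourself; wpmu_new_user and add_user_to_blog() are both standard multisite pieces, and the blog ID and role here are whatever your setup calls for:

```php
<?php
// Sketch: when multisite creates a new user, also give them a role on the
// main site (blog ID 1 here; adjust for your network).
add_action( 'wpmu_new_user', 'helf_assign_new_user_to_main_site' );

function helf_assign_new_user_to_main_site( $user_id ) {
	add_user_to_blog( 1, $user_id, 'subscriber' );
}
```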

  • Do Pretty URLs Still Matter?

    Do Pretty URLs Still Matter?

    With all the kerfuffle about the new Lifehacker (and the 2.0.1 redesign), one thing stood out for me as the most interesting fallout: a rush of articles about the dreaded Hashbang.

    Unix heads can skip this paragraph. For the rest of the world, a hashbang (aka a shebang) is the character sequence consisting of the number sign and an exclamation point: #!. You see this a lot at the beginning of scripts, where it’s used to define which ‘interpreter’ you’re using. Basically, you’re telling Unix which program to use to parse the rest of the script. It’s more complicated than that, but this is not the place for goobley tech speak.
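    If you’ve never seen one, it’s just the first line of a script. A PHP command-line script, for instance, can start like this:

```php
#!/usr/bin/env php
<?php
// The '#!' line above is the shebang: it tells the shell to hand the rest
// of this file to whatever 'php' binary is on the PATH. The PHP CLI knows
// to skip that first line when it runs the script.
echo "Hello from a shebang-launched script\n";
```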

    A hashbang used in a web URL is also an interpreter, but of a different sort. The hashbang is a workaround used to get URLs generated by JavaScript and AJAX picked up by Google. Great, what does that mean? The # tells the browser that the rest of the URL is an ‘identifier’: everything after the # refers to a part of the current page, and it’s left for the page’s code to interpret. So in the case of Lifehacker, URLs went from http://lifehacker.com/5753509/hello-world-this-is-the-new-lifehacker to http://lifehacker.com/#!5753509/hello-world-this-is-the-new-lifehacker.
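    The catch, and it matters for everything that follows, is that the browser never sends the fragment to the server at all. You can see the split with nothing fancier than parse_url():

```php
<?php
// The part after '#' exists only in the browser; the server never sees it.
$url = 'http://lifehacker.com/#!5753509/hello-world-this-is-the-new-lifehacker';
var_dump( parse_url( $url ) );
// 'path'     => '/'
// 'fragment' => '!5753509/hello-world-this-is-the-new-lifehacker'
// The actual HTTP request is just "GET /"; everything else is left for the
// page's JavaScript to figure out.
```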

    Most people missed the change and concentrated on the new layout. The #! in there manages to pull up the right page, so why should we worry? After all, AJAX is the ‘future’ of the web. AJAX, by the way, is not the dim-witted but incredibly strong Greek hero. AJAX is shorthand for Asynchronous JavaScript and XML, though it has very little to do with XML. Instead it’s used to create interactive web apps. It’s exceptionally powerful, but a little dim-witted. Oh, okay, maybe my Iliad joke wasn’t so far off after all!

    AJAX has a whole lot of problems, the most important (to me) being that it’s not easily read by screen readers, which means your blind visitors get the shaft a lot from poorly written AJAX sites. You also can’t use the back button most of the time, nor are these sites easily read by web crawler bots, which means that you don’t end up in Google results. Your page isn’t ‘real’, therefore it’s not followed. Of course, the folks at Google recognized this problem and came up with a weird solution: the Hashbang! That’s right, Google introduced it as a way to get sites dependent on AJAX into their results. (A proposal for making AJAX crawlable – Google Webmaster Central, Wednesday, October 07, 2009 at 10:51 AM)
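    The mechanics of the proposal, roughly: when the crawler meets a #! URL, it re-requests the page with the fragment folded into an _escaped_fragment_ query string, and the site is expected to answer that request with a plain HTML snapshot. A sketch of the server side (render_html_snapshot() is a made-up stand-in for however you would build that snapshot):

```php
<?php
// Sketch of the server half of the 'AJAX crawlable' scheme. When Googlebot
// sees http://example.com/#!5753509/hello-world, it asks the server for
// http://example.com/?_escaped_fragment_=5753509/hello-world instead.
if ( isset( $_GET['_escaped_fragment_'] ) ) {
	$state = $_GET['_escaped_fragment_'];   // e.g. "5753509/hello-world"
	echo render_html_snapshot( $state );    // hypothetical snapshot renderer
	exit;
}
// Ordinary visitors fall through to the normal JavaScript-driven page.
```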

    Sounds great, though rather complicated and a little kludgey. Now you can have the best of both worlds. There are still drawbacks, of course, and I hear this refrain over and over:

    The main problem is that LifeHacker URLs now don’t map to actual content. (Breaking the Web with hash-bangs – Mike Davies (Isolani.co.uk) Tuesday, February 08, 2011)

    This initially perplexed me. After all, WordPress (which runs this site) is really a single-page site (index.php) with a million identifiers, and it’s not like there’s an actual, physical page on my server called do-pretty-urls-still-matter. What’s the difference? It turns out it’s all in how the URL is generated. The hashbang method is prone to error, as Lifehacker already found out: all you have to do is kill JavaScript and the entire site goes down. That I knew, but as I sat down to write this I got myself in a bind. After all, in order for my pretty URLs here to work, I need .htaccess (which means my httpd.conf stuff is all in a row), PHP and SQL. Of course, all that is server side, and the user only needs the most basic of browsers to get to my site. But how were those failure points any less than a JavaScript-induced set?

    I appealed to Twitter and got a couple fast responses from @JohnPBloch:

    JohnPBloch: The difference is that one (js) depends on millions of different configurations working correctly; the other depends only on one […] for example, I’ve had chrome extensions that crashed JS on non-jQuery pages before.(JohnPBloch) (JohnPBloch)

    Thanks to John, my brain kicked back into gear. There is no browser dependency in my URLs. The only way for them to fail is if I screw up the backend (that could be my site setup or my server). Either way, the critical failure will always be mine, and never my user’s. That’s as it should be. We’re so often told that our customers are always right. As a corollary, your customers should never be at fault for a problem (as long as they’re using the products as intended). There are good reasons to use AJAX. AJAX and JavaScript allow for a responsive and truly interactive experience for your users, in situations where small parts of the interface are changing (think Gmail, Flickr and Google Maps). Gmail, actually, would be the perfect example. That’s something where I neither want nor need a ‘pretty’ URL past http://gmail.com, because I’ll never share the URL.

    The rest of the time, though, URLs are important. If I want someone to visit my site, I tell them ‘Go to https://halfelf.org’ and they know what to do. Back in the Trojan War days, you’d hear “Login to AOL, keyword ‘Ipstenu’.” instead of a URL, and we’ve moved away from that to allowing people to claim their own domains and their own presence online. We shouldn’t be reliant on AOL, Google, or anything else to generate our representation. So, regardless of the problems with the hashbang (and there are a lot), your URL is important.

    We all know the difference between a ‘pretty’ URL and an ugly one on sight. https://halfelf.org/?p=1358 is ugly and https://halfelf.org/2011/do-pretty-urls-still-matter is pretty! Simple. Lifehacker (and Twitter, for that matter) have always had ‘kind of’ pretty URLs, with a weird number string in there, but enough to make you go “Okay, this Lifehacker post is about monkeys” or “This tweet is by Ipstenu.” With the change to the hashbang, both sites (temporarily) broke the cardinal rule of websites: never change your URLs. By now, both sites will redirect a hashbangless URL to a hashbanged one, which is as it should be. You never get rid of your old URLs, which is why, on older sites, you have a really freakish .htaccess with miles and miles of regexp. AJAX makes you jump through hoops for pretty URLs.

    But that really begs the question of whether pretty URLs actually matter anymore, or whether this is just me being an old stick in the mud. Much like Google’s minor issues with AJAX/JavaScript, they have minor issues with dynamic URLs. Quickie explanation: static URLs are pretty, dynamic ones aren’t. Basically, Google can and does crawl dynamic URLs, but static ones are preferred. (Dynamic URLs vs. static URLs – Google Webmaster Tools, Monday, September 22, 2008 at 3:20 PM) On the other hand, we know that shorter is better, and you can’t get much shorter than https://halfelf.org/?p=1358 when you get down to it.

    I would posit that, since the web is based on look and feel, the design of your site still relies, in part, on how easily someone can understand the URL.