We all know that SEO is ‘Search Engine Optimization.’ I humbly suggest we pay better attention to HEO – Human Experience Optimization.
After you spend hours and hours optimizing your site for search engines, you should sit back and think about the humans who are actually reading it. This should be blindingly obvious to everyone, but more and more we hear about how you should make your URLs SEO friendly, or tune your post excerpts, slugs, formats, and meta data to rank highly in Google. At a certain point, you’re missing the goal of a website.
A website is not for search engines, a website is for humans.
Humans like to be able to find what they want relatively painlessly. They like to know when something was written (or when whatever it’s about took place). They like to be able to search, sort, surf and select. They like to know weird things. It’s your job to make sure that when a user hits your site, they stay.
Fonts
I’ve mentioned before that font choices matter on your site. Perhaps the most important thing to remember about fonts is that people have to be able to read them. A lot of sites make their fonts very small, which forces viewers to hit Ctrl-+. This is one of Jakob Nielsen’s pet peeves. Users should be able to control their font size, but you should also set your starting font size to something legible.
Imagine my surprise when I went to a site and saw this:

I had to zoom in to read it. That font is set to font: 11px/13px "Lucida Grande"… Just by changing it to 12px/20px it was easier to read, but to make it a perfect starting point, it should really be 14px/20px. You’ll need to balance your font choice against the size, though, as too-thick and too-thin fonts are equally painful for people to read.
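For the record, the fix is tiny. Here’s a sketch of what I mean in CSS; the body selector and the fallback fonts are just placeholders for whatever your own theme uses:

    body {
        /* was: font: 11px/13px "Lucida Grande" - too small for comfortable reading */
        font: 14px/20px "Lucida Grande", Helvetica, Arial, sans-serif;
    }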
Colors
I’m in my mid-thirties with the worst vision you’ll find before someone gets classified as legally blind (that said, I have fantastic night vision). I cannot read white text on a black background for more than a few seconds without getting after-images, and I’m not in the minority here. There’s a reason books, eReaders, newspapers and magazines tend to print dark text on light backgrounds, and it’s not just the cost: more people can read that setup. On top of that, don’t use background images. The busier the background, the harder the text is to read, and you’ll draw attention away from it.
The colors on your site need to be easy to read and shouldn’t strain the eyes.
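If you want a concrete starting point, something along these lines works; the exact hex values here are just an example, not gospel:

    body {
        color: #333333;            /* dark grey text, softer than pure black */
        background-color: #fdfdfd; /* plain light background, no busy image */
    }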
Layout
Did you know that users tend to read to the left? This sort of flow makes sense when you consider that most languages are read left to right. Jakob Nielsen points out that people spend “more than twice as much time looking at the left side of the page as they did the right.” (Jakob Nielsen’s Alertbox, April 6, 2010: Horizontal Attention Leans Left) Not only that, but people actually tend to read pages in a pretty distinct F-shaped pattern. (Jakob Nielsen’s Alertbox, April 17, 2006: F-Shaped Pattern For Reading Web Content)
So how do you best lay out your website? I tend to think people read content better if it’s on the left, so I put the body of my text on the left and the sidebars on the right. I also take into account that newspapers and magazines break text into columns for readability, so I set a fixed width for my site. That choice is somewhat controversial among my friends, but I like to point at the iPad and Kindle as examples of why you don’t want forever-wide pages. Monitors are big and browser windows can be huge, but human eyes are spaced a certain way, and making your page’s content too wide is a strain on your readers.
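As a rough CSS sketch of what that layout looks like (the IDs and the pixel widths are arbitrary examples, not magic numbers):

    #wrapper { width: 960px; margin: 0 auto; } /* fixed width, centered in the browser */
    #content { float: left; width: 620px; }    /* body text reads down the left */
    #sidebar { float: right; width: 300px; }   /* sidebar sits on the right */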
Page Length
There used to be a concept of ‘the fold,’ which was basically that people didn’t scroll down on webpages in the early days of the web, so if they didn’t see your important content on the top half of your page (i.e. above the fold), they weren’t going to see it at all. It’s 2011. People know to scroll down a page. (Jakob Nielsen’s Alertbox, March 22, 2010: Scrolling and Attention) But you still need to make sure your site has the most important content ‘above’ the fold.
Where’s the fold these days, though? Monitor size is a lot more variable today than it was in 1995, and the break point on a page is getting pretty difficult to figure out. Unlike a newspaper, where the fold is pretty obvious (unless you’re the Chicago Sun-Times), you have to take a pretty good guess at where the ‘top’ of your site is. Oddly, this is a lot easier with the iPad, which is currently my benchmark for ‘the fold.’
Keeping that in mind, page length matters! I try to keep each post to no more than 1,200 words, because of human attention spans. If I run longer, I’ll consider breaking the post into multiple parts.
Permalinks/URLs
Samuel Wood (aka Otto) said it simply:
Humans care about dates. Leaving a date identifier (like the year) out of the URL is actually de-optimizing the site for humans.
Not everything should have a date, mind you. Resources like Wikipedia, or other sites that act as repositories for static, timeless material (like a book), certainly don’t need date stamps. Deciding whether your site needs to include the year in the URL (like I do here) or not at all (like I do elsewhere) is something you need to think long and hard about. If you’re making a ‘traditional’ blog, or a newspaper, or some site that acts as a repository for time-based information, the answer is simple: yes, you do.
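In WordPress terms, that decision is just the permalink structure you pick under Settings → Permalinks. Roughly speaking (the exact tags you combine are up to you):

    /%year%/%postname%/   (dated - what I use here, right for blogs and news)
    /%postname%/          (undated - right for timeless, reference-style material)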
In addition to sorting out whether you need dates on your site, you have to think about the post format. I’m a huge proponent of pretty URLs, so I tend to lean toward custom-crafted ones. On WordPress, I always review the permalink and, if I think it could be shorter and better, I change it. MediaWiki defaults to whatever you name the page and uses that as your page title (oddly, you can only override this with {{DISPLAYTITLE:Custom title}}, which has weird results in searches), but WordPress takes the ‘title’ of your post and makes that your page title.

This is pretty easy to change, though. Just click on edit and make it shorter (which I strongly suggest you do in most cases).
What else?
I could go on and on. Don’t use too many ads (and whatever ads you do use shouldn’t be bigger than your post content!), don’t use flashing images or text, and keep your audience in mind! What are your hot-button topics for making your site human friendly?



I get a lot of requests from people to link to their sites. Back in the day, we all used to have massive link pages where we just listed all the cool sites we knew about. On a fansite, I actually still have one where I list all the related sites, organized by how they’re related, separated by language, and so on. Here, though, you see a list of links on the right, broken down into “Websites” and “WordPress,” and that’s pretty much it.
The Internet is the exact same way: space on those lists is limited. So when you cold-email someone and say ‘Hi, I really like your stuff! Will you link to my site?’ you need to bring your A-game. You need to sell your work, explain why you’re worth space on my site, and why I should read your blog. Just saying ‘I, too, am a blog about vegan dog food!’ doesn’t cut it for the bigger sites. You can’t expect people to spend all their time checking out sites they should link to, especially if you’re not already linking to them. Think of it like writing a good cover letter for your resume. You want people to read that page and go “Yeah, this cat is cool!”
Back in the day, search engines would rate your site based solely on your self-contained content. One of the ways we would promote our sites was to use hidden text or meta keywords that only the search engine would see. We’d list all the keywords related to our site about dog biscuits and, awesomely, we’d get rewarded. Naturally, some people would shove totally irrelevant keywords in to game the system for other searches, which is why sometimes you’d search for ‘free range catnip’ and get a link to ‘wetriffs.com.’
Of course, there are good backlinks, like mine to Yoast’s (not that he needs the ‘link juice’; that’s the term we use for the ‘value’ of a link coming back to a site. If I link to you, I give you ‘juice,’ which boosts your page rank. In Yoast’s case, he doesn’t need any help, but I give it anyway). But the best way to get those is to get yourself known in your arena. People don’t link to new sites because they don’t know about them, so you need to get out there and get known. Talk to a site you admire (or people you admire) and ask if they’ll read and review your site. Post your articles on Twitter/Facebook/Digg/whatever and basically put in the sweat equity to make your site shine. And if that sounded like a lot of work, you’re right. It is work. It’s hard work.
Speeding your site up (minification, a CDN, and so on) is a great thing, but at the end of the day, all the advice in the world boils down to this: if there’s nothing here for people to read and find beneficial, your site is useless.
With all the kerfuffle about the hashbang lately, it’s worth talking about AJAX itself.
AJAX has a whole lot of problems, the most important (to me) being that it’s not easily read by screen readers, which means your blind visitors get the shaft a lot on poorly written AJAX sites. You also can’t use the back button most of the time, nor are these sites easily read by web crawler bots, which means you don’t end up in Google results. Your page isn’t ‘real,’ therefore it’s not followed. Of course, the folks at Google recognized this problem and came up with a weird solution: the hashbang! That’s right, Google introduced it as a way to get sites dependent on AJAX into their results.
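If you haven’t seen the hashbang scheme in action, the idea is that the crawler swaps the #! fragment for a query parameter it can actually fetch. Something like this, with example.com standing in for a real AJAX-driven site:

    What the human’s browser shows:  http://example.com/#!/about
    What the Google crawler fetches: http://example.com/?_escaped_fragment_=/about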
But that raises the question of whether pretty URLs actually matter anymore, or is this just me being an old stick in the mud? Much like its minor issues with AJAX/JavaScript, Google has minor issues with dynamic URLs. Quickie explanation: static URLs are pretty, dynamic ones aren’t. Basically, Google can and does crawl dynamic URLs, but static ones are preferred.
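To make that quickie explanation concrete, here’s the same hypothetical post both ways:

    Dynamic: http://example.com/index.php?p=123&cat=7
    Static:  http://example.com/2011/03/pretty-urls-matter/

Both can end up in Google, but the second one tells a human (and a crawler) what the page is about before anyone ever clicks it.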
Now that you know all about the URL side of things, let’s talk about content farms.
“Content farms” are the wave of the future, and Google calls them sites with “shallow or low-quality content.” The definition is vague, but it basically means a content farm is a website that trolls the internet, takes good data from other sites, and reproduces it on its own. Most content farms provide automatically inserted data; there is no one behind the scenes manually scanning the internet for related topics and copy/pasting them into the site. Instead, it’s all done via software known as content scrapers. The reasons why they do this I’ll get to in a minute, but I think Google’s statement that they’re going to spend 2011 burning down the content farms is what’s got people worried about duplicate content again.
What Google is doing is not only laudable, but necessary. They are adapting to the change in how spam is delivered, and doing so in a way that should not impact your site. The only way I can see this affecting ‘innocent’ sites is blogs that use RSS feed scrapers to populate their content. This is why, any time someone asks me how to do that, I either tell them not to or don’t answer at all. While I certainly use other news articles to populate my site, I do so by quoting them and crafting my own individual posts. In that manner I both express my own creativity and promote the high quality of my own site. I make my site better. And that is the only way to get your site well ranked. Yes, it is work, and yes, it is time consuming. Anything worth doing is going to take you time, and the sooner you accept that, the happier you will be.

I am not an SEO expert. In fact, there are only a handful of people who I feel can claim that title without making me roll my eyes so hard I get a migraine. Anyone who tells you they have all the answers to getting your site listed well in Google is a liar, because there’s only one good answer: make a good site. That’s really it. How, then, do all those spam sites get listed in Google, Bing and Yahoo to begin with, and will the techniques the search engines are using to eradicate those sites hurt you?
You write a blog post and the content is stored in the database, along with any tags, categories, or meta data you put in. When someone goes directly to the blog post, they see it. However, they can also see the post if they go to a list of posts in that category, with that tag, on that date, in that year, and so on and so forth. So the question a lot of new webfolks ask is, “Is that duplicate content?” No. It’s not. Nor is having more than one URL on ipstenu.org point to the same page; in fact, that’s good for your site. The more valid ways you have of providing your users with information, the easier it is for them to find what they want, and the happier they are. Happy users mean repeat users, which means profit (in that oh-so-nebulous “web = profit” theory).
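To put that in concrete terms, a single post on a typical WordPress site might be reachable from all of these (hypothetical URLs; the exact paths depend on your permalink and archive settings):

    http://example.com/2011/02/dog-biscuits/   (the post itself)
    http://example.com/category/recipes/       (the category archive)
    http://example.com/tag/dogs/               (the tag archive)
    http://example.com/2011/02/                (the monthly archive)

That’s not duplication in the malicious sense; it’s just multiple doors into the same room.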
Google also claims that since CMSs generally don’t handle duplicate content ‘well’ (their word, not mine), non-malicious duplication is common and fairly harmless, though it will affect search results. Here’s where things get sticky. Personally, I disagree with Google’s claim that CMSs handle duplicate content poorly. A well-written CMS knows that no two people think the same way and takes that into consideration when crafting a site. You want an index, but if you know someone looks for things by subject matter or year, you need a way to provide that information to the reader. Google’s problem is that in doing so, you have also provided it to the GoogleBots who patrol your site and pull in the data for searches, which creates the dreaded duplicate content.