Half-Elf on Tech

Thoughts From a Professional Lesbian

Tag: design

  • Rant: Lightboxes and Scrolling

    Lately I’ve run into a problem where a tool I’m using has a lightbox that’s cut off at the bottom of the screen.

    This generally happens because I have my browser at half-height, or because I’ve pressed the ‘increase font’ keys to make a site 110% text for readability (WordPress P2, I’m looking at you). Here’s what life can look like for me:

    [Image: Dropbox doesn’t scroll with lightboxes]

    This image shows a lightbox cut off midway. The bottom of the picture shows the bottom edge of my browser. Obviously I could make my browser window larger, and most of the time this is what I do. But should I have to? There are situations where I can’t do that, like on mobile. Whoever decided that overlays on mobile were a good idea needs to have their favorite sweaters eaten by moths.

    Make the screen scrollable. Make sure your lightbox doesn’t get totally jacked up when a screen is ‘too small.’ A major part of responsive design is not just making sure your site works on mobile devices and ‘full-blown’ computers, but making sure it works on every size of monitor when the browser isn’t maxed out.
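
    If the overlay itself can’t shrink, the least it can do is scroll. Here’s a minimal sketch (the class name is made up, not Dropbox’s actual markup) that caps the box at the viewport and lets it scroll internally instead of getting cut off:

    /* Illustrative only: keep the lightbox inside the viewport and let it scroll. */
    .lightbox {
      position: fixed;
      top: 50%;
      left: 50%;
      transform: translate(-50%, -50%);
      max-height: 90vh;   /* never taller than the window */
      max-width: 90vw;    /* never wider, either */
      overflow-y: auto;   /* scroll inside the box when the content is taller */
      -webkit-overflow-scrolling: touch; /* smoother scrolling on iOS */
    }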

    Here’s another example:

    [Image: Another example of Dropbox not accounting for screen limitations]

    In that image I have what I can only presume is an advertisement that wants me to click something. It’s not until I change my zoom to 75% that I can see this is an ad for Dropbox Business:

    [Image: I can only see the ad for Dropbox Business if I reduce font to 75%]

    Oh, and Dropbox was totally unhelpful when I reported these things, which is why they’re getting shamed. They told me to fix my screen size.

    When a similar issue happened with Slack, I got an apology and a promise to address it. Which they did within a week.

  • Jekyll Table of Contents

    One of the cool things about MediaWiki, and about the only thing I missed, was the built-in table of contents display. With Jekyll, since it generates a static site, you have to tell it ‘When you build this page, include a table of contents.’

    And I found a brilliant Jekyll plugin, Jekyll ToC Generator, that added a beautiful jQuery-based table of contents with a click-back-to-top feature. But there was a problem. When I installed it and ran a build to test it, my Jekyll site took nine times as long to build.

    In order to generate a Jekyll site, you run the command jekyll build and, on Jekyll 3.0, that gives you output like “done in 8.493 seconds.” Now, if I do a full build on a site I’ve cleaned (there’s a clean command you can run to scrub the generated site and ensure you get a clean rebuild), it generally takes about a minute. That’s to build a thousand files with a lot of weird trickery. If I’m just rebuilding the changed files, it takes about 10 seconds. Much more reasonable.
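
    For reference, those commands look like this (the profiler flag is a newer addition, so check jekyll build --help for your version):

    jekyll clean            # scrub the generated site and caches
    jekyll build            # full rebuild: about a minute on this site
    jekyll build --profile  # newer releases can also print a Liquid render profile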

    With the ToC Plugin, it took 98 to 100 seconds, every single time. Right away, I knew why. I had included a plugin that had to check every single page on the site on that rebuild, see if it needed a table of contents, and then build the page. Of course that took a long time!

    I’m always talking about needs and wants when I work on websites. It’s a basic principle my high school drilled into me. Understand what you need and how it’s different from your wants. Don’t compromise on needs. Well, I knew that I didn’t need a table of contents, not on every page, so it clearly had to be a ‘mostly want.’

    By contrast, having my site build quickly was a need. A fast build ensured less overhead, less weight, and less time spent. Time is a massive factor in websites. The rendered site has to be fast for users; this much is obvious to everyone. But having your build be fast means you, the site maintainer, spend less time on the parts that don’t make the site better, freeing you to develop and write.

    Also, speed is something Jekyll wants to work on. The difference in build time between the 2.3 version of Jekyll and the 3.0-pre beta is incredible. In 2013, a site with “362 articles with 660 words in average” took around 10 minutes for a full build. I have double the articles, about the same average word count, and it’s a minute for a full build. It’s faster on my faster laptop (duh).

    The decision tree for Jekyll on build is more obvious than the same one for WordPress (or MediaWiki) on render. The basic concept is simple: the more complex your site, the longer it takes to generate pages. For WordPress (or MediaWiki, or Drupal, or Joomla, etc. etc.), the render happens when someone visits your site. For Jekyll and other static site generators, the render happens when you build the site. That means with Jekyll I see the cost right away, before I get close to deployment, which means I make my decisions well before the ‘stage’ step of my deployment process.

    What’s more important? The complexities that make your site personal or a fast build?

    Here’s an example from a Jekyll discussion on the matter. Someone had the following code in the templates, which made the date output rather pretty:

    <span>{% assign d = page.date | date: "%d" | plus:'0' %}
        {{ page.date | date: "%B" }} 
        {% case d %}
            {% when 1 or 21 or 31 %}{{ d }}st
            {% when 2 or 22 %}{{ d }}nd
            {% when 3 or 23 %}{{ d }}rd
            {% else %}{{ d }}th
        {% endcase %}, 
        {{ page.date | date: "%Y" }}
    </span>
    

    But that had to run on every page build, which naturally was going to make a site slower. One of the ways Jekyll has improved this is by introducing incremental updates: only update the pages that need updating. That is a big “baboom!” moment, and it let me run the plugin jekyll-last-modified-at (which spits out a last-modified date on pages) without any performance hit except on the clean build. Since that only gets called when a page is built, and a page is only built when it changes, it’s a massive improvement for me.
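
    Incremental regeneration is opt-in on Jekyll 3: either the --incremental flag on the command line or a one-line config setting. It was still flagged experimental at the time, so treat this as a sketch:

    # _config.yml: only regenerate files whose sources have changed (Jekyll 3+)
    incremental: true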

    What does all this have to do with the table of contents?

    Once I pushed it into a ‘want’ and not a ‘need’ I opened my mind to other possibilities. I stopped looking for a Jekyll or Ruby or Liquid based table of contents, and I asked myself “Can Markdown make a table of contents?”

    Markdown is a ‘language’ like HTML that is actually faster to write in than raw HTML but can be read, rendered, and output as HTML. I’m a big fan of HTML and part of why I picked WordPress back in the beginning was that I stumbled on a post where Matt Mullenweg talked about how he didn’t like bbCode and didn’t want it in WordPress. HTML was something we knew. Why make people learn something new?

    It wasn’t until I started blogging more on my iPad and phone that Markdown made sense. Now I’m quite the fan. But I knew that I didn’t know a whole lot about Markdown. I did know that Jekyll used a flavor called ‘kramdown’ (all lowercase), so I read up on that and found that kramdown has a built-in table of contents generator that was incredibly easy to implement.

    * TOC will be output here
    {:toc}
    

    It’s not something I want (or need) on every page, so I just put that on a few pages. No real overhead added and it’s easy enough to style with CSS. Suddenly I have my cake and I can eat it too.
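
    And the styling really is only a few lines of CSS. A rough sketch, assuming kramdown’s default markdown-toc id on the generated list:

    /* Rough sketch, assuming kramdown gives the list id="markdown-toc". */
    #markdown-toc,
    #markdown-toc ul {
      list-style: none;   /* drop the bullets on every level */
    }
    #markdown-toc {
      float: right;
      margin: 0 0 1em 1em;
      padding: 1em;
      background: #f5f5f5;
      font-size: 0.9em;
    }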

  • Jekyll Collections

    Early on, Jekyll’s developers said that if someone was using posts for non-blog content, they were doing something wrong. That left only one other avenue open the first time I looked at Jekyll: pages. They’re nice, but they’re not what I wanted.

    Enter Jekyll collections. These are ‘arbitrary’ groups of related content which you put in their own folder. I had 15 years of interviews collected, so for me this seemed like a perfect idea. I read up on Ben Balter’s “Explain Jekyll Collections like I’m 5” and it helped me sort out what I wanted.

    Configure

    This is easy. You just add the collection code to _config.yml:

    # Collections
    collections:
      interviews:
        output: true
    

    Having the output set to true means that when I run jekyll build the pages are generated. That’s pretty simple. They don’t get auto-generated when you run a jekyll serve and you’re testing locally, however. Which sucks. I upgraded to Jekyll 3.0 beta and it started working, though, and I’m okay with running a beta.

    Create A Folder

    Also easy. Make a folder called _interviews in the main Jekyll folder. I will note, this gave me a fit. I wish I could put all my collections in a subfolder, because now I have this:

    _data
    _includes
    _interviews
    _layouts
    _pages
    _posts
    _sass
    _site
    

    It’s messy, and if I didn’t know that some of those folders are special (like _includes) I could easily be confused. The _site folder makes some sense, that’s where my site is output. But even if I use the source setting to move all my source pages into a folder (called _source in my case), I still can’t separate the code from the content. What I would like is this:

    _assets – Store all of my ‘code’ like layouts, plugins, css, etc here.
    _content – Store all my post content, collections, pages, etc here.

    Still this is a little better for me. Less insane. I will note, I was able to move my folders by defining the directories in my configuration file like this:

    # Moving Folders
    source:       _content
    plugins_dir:  _jekyll/plugins
    layouts_dir:  _jekyll/layouts
    includes_dir: _jekyll/includes
    

    So now my main folder has two folders _site and _content which is a lot easier for me to work with. I feel less muddled. Inside the content folder is a _jekyll folder which is my ‘wp-content’ folder, and a _data folder, which has some data files. More on that later.

    NB: This only works on Jekyll 3.0 and up!

    Create Files

    All I had to do was make my files in my _interviews folder and I was done. Well. Not really. I needed a way for Jekyll to link through everything, and I really didn’t think making manual pages was smart. I tossed this code into my interviews page and it cleverly looped through everything it found, generating the list on the fly:

    <ul>
    {% for topic in site.interviews %}
    	<li><a href="{{ site.baseurl }}{{ topic.url }}">{{ topic.title }}</a></li>
    {% endfor %}
    </ul>
    

    If you’re familiar with WordPress loops, this is the same thing as saying “For all posts in a category…”
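
    For comparison, a rough WordPress equivalent of that loop might look like this (the ‘interviews’ category name is purely illustrative):

    <ul>
    <?php
    // Hypothetical parallel: list every post in an "interviews" category.
    $interviews = new WP_Query( array( 'category_name' => 'interviews' ) );
    while ( $interviews->have_posts() ) {
    	$interviews->the_post();
    	printf(
    		'<li><a href="%s">%s</a></li>',
    		esc_url( get_permalink() ),
    		esc_html( get_the_title() )
    	);
    }
    wp_reset_postdata();
    ?>
    </ul>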

    Customize the Hell Out of It

    Of course you know that’s what I did next. I went and made it super-complex by putting my interviews in year subfolders and then making the main interview page a list of all the years, with links to those pages, and loops back and … well. That’s another post.

  • Stupid Easy

    You have made yet-another stupidly easy plugin. I love it!

    My buddy James said that after testing a plugin I wrote for WordPress that has no settings.

    Frankly those are my favorite plugins, the ones where you install them and walk away because they do one thing, they do it well, and you’re done.

    A plugin without options means it does the one thing, I don’t have to decide how I want it to work, and it just goes. Now, at the same time, building a black-box plugin with no settings can be complicated and tricky, because it means you have to sit down and make the decisions thoughtfully at the beginning.

    The first question I ask is “Who is it for?”

    I love W3TC, but dear god is it complicated. It’s doing a lot of things and while it does them all very well, it’s hard to understand what all the settings mean and how they all need to work together. When I set about adopting a Varnish plugin, it was picked by my coworkers specifically because it was simple and it worked the majority of the time. Can you break it with other plugins and themes? Sure. That’s the nature of WordPress interoperability. But at the same time, it works without user interaction.

    That was the key, we felt. A plugin that just runs, no user interface needed, just let it go. This was simple, this was easy, and this was direct.

    Since then, I’ve added in two ‘options.’ One is the ability to define your true IP address. This is for people who are behind proxy services like CloudFlare. The way a Varnish purge works is that it sends a purge request to your domain name. If your domain is handled by CloudFlare’s servers instead of your own, it gets messy. The second was a ‘purge all’ button on the toolbar, which lets you manually flush the cache for the entire site.

    Those two settings are hard and easy to access, respectively. Defining the IP can only be done by editing your wp-config.php or via the command line. Why? The majority of people don’t need it. Caching a cache behind a proxy often just adds latency anyway, so it’s not really the best idea in the first place. The purge button is limited to the admins of a site, because they’re the only ones who should be flushing the cache for an entire site anyway.
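
    For the IP, that looks something like this in wp-config.php (the constant name is the one the plugin documents, so double-check your version’s readme before copying it):

    // Added to wp-config.php: tell the purge plugin where Varnish actually
    // lives, instead of letting it resolve the proxied domain name.
    define( 'VHP_VARNISH_IP', '123.45.67.89' ); // replace with your server's real IP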

    Does this work for ‘everyone’? No. No it doesn’t. Of the 6,000 or so people using it, most like it just fine, but there are about a dozen who want things done differently. They want a settings page where they can allow more people to flush the cache and define the IP. They want a per-page flush button. They want better error reporting.

    I’m with them on the last one, but the others… well, it’s interesting. Adding more options would certainly expand what the plugin can do, but is that the goal of this plugin? No. In many ways, it would be much better to make a second, different plugin. Maybe an add-on. Call it “Varnish Advanced” for those people who need more power.

    But the majority need the simple version, because I knew my target audience was the people who didn’t know, or didn’t want to know, the technical stuff. I handed them a plugin that works in the majority of use cases. I made sure it handled the majority of situations. And I made it so they could just install it and walk away.

    I do agree that it needs a little better error reporting, but even that has to be handled carefully. You can’t just hand an end user an error without simultaneously giving them a direct way to correct it. And no, ‘Talk to your host’ is not a fix. A fix means the user can do something (hopefully simple) to correct the problem. Now, my ‘edit the wp-config file’ fix is not super simple for everyone, but at the same time it serves an important purpose. It makes someone think about how big a change is and what it means.

    Knowing my users means I know they need that stop to think, and it means I know they will think.

  • CDN vs Local

    Which is better?

    I have no idea.

    Sorry. That’s not the end of the post, obviously, but really after the time it took to write this, my answer is that I really don’t know. So let’s talk about why I don’t know.

    The claim is this: “A CDN is faster!” The idea is that a CDN will be faster because it lessens the load on your server. Anything that’s being served by another server means less work yours has to do. And that is totally true. Also, many CDNs have worldwide locations, which means someone in India can download your cat video from a server much nearer to them than yours in Michigan. And many people will have already downloaded shared resources (like Google fonts) from the same CDN, so those won’t be downloaded again, making their experience faster.

    And all that sounds great. But the actual performance gains don’t line up one-for-one with the theory. At first, I thought that since I get a lot of traffic from Brazil, having a CDN with local servers would be great for those readers. It was. A little. But it was a lot worse for me. Using my site from the US slowed down measurably, because the CDN’s latency was worse than my own server’s.

    In addition, many site speed tests measure how many hosts your page calls. Using a CDN ups that by at least one. It’s minor, but it’s something to consider. There’s also the idea of branding, which a CDN can hurt if you use it wrong (most CDNs will let you use cdn.yourdomain.com, of course). Using too many hostnames for a CDN (think cdn1.example.com, cdn2, and so on) can slow down the user experience too, with the overhead of all the extra DNS lookups.

    What’s the alternative? Good caching. And I do think that proper, server side caching is hugely important. But at the same time, the right network for your static files can provide significant improvements. Which is why we always end up back at CDNs.

    The real thing to consider is what content goes on that CDN. WordPress output is dynamic content and shouldn’t be on a CDN. Images and stylesheets, though, are perfect for one. You take that traffic, and thus that load, off your server and it becomes faster. Streaming video too.

    Which brings us around to the idea of the cloud as a CDN.

    But is it faster? Is it better? Is it safer?

    Maybe. It certainly can be, but there is no blanket perfect answer for all of us.

    Currently I build local but prepare for the possibility of a CDN later on.

  • Name Collisions

    Many many years ago I played MUSHes. One of the games was PernMUSH (which apparently is inactive now). PernMUSH took place on the world of Pern, and you had the chance to be a dragon rider. Which was kind of the Thing to Be ™. One of the ‘quirks’ of the game was that every character had to have a unique name, and so did each dragon. When I started playing, I didn’t really understand this. Today I know that it’s because of name collisions.

    A “name collision” is a problem not unique to computers, but it comes up there an awful lot: you must have a unique identifier to know what each ‘thing’ is. For example, in WordPress every post has two unique identifiers. It has a post ID, which is a number given to the post when it’s stored in the database. If you use ‘ugly’ permalinks, you’ll see this as example.com/?p=123 – that 123 is the post ID. And it has a post name, or slug. If you use pretty permalinks (like I do here — example.com/my-cool-post/) then you can only have one post with that name.

    You, literally, cannot have two posts with the same ID or name. Makes sense, right?

    On PernMUSH we had everything have a unique ID as well as a unique ‘nice’ name. But then when dragons were introduced, you had to give them unique names as well. This was not for frivolous reasons nor pretty special snowflake ones. While it was perfectly understandable to have a hundred rooms named “Bedroom,” the code for the dragons allowed them to all talk to each other and send private messages. They were, basically, our cell phones. Dragon Ath had to be able to talk to dragon Bth, and in order to ensure that worked properly without everyone having to type dtu #12724=message we had to have the code written such that someone could type dtu bth=message and that meant each name had to be unique.

    This would have been fine and dandy as it was except for one small problem. PernMUSH wasn’t the only MUSH based on Pern. There was also a game called SouCon, which took place on the Southern Continent. And transfers between the games were allowed. This added in a wrinkle that now PernMUSH and SouCon had to be sure that everyone on both games had a unique name and dragon name.

    It was quickly determined that they wouldn’t bother with human names. If J’cob on SouCon came to visit PernMUSH, which already had a J’cob, then SouCon’s J’cob would use a different name like Jy’cob. But for whatever reason it was decided that the dragon names on both games were going to be unique. Thus the “All The Weyrs List” was created. That list (which still exists at dragons.pernmu.com) was a mostly honor-system site where you would email in your ‘hatching records’ with who’d impressed and to what dragon and what color and who the parents were. The list would be updated. Then the next time anyone had a hatching, they’d search that page for the dragon names they wanted to use. If the name was there, then they couldn’t use it. Done.

    Of course this wasn’t perfect. Anything based on the honor system is bound to have a few bad eggs. After 10, 15, 20 years, the ability to give people the name they ‘want’ starts to chafe against the tacit agreement not to repeat a name. At some point, I know some games gave up and let people have whatever name they wanted, and transfers could cope.

    What does all this have to do with anything?

    On the WordPress.org servers, where we list all the plugins approved by the team, each plugin has a unique slug that cannot be changed. I have a plugin called Impostercide, which has the slug of impostercide and it’s the only one. No one else can submit a plugin with that name. For the most part, this worked fine. If someone else wanted to make a plugin with that name, they were free to do so but it just wouldn’t be on WordPress.org and that was okay.

    Then we shot ourselves in the foot in the spirit of making life easier. Today WordPress updates your plugins and themes by using an API that calls back to the wordpress.org servers. That API check sees if Impostercide on your install of WordPress is older than the one on wordpress.org and, if so, alerts you to update. You press a button and your plugin is updated. It’s magic. It’s gold. It’s great. And if you’re that person who wrote your own plugin, not on wordpress.org, you can hook into the update code and have it update from other servers. It’s brilliant.
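
    That ‘hook into the update code’ part looks roughly like this. It is a sketch only, with placeholder URLs and a hard-coded version number where a real plugin would query its own update server:

    // Sketch: serve updates for a plugin that is NOT hosted on wordpress.org
    // by injecting your own update info into the transient WordPress builds
    // after each update check. Everything here is illustrative.
    add_filter( 'pre_set_site_transient_update_plugins', 'myplugin_custom_update_check' );
    function myplugin_custom_update_check( $transient ) {
    	if ( empty( $transient->checked ) ) {
    		return $transient;
    	}
    	$basename = plugin_basename( __FILE__ ); // assumes this is the plugin's main file
    	$latest   = '1.2.0';                     // placeholder: fetch this from your own server
    	if ( isset( $transient->checked[ $basename ] )
    		&& version_compare( $transient->checked[ $basename ], $latest, '<' ) ) {
    		$transient->response[ $basename ] = (object) array(
    			'slug'        => 'my-plugin',
    			'new_version' => $latest,
    			'url'         => 'https://example.com/my-plugin/',
    			'package'     => 'https://example.com/downloads/my-plugin.zip',
    		);
    	}
    	return $transient;
    }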

    Except what if you’re that person who has their own plugin named Impostercide? The obvious answer is that you can just rename your folder and off you go. That doesn’t fix the thousands of people who just got upgraded to my version, though. They’re having a bad day. And what if someone later submits a plugin to wordpress.org called impostercide-two? Now you have the same problem all over again. Other people will tell you to bump your version number to something the real Impostercide will never use. But that doesn’t hold up either: what if Impostercide someday does?

    The actual fix is to tell WordPress not to check for updates for that specific plugin.

    The awesome Mark Jaquith posted about this in 2009. You can code a plugin to tell WordPress to not check for updates for it. This does put the onus on people who are writing the plugins not hosted on wordpress.org though, which is and isn’t fair. There’s a movement to allow a new plugin header to prevent these things in trac ticket 32101, which boils down to the idea that if those non-org hosted plugins can flag themselves as ‘I’m not from .org’ then the API stops trying to update them.
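
    The gist of that opt-out, as a sketch (this assumes the current API, which JSON-encodes the plugin list in the request body; the 2009 original worked against a serialized body, so adjust for your WordPress version):

    // Sketch: keep a self-hosted plugin out of the wordpress.org update check,
    // in the spirit of Mark Jaquith's 2009 post.
    add_filter( 'http_request_args', 'myplugin_exclude_from_update_check', 10, 2 );
    function myplugin_exclude_from_update_check( $args, $url ) {
    	if ( false === strpos( $url, '://api.wordpress.org/plugins/update-check' )
    		|| empty( $args['body']['plugins'] ) ) {
    		return $args; // not the plugin update check; leave the request alone
    	}
    	$basename = plugin_basename( __FILE__ ); // assumes this is the plugin's main file
    	$plugins  = json_decode( $args['body']['plugins'], true );
    	unset( $plugins['plugins'][ $basename ] );
    	$args['body']['plugins'] = wp_json_encode( $plugins );
    	return $args;
    }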

    I think it would be a good idea to have an easy way for people to flag their plugins as not being hosted. The alternative would be an honor-system method, where everyone registers their plugin slugs and all submissions to wordpress.org are checked against that list. But that falls apart quickly the day one person forgets to do it. With a way to easily kill the API check, we can let non-org hosted plugins very simply protect themselves, and their users, from being stomped on.

    As for the risk that someone might edit their own locally installed copy of Jetpack to have that header because they’re tired of updates, well, we can’t stop you from shooting yourself. I just hope people are smart enough to understand that you don’t edit core and you don’t edit plugins and you don’t edit themes. You make child themes, you use other plugins, and you use filters and hooks.