Half-Elf on Tech

Thoughts From a Professional Lesbian

Author: Ipstenu (Mika Epstein)

  • Consenting to Collection

    Collecting information on users is something every program wants to do. Doing it is easier said than done.

  • Disclosure

    One of the myriad reasons I push back on WordPress plugins is that someone didn’t disclose enough information about ‘phoning home.’ Phoning home is a simple concept that comes down to this: “Don’t send data from the plugin’s users back to yourself (or anyone else) unless you’re a service. And if you are a service, make this clear in the readme.”

    Simple, right? It’s totally easy to understand that we want you to tell people “Hey, if you use this plugin, it will report back to my servers…” Much of the time, this is obvious. A Facebook connection plugin, logically, contacts Facebook. Embedding YouTube playlists contacts YouTube. Those sorts of things we don’t worry about, though it’s still better if your plugin says “This will pull data from YouTube.” Sometimes it’s less obvious, like if you use geoplugin.net to determine locations. Your plugin probably has nothing to do with that domain, but you still have to tell people about it so they know.

    But why do they need to know?

    First of all, this is the basic email I’ll send you if I see you’re not explaining the phone-home in your plugin:

    Plugins that send data to other servers, call js from other servers, and/or require passwords and APIs to function are required to have a full and complete Readme so we can make sure you’re providing the users with all the information they need before they install your plugin. Our goal with this is to make sure everyone knows what they’re installing and what they need to do before they install it. No surprises.

    This is especially important if your plugin is making calls back to your own servers. For the most part, we do not permit offloading of images or code, however in the case where you are providing a service (like Disqus or Akismet or Twitter), we permit it. The catch is you have to actually explain this to the layman in your readme, so they know where data is going.

    Clearly there are some basic reasons, like we should know where our data is going for our own safety. There are also some surprising reasons to people who don’t think about these things, like legal ones. You’re calling out to other servers? What if my company legally can’t do business with them? Then we have the tin-foil hat reasons, like I don’t want to do business with Google so I don’t want to have Google JS in my plugin.

    All that sounds pretty basic. And some of it is super obvious. If you’re making an app to communicate with Facebook, then it’s logically going to send data to Facebook. None of that surprises anyone, nor should it. With a service, one simply has to be up front about what the product does, what services it connects to, and why.

    “This plugin pushes your comments from your Facebook page to your blog, matching users by email addresses with their Facebook accounts (if found).”

    I just made that up. But it’s upfront, it’s honest, it’s direct, and it’s clear what’s being sent and where and why.

    There’s another aspect to this, however, something that is far trickier and more complex. What happens when your existing tool adds a service?

    The obvious answer is that you need to disclose this change to your users. As long as the users know what they’re getting into, then you are golden. The complex answer, the one I can’t give you one perfect answer for, is how you might do this.

    Why is this hard? It’s hard because there is no one right way to tell users about the change. There is no one perfect way to make sure users read the information. There is no one way to get all the data to all the people who need to know about the information.

    And there sure as hell isn’t one way to make sure no one will complain about any of that.

    When you change the paradigm of what your tool does, taking a standalone tool and adding in a service, you have to consider that a percentage of your users will rebel. This is simply because all those wonderful disclosures you make for new users have to be transitioned to existing users in a meaningful and logical way. And there is just no way to do that perfectly.

    This doesn’t mean you shouldn’t try. It means you have to be creative and innovative and a little ‘in your face’ about the change. You have to give users an option, before the service kicks in, to say whether they want their data shared.

  • Mailbag: Wasting Time

    Today’s question comes from a Slack DM tossed my way about learning and possibly discarding Jekyll.

    Don’t you think you’re wasting your time learning all these non-WordPress things?

    Nope! Every single thing I’ve learned and discarded has improved my skill set.

    Let’s take MediaWiki. Learning that taught me templating in a way I never would have understood in WordPress. It also taught me about the perils of using your ‘own’ language instead of HTML. While I’ve come to like Markdown, you still have to know HTML to make Markdown really work, because you need to understand what it is you’re writing.

    And Jekyll? I learned a lot about importing and exporting data between ‘languages’ that don’t like each other. I also learned a lot about deployment of static content. Anything but FTP, right? Jekyll had me writing my own deployment scripts.

    Now that I’m looking at Hugo, really not much has changed. I’ve learned GoLang, which is not all that different from things I already knew, but it’s expanding how I think about logic structures. Hugo’s got a leg up on Jekyll in a lot of ways, like how easy it is to make loops for traditional blogs. It also handles remote JSON a little better, which sets me up for what’s next.

    You see, all of this work, all this learning, is going to come back to WordPress.

    What I want to do is manage my site in WordPress and have that output the JSON (via that awesome new JSON API I’ve been learning), which will in turn be dynamically called when I want to build my static HTML site.

    For sites you’re not updating daily, or even weekly, this might be the magic you need. Everyone writes on WordPress. Someone has a command (or even a script) to run that collects everything from JSON and dumps it to Hugo, which generates the site for you to proof and then push.
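    That whole flow fits in a tiny script. Here’s a minimal sketch, where example.com, the endpoint path (which varies by JSON API version), and all the paths are assumptions:

```shell
#!/bin/sh
# Hypothetical sketch: pull published posts from WordPress's JSON API into
# Hugo's data directory, then rebuild the static site for proofing.
# example.com, the endpoint path, and all paths below are assumptions.
publish_from_wordpress() {
    curl -s "https://example.com/wp-json/wp/v2/posts?per_page=100" \
        -o "$HOME/hugo-site/data/posts.json"          # dump posts as JSON
    hugo -s "$HOME/hugo-site" -d "$HOME/public_html"  # regenerate the site
}
# publish_from_wordpress   # run whenever the WordPress side has new content
```

    Wrap that in cron or a deploy hook and the "someone has a command to run" part takes care of itself.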

    Version controlling the content.

    Let the users write in WordPress while you bask in the glory of your static site that is, pretty much, unhackable as a CMS. I mean… what are you going to do to my HTML?

    See?

    It’s all WordPress.

  • Deploying with Hugo

    One of my headaches with Jekyll was the deployment. It wasn’t until I found Octopress that I had any success. Hugo wants you to use Wercker for automated deployment. I don’t want to, since I don’t host on GitHub.

    Deploying (v1)

    The first method I could use is a script like I have with Octopress, which runs an rsync, but instead I went with a git post-update hook:

    #!/bin/sh
    # post-update hook: runs on the server after every push to the bare repo.
    GIT_REPO=$HOME/repositories/my-repo.git
    TMP_GIT_CLONE=$HOME/tmp/my-repo
    PUBLIC_WWW=$HOME/public_html
    
    git clone $GIT_REPO $TMP_GIT_CLONE        # fresh checkout of what was pushed
    rm -rf $PUBLIC_WWW/*                      # clear out the old site
    cp -r $TMP_GIT_CLONE/public/* $PUBLIC_WWW # copy the generated site into place
    rm -rf $TMP_GIT_CLONE                     # clean up the temporary clone
    exit
    

    And that copies things over nice and fast. I could do this for Jekyll as well, since I’m now version controlling my site content (in public) as well as my source code. There are downsides, though: I don’t want to sync my site folder. It’s going to get very large, messy, and annoying.
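    One detail the script glosses over: a post-update hook is just an executable file in the bare repository’s hooks/ directory, and git won’t run it on push if it isn’t executable. Installing it looks something like this (same hypothetical paths as above):

```shell
# Install the post-update hook into the bare repository (hypothetical paths).
# The heredoc body is a placeholder; paste the real deploy script in its place.
HOOKS=$HOME/repositories/my-repo.git/hooks
mkdir -p "$HOOKS"
cat > "$HOOKS/post-update" <<'EOF'
#!/bin/sh
# ... deploy script goes here ...
EOF
chmod +x "$HOOKS/post-update"   # git skips hooks that are not executable
```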

    The workaround for Jekyll people is to install Jekyll on the server, and that didn’t work for me.

    Install on the Server

    But Wait! There’s More!

    I actually like Hugo better than Jekyll. It feels slicker and faster and more… app-like. Also, since it’s not Ruby, I was able to install it properly on my server. Yes, unlike Ruby, which was a crime to install, GoLang was incredibly easy to install on CentOS 6.

    First, install Go and its dependencies:

    $ yum install golang
    $ yum install hg
    

    Now install Hugo. No, you can’t yum it up:

    $ export GOPATH=/usr/local/go
    $ go get -v github.com/spf13/hugo
    

    This puts it in /usr/local/go/bin/hugo and I can run Hugo commands natively.
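    If typing the full path gets old, putting Go’s bin directory on your PATH (assuming the same GOPATH as above) lets you call hugo directly; add the line to your shell profile to make it stick:

```shell
# Assumes GOPATH=/usr/local/go as exported earlier; adjust to your setup.
export PATH=$PATH:/usr/local/go/bin
```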

    Which brings me to …

    Deployment (v2)

    Change the aforementioned script to this:

    #!/bin/sh
    # post-update hook: clone the pushed source, then build it with Hugo
    # straight into the web root. No generated files in the repo needed.
    GIT_REPO=$HOME/repositories/my-repo.git
    TMP_GIT_CLONE=$HOME/tmp/my-repo
    PUBLIC_WWW=$HOME/public_html
    
    git clone $GIT_REPO $TMP_GIT_CLONE                       # fresh checkout
    /usr/local/go/bin/hugo -s $TMP_GIT_CLONE -d $PUBLIC_WWW  # build into public_html
    rm -rf $TMP_GIT_CLONE                                    # clean up
    exit
    

    Works perfectly, unlike Ruby, which is just a brat.

    Now the Magic

    When I’m ready to write a new post, I can locally spin up my hugo site with hugo server, write my post, make sure it looks nice, and then commit my changes to git and push.

    cd ~/Development/my-repos/hugo-site/
    hugo new content/some-file.md
    vi content/some-file.md
    git add content/some-file.md
    git commit -m "revised some-file"
    git push deploy master
    

    And it all works.
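    For completeness, the deploy remote in that push is one-time setup: point a remote named deploy at the bare repository on the server. A sketch with hypothetical host and paths (the git init here just stands in for your existing local clone):

```shell
# Hypothetical one-time setup for the "deploy" remote.
git init -q hugo-site-demo && cd hugo-site-demo   # stand-in for your local clone
git remote add deploy ssh://user@example.com/home/user/repositories/my-repo.git
git remote -v   # confirm the remote is registered
```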

    Massive hat tip to Andrew Codispiti for detailing his work, which made mine easier.

  • Hugo

    I’ve been playing with Jekyll a lot. After WordCamp US, I started toying with the idea of a JSON powered Jekyll site, where WordPress output its posts as JSON and Jekyll pulled in the posts and converted them. This ran into many snags. It wouldn’t be ‘dynamic’ for one, but the biggest issue is that Jekyll’s JSON reading ability was terrible. It didn’t like the complex JSON that WordPress put out.

    That sent me hunting down other things, like Hugo. Unlike Jekyll and its use of Liquid, Hugo uses Go templates (hence the Go in the name, you see).

    Installation (on a Mac) is easy.

    brew install hugo

    Done. It’s harder on my server, but still easier than Jekyll, which started to become a ‘thing’ as I worked through all this.

    Using Hugo is remarkably similar to Jekyll except that it works faster and a little more smoothly. The site builds incredibly fast and it dynamically refreshes. So if I edit a post, the page refreshes itself. This let me tweak my theme and posts and sort out the new language incredibly fast. Edit a file, boom, I see what’s wrong.

    It’s about as logical as Jekyll too, so it took me one DayQuil-addled afternoon to sort out how to make things work.

    The Basics

    The basics are simple. You have a structure like this:

    ▸ archetypes/ 
    ▸ content/
    ▸ data/
    ▸ layouts/
    ▸ static/
    ▸ themes/
    config.toml

    The config file is what you think it is. Your posts go in content, and the HTML-ized version shows up in a public folder that gets created on the fly. The data folder is for your static data, like a .json or .yaml file.

    Independent of your theme, you can put CSS and JS in the static folder, and layouts in the layouts folder. Ditto archetypes, which are post types. So if you know you’re always going to have a post type of ‘musician’ and you want it to have special headers, you can have an archetype called musician.md with all that pre-filled in. Then when you want to make a new entry for Clifford:

    $ hugo new musician/clifford.md
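    An archetype is nothing more than pre-filled front matter. A hypothetical archetypes/musician.md might look like this (the field names are made up for the example):

```toml
+++
title = ""
genre = ""
instrument = ""
draft = true
+++
```

    Every new musician entry starts with those fields already in place.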

    You can also have those in your theme if you wanted, but generally I use the same theme and separate projects. At this point, I was impressed enough to be swayed from Jekyll. I’m not going to explain how to write a post, since it’s just markdown and a header, just like Jekyll or GitHub Pages.

    Building Your Site

    The commands are basic. Running $ hugo server will build your demo server. Of course, you’ll want to preview your drafts, so you need to add the param --buildDrafts … Oh, and you want a theme …

    $ hugo server --theme=utility-pro --buildDrafts

    That’s silly, right? Why do I have to define all that? Thankfully I can edit my config.toml file:

    baseurl = "http://example.com/videos"
    languageCode = "en-us"
    title = "My Super cool Video Library"
    theme = "utility-pro"
    buildDrafts = "true"
    [params]
    Subtitle = "Videos about cats, collected from the internet."
    

    Now every time I run the command ‘hugo’ it will build my drafts with my theme!

    $ hugo
    45 of 45 drafts rendered
    0 future content
    45 pages created
    42 paginator pages created
    14 tags created
    3 categories created
    in 254 ms

    That’s incredibly fast, seeing as it built everything. Of course, my other site has a great many more pages.

    Theming

    Since I’d already mastered Jekyll theming, this was trivial. Go and Liquid are weirdly similar and most of it was transposing. There’s not much to say here except that there’s a Twenty Fourteen Theme for Hugo and it’s pretty much what you expect. For WordPressers, it’s a good place to start.

    Shortcodes

    The neat thing is that Hugo has shortcodes! They’re not like WordPress ones, but you can see the similarity between WordPress and Hugo.

    [video mp4="video.mp4" flv="video.flv"]

    vs.

    {{< video mp4="video.mp4" flv="video.flv" >}}

    Sadly there’s no oEmbed. And I had to write the shortcode on my own, but again, if you know the basics of logic all this stuff is easy. Here’s the magic behind the shortcode:

    <video class="video-js" width="{{ if .Get "width" }}{{ .Get "width" }}{{ else }}640{{ end }}" height="{{ if .Get "height" }}{{ .Get "height" }}{{ else }}385{{ end }}" controls>
    
    	{{ if .Get "mp4"  }}<source src="{{ .Get "mp4" }}" type="video/mp4">{{ end }}
    	{{ if .Get "ogg"  }}<source src="{{ .Get "ogg" }}" type="video/ogg">{{ end }}
    	{{ if .Get "webm" }}<source src="{{ .Get "webm" }}" type="video/webm">{{ end }}
    
    Your browser does not support the video tag.
    </video>
    

    Once you look at it, it’s remarkably like WordPress, only I don’t need a plugin. Everything in /layouts/shortcodes/ is like mu-plugins.
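    A minimal hypothetical example: drop a file at layouts/shortcodes/note.html containing

```html
<!-- layouts/shortcodes/note.html (hypothetical) -->
<div class="note">{{ .Get 0 }}</div>
```

    and {{< note "Remember to hydrate." >}} in any post renders its argument inside that div. No plugin required.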

    And So?

    And so Hugo won enough of my attention that I’m going to keep playing with it and see what’s next.

  • Mailbag: Why did you ask that at the Town Hall?

    Last weekend, at WordCamp US, I asked a question at the town hall.

    At its heart, the question was whether Matt felt the rapid release cycle of WordPress major revisions was too fast. I asked this based on the concerns I hear voiced by plugin developers, generally the smaller shops (single-person or teams under ten), who not only have to test their plugins with the new versions of WordPress, but also learn new, rapidly evolving libraries like ReactJS and the REST API.

    Matt’s answer was essentially no, and that he felt that things would only get faster. Also he said he didn’t think plugins should be one-person shops.

    What did you think of the answer?

    I was asked variants of this by many people that night and the next day.

    I disagree with Matt somewhat.

    This isn’t a shock. I’m sure he’ll read this and nod. But he and I both know that a healthy disagreement can be good for an ecosystem. I understand his point, and in many ways agree with it. A team project, for plugins and really any development, is what makes things improve faster. Two heads are better than one.

    But at the same time, we look back on things like Yoast SEO, and to think that those can only exist while supported by a team is to forget the way that all of this, even WordPress, started.

    One person has an idea. One person shares the idea. A second person makes a suggestion.

    One person.

    Of the 45,000 plugins in the repository, the majority happened because of one person. One person had an idea and a dream and built a plugin. One person learned a thing and shared it. One. And the harder we make it for that one person to grow, the harder it will be for them to become the next Yoast, or Woo, or Jetpack.

    As of this post, we’ve released four major versions of WordPress within 355 days. I think that speed for the sake of speed is as bad as dragging our feet and having one release a year. Yes, we have improved our stability by having more frequent releases, because we don’t rush to add an unready feature to a release. There’s going to be another soon. And that’s a good thing.

    At the same time, it’s pushed us to release faster and faster, and that sets the bar too high for new people. It causes burnout. It causes update fatigue.

    I don’t think we should revert to ‘release when it’s ready’ again. That has as many problems (if not more) as ‘release X times per year.’ I do feel we need to consider the emotional health and the supportability of what we are releasing.

    Do it well, do it fast, do it cheap. Pick two. And know that the price is from our blood and bone.

    I think we should turn it down a notch. Just one notch. And we should stop releasing just to be sure we release a number of times a year.