Half-Elf on Tech

Thoughts From a Professional Lesbian

Category: How To

  • Free UpTime by Automagically Watching the Watchers


    It’s 9pm. Do you know where your children are? Well, now it’s 2025. Do you know the uptime of your sites?

    There are a hundred ways to get alerted about your site, to know if it’s responsive, to know if the content is correct. There are some amazing applications that can integrate with your phone and wake you up in the middle of the night.

    That’s great. Of course, when you’re a small collection of people, or maybe a tiny company, or even a solo developer, you may not want to spend money (even $7/month for UptimeRobot’s second tier). Heck, you may not want to use a 3rd party service for whatever reason.

    Now what?

    A person screaming in the woods because they don't know if their servers are up. They need an uptime monitor!

    Welcome to the land of self hosting your own uptime monitors.

    The Criteria

    Before we get to the part where I tell you what I’m using, I want to talk about the criteria.

    It absolutely must be hosted on a different server than my website! Period. End of story. I have a dedicated server I use to host all of them except one, and I sneakily have that one monitor the others.

    I want to know if the site …

    • is up and returning the right HTTP code (200)
    • loads within an acceptable timeframe (different per site)
    • returns the right content (I call this the ‘hack checker’ but it’s also great for checking JSON)

    And what I want it to do …

    • Alert me
    • Make a ticket (if needed)
    • Have a public status page
    • Have a private back end (optional)
    • Allow for custom domains (i.e. status.ipstenu.com)

    Not one single service does all of this. Not the self-hosted ones either. None of them do all the things I need and want. The biggest problem is alerting.

    I want alerts in Slack and Discord, and I would love an app that busts through Do Not Disturb mode. For self-hosted tools, the app is a long shot. For everything else, I have to figure out webhooks, which isn’t too bad.
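
    For reference, a webhook alert is usually nothing more than an HTTP POST. Here’s a minimal sketch for Slack (the URL is a placeholder; you generate a real one by adding an Incoming Webhook to your workspace):

    curl -X POST -H 'Content-type: application/json' \
      --data '{"text":"example.com is DOWN (HTTP 500)"}' \
      https://hooks.slack.com/services/T00000000/B00000000/XXXXXXXXXXXXXXXXXXXXXXXX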

    But knowing this? What did I end up using?

    Services I Use (and Used)

    Okay, this is where I get weirder. I will note that all of these allow for notifications.

    | Service         | Monitors                                                               | Status Page | Backend |
    |-----------------|------------------------------------------------------------------------|-------------|---------|
    | Gatus           | Uptime, Response Time                                                  | Yes         | No      |
    | HealthChecks.io | Cron Jobs                                                              | No          | Yes     |
    | OneUptime       | Uptime, Response Time, Metrics, The Kitchen Sink                       | Yes         | Yes     |
    | Statping-ng     | Uptime, Response Time                                                  | Yes         | Yes     |
    | Tianji          | Uptime, Response Time, Metrics, Servers (and Docker), Request Timeline | Yes         | Yes     |
    | Upptime*        | Uptime, Response Time                                                  | Yes         | Yes     |
    | Uptime Kuma     | Uptime                                                                 | Yes         | Yes     |

    * Technically Upptime is not truly self-hosted. It’s on GitHub.

    I’m actually using all of those except Statping-ng (which is the only one with its own iOS app). The reason is that I couldn’t get the custom CSS on Statping-ng to work, and while I didn’t list it as a criterion, it is something I care about. Branding matters. I want things to look like they belong.

    Uptime Upsides & Downtime Decisions

    Now there are downsides to them all. In fact, after using them, I made a small update to my criteria: I need it to also show a history of outages.

    Essentially, all of these are 4 stars out of 5, except Statping-ng, and that is probably user error.

    Gatus

    The sucky thing about Gatus, which is otherwise a beautiful tool, is that I have to edit its config.yaml file, and since it runs as a Docker service, I have to restart it for every change. That means edit, restart, test, rinse, and repeat. I hate that. I don’t want everyone to have shell access!

    It also only gives you a public page; there’s no private back end.

    That said, it has an API, so I can use it to show data on the back end of (say) my WP site. It also has one of the most robust check implementations of the bunch: I have it checking my sites not just for uptime, but also for page content.
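
    To give a flavor of that, here’s a rough sketch of an endpoint in Gatus’s config.yaml covering all three of my criteria (the URL, timing, and expected text are placeholders; double-check the condition syntax against the Gatus docs):

    endpoints:
      - name: halfelf
        url: "https://halfelf.org/"
        interval: 5m
        conditions:
          - "[STATUS] == 200"                   # Up, and the right HTTP code.
          - "[RESPONSE_TIME] < 1000"            # Loads within a second.
          - "[BODY] == pat(*Half-Elf on Tech*)" # The 'hack checker'.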

    If Gatus gave me a back end for control and customization (like custom domains), it might take over for simplicity.

    HealthChecks.io

    This is a unitasker. It does cron and it does it well. The downside is that it’s a unitasker: it does cron, and only cron. The UX is simple and efficient, and there’s a secret back end for the systems. You can even hook WordPress itself into it for alerts!
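
    As a sketch of what that hook-in can look like (my_nightly_task is a made-up hook name, and the check URL is a placeholder):

    // Ping a (placeholder) HealthChecks.io check once a cron task's own
    // callback has run; 'my_nightly_task' is a made-up hook name.
    add_action( 'my_nightly_task', function () {
    	wp_remote_get(
    		'https://health.ipstenu.com/ping/APIKEY/nightly-task',
    		array(
    			'timeout'  => 10,
    			'blocking' => false, // Fire and forget.
    		)
    	);
    }, 99 ); // Late priority, so the real work (default priority) runs first.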

    OneUptime

    I want to love this. It has everything you could possibly need… and it destroyed my 6-core server with 16GB of memory, to the point that the load was over 250! And that was after I did tuning.

    It’s a massive memory and system hog. Also? The interface was very confusing. It took me a while to figure out the nuances of ‘this means the site is up and good, that means it’s not,’ and it could really be served by a UX overhaul. Also, like Tianji, it would be well served by letting you disable features you don’t use.

    Statping-ng

    I mentioned I’m not using this anymore. The sole reason is that it gave me fits when I tried to get it to let me edit CSS in my Docker setup. Other people had that problem too.

    I wanted to love it, especially for the App, but in the end it just feels like a more ‘company’ version of Uptime Kuma.

    Tianji

    The name means Heavenly Opportunity or Strategy. This one is the nerdy developer tool to the nth degree. It can replace Google Analytics with its own website tracker (even a JS event tracker), monitor servers with a shell script (and by default, it monitors the server it’s on), and even make status pages complete with custom URLs. Technically I could replace HealthChecks.io with it, but since it lumps all my monitors together, I’m less fond of that.

    While it does everything and the kitchen sink, that’s also the downside. I’d love to be able to disable things like App Tracking. I don’t have an app. I don’t need that.

    Upptime

    Technically this is not self-hosted. Upptime is a GitHub template that runs scheduled GitHub Actions to check that your websites are up. If a site is down, it opens a GitHub issue. When the site comes back up, the issue is closed. It even leaves a history of incidents.
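
    Configuration lives in a .upptimerc.yml file at the root of the repo. A trimmed sketch (the names and URLs here are illustrative, not my real config):

    owner: your-github-username   # The account that owns the repo.
    repo: upptime                 # The repo made from the Upptime template.
    sites:
      - name: LezWatch.TV
        url: https://lezwatchtv.com
    status-website:
      cname: status.example.com   # Custom domain for the generated status site.
      name: Example Status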

    The only thing I dislike is that you can’t tell it to show time zones. I know, looking at it, that if it says something happened at 10:00, it means my time zone, but that really needs to be clearer.

    Uptime Kuma

    Kuma means bear. It’s a little bear who watches! I’ve mentioned this before: Uptime Kuma is still, hands down, the best option for a simple, self-hosted status page. Not only can it track websites, it can pass curl headers in an easy-to-understand UX, so I can be alerted if something changes on a specific API.

    While you can make maintenance posts in Uptime Kuma, they don’t get saved, so you lose the history. The customization options are also pretty slim.

    The Verdict?

    The verdict is that it depends on what you need to monitor and how you want to be alerted.

    Uptime Kuma is perfect if you want a simple status system with alerts. And HealthChecks.io is perfect for cron. But in the time between starting this post and now, my clear winner for my needs is … Tianji!

    Gatus’ lack of a back-end UX means that even though it’s so easy to write the checks, it’s just not enough for me. Uptime Kuma’s limited maintenance features lose out, and once you’ve got Tianji, you can drop HealthChecks (even though the latter has a better UX).

    So here’s where I’m at:

    • Upptime – LezWatch.TV uses this for tracking pretty much everything we need. It includes some ‘admin’ checks that I may move to …
    • Tianji – Everything else I need to watch lives here. That means my servers (plural) send data back so I know if any of them are down. Don’t worry, it’s on its own server.

    It’s not perfect, but because everyone’s needs are subtly different, multiple tools are the way to go.

  • Automate Your Site Checks with Cron (and WPCron)


    I have a self-hosted healthchecks.io instance (mentioned previously), and I use it to make sure all the needful cron jobs for my site actually run. I have it installed via Docker, so it’s not super complex to update, and that’s how I like it.

    The first cron jobs I monitored were the ones I have set up in my crontab on the server:

    1. Run WP ‘due now’
    2. Set daily random ‘of the day’
    3. Download an iCal file
    4. Run a nightly data validity check

    I used to run these via WP Cron, but it’s a little too erratic for my needs. This is important; remember it for later, because it’ll come back up.

    Once I added in those jobs, I got to thinking about the myriad WP Cron jobs that WordPress sets up on its own.

    In fact, I have a lot of them:

    +------------------------------------------------+---------------------+-----------------------+---------------+
    | hook                                           | next_run_gmt        | next_run_relative     | recurrence    |
    +------------------------------------------------+---------------------+-----------------------+---------------+
    | rediscache_discard_metrics                     | 2025-04-25 17:51:15 | now                   | 1 hour        |
    | wp_privacy_delete_old_export_files             | 2025-04-25 18:16:33 | 20 minutes 38 seconds | 1 hour        |
    | wp_update_user_counts                          | 2025-04-25 20:30:03 | 2 hours 34 minutes    | 12 hours      |
    | recovery_mode_clean_expired_keys               | 2025-04-25 22:00:01 | 4 hours 4 minutes     | 1 day         |
    | wp_update_themes                               | 2025-04-26 04:57:57 | 11 hours 2 minutes    | 12 hours      |
    | wp_update_plugins                              | 2025-04-26 04:57:57 | 11 hours 2 minutes    | 12 hours      |
    | wp_version_check                               | 2025-04-26 04:57:57 | 11 hours 2 minutes    | 12 hours      |
    [...]
    +------------------------------------------------+---------------------+-----------------------+---------------+
    

    While I could manually add them all to my tracker, the question becomes: how do I add the ping to the end of each command?

    The Code

    I’m not going to break down the code here; it’s far too long, and a lot of it is dependent on my specific setup.

    In essence, what you need to do is:

    1. Hook into schedule_event
    2. If the event isn’t recurring, just run it
    3. If it is recurring, see if there’s already a ping check for that event
    4. If there’s no check, add it
    5. Now add the ping to the end of the actual cron event
    6. Run the event
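
    If you want the flavor of that in code, here’s a stripped-down sketch, not the real LWTV code. It cheats a little: since the schedule_event filter fires when an event is scheduled (not when it runs), this version walks the cron list on cron requests and bolts a ping onto each recurring hook. The ping URL pattern is a placeholder.

    // A rough sketch. _get_cron_array() is a private-ish WP function that
    // returns every scheduled event, keyed by timestamp.
    add_action( 'plugins_loaded', function () {
    	if ( ! wp_doing_cron() ) {
    		return;
    	}

    	foreach ( (array) _get_cron_array() as $timestamp => $hooks ) {
    		foreach ( $hooks as $hook => $events ) {
    			foreach ( $events as $event ) {
    				if ( empty( $event['schedule'] ) ) {
    					continue; // Not recurring? Just let it run.
    				}

    				// (Steps 3 and 4 go here: ask the HealthChecks.io API
    				// whether a check exists for $hook, and add it if not.)

    				// Step 5: ping after the event's own callbacks have run.
    				add_action( $hook, function () use ( $hook ) {
    					wp_remote_get(
    						'https://health.ipstenu.com/ping/APIKEY/' . sanitize_title( $hook ),
    						array( 'blocking' => false )
    					);
    				}, PHP_INT_MAX );
    			}
    		}
    	}
    } );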

    I actually built out code like this using Laravel recently, for a work-related project, so I had the structure already in my head and was familiar with it. The problem, though, is that WP Cron is nothing like ‘real’ cron.

    Note: If you really want to see the code, the beta code can be found in the LWTV GitHub repository. It has an issue with getting the recurrence, which is why I made this post.

    When CRON isn’t CRON

    From Wikipedia:

    The actions of cron are driven by a crontab (cron table) file, a configuration file that specifies shell commands to run periodically on a given schedule. The crontab files are stored where the lists of jobs and other instructions to the cron daemon are kept. 

    Which means crontab runs on server time. When the server hits the time, it runs the job. Adding jobs with the ping URL is quick:

    */10 * * * * /usr/bin/wp cron event run --due-now --path=/home/username/html/ && curl -fsS -m 10 --retry 5 -o /dev/null https://health.ipstenu.com/ping/APIKEY/due-now-every-10

    This job relies on the server being up and available, so it’s a decent metric. It always runs every ten minutes.

    But WP Cron? The ‘next run’ time (GMT) is weirdly more precise, but less reliable. 2025-04-25 17:51:15 doesn’t mean it’ll run at 5:51pm GMT and 15 seconds. It means that the first time the site is loaded after that timestamp, WordPress will attempt to run the command.

    Since I have a scheduled ‘due now’ caller every ten minutes, if no one visits the site at 5:52pm (rounding up), then it won’t run until 6pm. That’s generally fine, but HealthChecks.io doesn’t really understand that. More to the point, I’m guesstimating when things will actually run.

    HealthChecks.io has three ways to define a schedule: Simple, Cron, and OnCalendar. In general, I use Cron because, while it’s cryptic, I understand it. That said, there’s no decent library to convert seconds (which is what WP uses to store the interval timing) into a cron expression, which means you end up with a mess of if checks.

    A Mess of Checks

    First, pick a decent ‘default’ (I picked every hour).

    1. If the interval in seconds is not a multiple of 60, use the default.
    2. If the interval is less than 60 seconds, run every minute.
    3. Divide seconds by 60 to get minutes.
    4. If the interval in minutes is not a multiple of 60, use the default.
    5. If the interval is less than an hour (1 to 59 minutes), run every x minutes.
    6. Divide minutes by 60 to get hours.
    7. If the interval in hours is not an even number of days (divide hours by 24), use the default
    8. If the interval is less than a day (1 to 23 hours), run every X hours.
    9. Divide hours by 24 to get days.
    10. If the days interval is not a multiple of 7, use the default.
    11. If the interval is less than a week (1 to 6 days), run every X days.
    12. Divide days by 7 to get weeks.
    13. If the interval is exactly a week, run every week on ‘today’ at 00:00.
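
    Written out, that ladder looks something like this (a sketch of the logic, not the production code, and the function name is made up):

    // Convert a WP Cron interval (in seconds) to a cron expression for
    // HealthChecks.io. Anything that doesn't divide evenly falls back to
    // the hourly default. Steps that don't divide the next unit evenly
    // (like 'every 5 hours') are only approximated by cron's */X syntax.
    function lwtv_seconds_to_cron( $seconds ) {
    	$default = '0 * * * *'; // My 'decent default': every hour.

    	if ( $seconds < 60 ) {
    		return '* * * * *'; // Every minute.
    	}
    	if ( 0 !== $seconds % 60 ) {
    		return $default;
    	}

    	$minutes = $seconds / 60;
    	if ( $minutes < 60 ) {
    		return "*/{$minutes} * * * *"; // Every X minutes.
    	}
    	if ( 0 !== $minutes % 60 ) {
    		return $default;
    	}

    	$hours = $minutes / 60;
    	if ( $hours < 24 ) {
    		return "0 */{$hours} * * *"; // Every X hours.
    	}
    	if ( 0 !== $hours % 24 ) {
    		return $default;
    	}

    	$days = $hours / 24;
    	if ( $days < 7 ) {
    		return "0 0 */{$days} * *"; // Every X days at midnight.
    	}
    	if ( 0 !== $days % 7 ) {
    		return $default;
    	}

    	if ( 7 === $days ) {
    		return '0 0 * * ' . (int) current_time( 'w' ); // Weekly, on 'today'.
    	}

    	return $default;
    }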

    You see where this is going.

    And then there’s the worst part. After you’ve done all this, you have to tweak it.

    Tweaking Timing

    Why do I have to tweak it? Well for example, let’s look at the check for expired transients:

    if ( ! wp_next_scheduled( 'delete_expired_transients' ) && ! wp_installing() ) {
    	wp_schedule_event( time(), 'daily', 'delete_expired_transients' );
    }
    

    This runs every day. Okay, but I don’t know exactly when it’ll run, just that I expect it to run daily. Using my logic above, the cron time would be 0 0 * * * which means … every day at midnight server time.

    But, like I said, I don’t actually know if it’ll run at midnight. In fact, it probably won’t! So I have to set up a grace period. Since I don’t know when in the 24 hours something will run, I set it to 2.5 times the interval. If the interval runs every day, then I consider it a fail if it doesn’t run every two days and change.
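
    The padding itself is the easy part (DAY_IN_SECONDS is WordPress’s constant for 86400):

    $interval = DAY_IN_SECONDS;                 // What WP stores for 'daily'.
    $grace    = (int) round( $interval * 2.5 ); // 216000 seconds, ~2.5 days.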

    I really hate that, but it’s the best workaround I have at the moment.

    Should You Do This?

    Honestly?

    No.

    It’s positively ridiculous to have done this in the first place, and I consider it more of a Proof of Concept than anything else. With the way WP handles cron and scheduling, it’s just a total pain in the backside to make this work without triggering alerts all the time!

    But at the same time, it does give you a lot more insight into what your site is doing, and when it’s not doing what it should be doing! In fact, this is how I found out that my Redis cache had held on to cron jobs from plugins long since removed!

    There are benefits, but most of the time this is nothing anyone needs.

  • Cute Bears, Uptime Kuma, and Docker


    I have a confession.

    I use Docker on my laptop all the time to create a stable test environment that I can use and abuse and validate before I push to my staging servers. When it’s just WordPress, I use LocalWP which is hands down one of the best ‘just WP’ desktop tools out there.

    But I don’t really do Docker on my servers.

    Or at least, I didn’t until last week.

    Vibe Coding

    I have a new habit where I spin up test things while sprawled on my couch, watching TV and messing about on my iPad. All of this was done on my iPad using:

    • GitHub App
    • Terminus
    • ChatGPT

    Oh.

    Yeah. I used ChatGPT.

    Before you judge me, I validated and tested everything and didn’t blindly trust it, but honestly I did use it for a fast lookup where I didn’t want to figure out the specific search to get to my answer.

    My coworkers joked I’ve gone beyond vibe coding with this.

    Uptime Kuma

    Uptime Kuma is a replacement for UptimeRobot.

    Kuma (クマ/熊) means bear 🐻 in Japanese.

    A little bear is watching your website.🐻🐻🐻

    I mean come on, how could I not?

    Anyway. How do I install this with Docker?

    First of all, I have a dedicated server at DreamHost, which allows me to install Docker.

    Assuming you did that, pull the image down: docker pull louislam/uptime-kuma

    I store my docker stuff in /root/docker/ in subfolders, but you can do it wherever. Some people like to use /opt/uptime-kuma/ for example. Wherever you store it, you’ll need a docker-compose.yml file:

    version: "3"
    services:
      uptime-kuma:
        image: louislam/uptime-kuma:latest
        container_name: uptime-kuma
        restart: unless-stopped
        ports:
          - "3001:3001"
        volumes:
          - ./data:/app/data
        environment:
          - TZ=America/Los_Angeles
    
    

    Remember that data folder? It’ll be created in your uptime-kuma folder. In it are things like the logos I uploaded, which is cool. Anyway, once you’re done, make sure you’re in that uptime-kuma folder and run docker-compose up -d

    Why Docker? Why Now?

    Normally my directions on how to do all this stuff are hella long and complex. There’s a reason I went with Docker when I’ve been avoiding it on my servers for (well) years.

    First of all, it’s isolated. This means it keeps its packages to itself, and I don’t have to worry about the requirements for app A messing up app B. This is hugely important the more apps you have on a server. I have Meilisearch, Uptime Kuma, and more! Without isolation, that gets unwieldy pretty fast.

    The other big reason is … it’s easy. I mean, come on, it’s two commands and a config file!? Compare that to the manual steps, which (for one app I manage) can mean pages of documentation and screenshots.

    Speaking of easy? Let’s say an upgrade went tits up on your server. Guess what? Rolling back on Docker is super easy. You basically just change the image. All your data is fine.

    I know, right? It’s super weird! But remember that data folder? It doesn’t get deleted. It lives in my /root/docker/uptime-kuma/ folder, and even if I change the Docker image, that folder still has my data!
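
    As a sketch of a rollback, you’d pin the image in your docker-compose.yml to the release you were on (the tag below is illustrative; use whatever version you’re rolling back to):

    services:
      uptime-kuma:
        image: louislam/uptime-kuma:1.23.16   # instead of :latest

    Then run docker-compose up -d again. The container gets recreated from the older image, and the ./data folder never moves.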

    What’s the Gotcha?

    There is always a gotcha. In my case, it’s that Nginx can be a bit of a jerk. You have to set up a reverse proxy using proxy_pass:

    server {
        server_name status.ipstenu.com;
    
        location / {
            proxy_pass http://localhost:3001;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_redirect off;
        }
    
        listen 443 ssl;
        ssl_certificate /etc/letsencrypt/live/status.ipstenu.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/status.ipstenu.com/privkey.pem;
        include /etc/letsencrypt/options-ssl-nginx.conf;
        ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
    }
    
    server {
        if ($host = status.ipstenu.com) {
            return 301 https://$host$request_uri;
        }
    
        server_name status.ipstenu.com;
        listen 80;
        return 404;
    }
    

    But see, here’s where it gets extra weird.

    Now, if you go to status.ipstenu.com you’ll get a dashboard login. Won’t help you, right? But if you go to (for example) status.ipstenu.com/status/lwtv you’ll see the status page for LezWatch.TV.

    And if you go to status.lezwatchtv.com …. You see the same thing.

    The Fifth Doctor, from Doctor Who, looking surprised with a caption of "Whaaaaa!?"

    Get this. Do the same thing you did for the main nginx conf file, but change your location block:

        location / {
            proxy_pass http://127.0.0.1:3001;
            proxy_http_version 1.1;
    
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
    

    That’s it. Take a bow.

    Docker for Everything?

    No.

    I didn’t do Docker for my Meilisearch UI install because it’s a simple webpage that doesn’t need it (and yes, that means I did redo my Meilisearch install itself as Docker).

    I wouldn’t do my WordPress site as Docker … yet … mostly because it’s not an official Docker setup, but also because WordPress is a basic PHP app. Same as FreshRSS. They don’t need Docker; they need a basic web server, and the risks of compatibility issues are low.

    ChatGPT / AI Coding for Everything?

    No.

    I see where ‘AI’ is going, and while I don’t consider it actual artificial intelligence, it is a super complex large language model that can take your presets and assist you.

    But AI hallucinates. A lot. Like, I asked it to help me set up Meilisearch UI in Docker, only to find out that only works for dev, and it wanted me to hack the app. I have another project where it’s constantly telling me there’s a whole library that will solve my problems, a library that doesn’t exist (and never has).

    It got so bad, my boss tried it on his early-release version of an AI tool and it got worse.

    And finally … sometimes it gets super obsessive about the wrong thing. I had a config wrong for Kuma at one point, and ChatGPT kept telling me to check my nginx settings. I had to tell it “Someone will die if you ask me about my nginx settings again” to make it stop.

    What Can I Do with Kuma?

    That little bear is handling 80% of my monitoring now. The remaining 20% are cron jobs that I use HealthChecks.io for (self hosted, of course).

    What are my Kuma-chans?

    • 2 basic “Is this site up?” checks for GitHub and a service we use.
    • 3 slightly more complex “Is this site up and how’s the SSL cert?” checks, one for each of the three domains I own.
    • 1 basic “Does this keyword exist on this URL?” check for making sure my site isn’t hacked.
    • 2 basic “Does this API key pair exist with this specific data?” checks for two APIs that do very different things.
    • 1 auth login “Do I still have access to this API?” check for a service.

    I mentioned there are 2 basic API checks, but they do different things. Here’s where it’s fun.

    Ready?

    Screenshot showing me checking the LWTV API for the last death and confirming the ‘died’ date.

    Now that part is pretty basic, right? Check the API, confirm the date for ‘died’ is what I think it is, done. And if it’s not? What do I do?

    Well, I send a Slack message:

    Notification setup for Slack, where the LWTV Death Toll (this is a joke) tells us when the death is down. It’s not really down.

    This tells us ‘the death is down,’ which means someone new has been added to the site as the most recent death.

    Right now I have to manually go in and change the value to the new valid one, but it works. And it’s one way to keep everyone updated.

  • Monstrous Site Notes


    If you have MonsterInsights Pro or Agency, you have access to Site Notes.

    They’re a great way to automagically connect your traffic reports to ‘things’ you’ve done on your site. The problem is that Site Notes are, by default, manual. To automate them, you need yet another plugin.

    Now, to a degree, this makes sense. While the out-of-the-box code is pretty clear-cut, there’s one ‘catch,’ and it’s categories.

    The Basic Call

    The actual code to make a note is pretty simple:

    $note_args = array(
    	'note'        => 'Title',  // The note text.
    	'author_id'   => 'author', // A user ID.
    	'date'        => 'date',   // The date for the note.
    	'category_id' => 1,        // Where 1 is a category ID.
    	'important'   => false,    // Or true.
    );

    monsterinsights_add_site_note( $note_args );
    

    But as I mentioned, category_id is the catch. There isn’t actually an interface to see what those IDs are; the automator tools hook in and set that up for you.

    Thankfully I know CLI commands and I can get a list:

    $ wp term list monsterinsights_note_category
    +---------+------------------+-----------------+-----------------+-------------+--------+-------+
    | term_id | term_taxonomy_id | name            | slug            | description | parent | count |
    +---------+------------------+-----------------+-----------------+-------------+--------+-------+
    | 850     | 850              | Blog Post       | blog-post       |             | 0      | 0     |
    | 851     | 851              | Promotion       | promotion       |             | 0      | 0     |
    | 849     | 849              | Website Updates | website-updates |             | 0      | 0     |
    +---------+------------------+-----------------+-----------------+-------------+--------+-------+
    

    But I don’t want to hardcode the IDs in.

    There are a couple of ways around this, thankfully. WordPress has a function called get_term_by(), which lets you look a term up by slug, name, or ID. Since the list of categories shows the names, I can grab them!

    Screenshot of the Site Notes Categories page.

    That means I can get the term ID like this:

     $term = get_term_by( 'name', 'Blog Post', 'monsterinsights_note_category' );
    

    Now, the gotcha here? You can’t rename the categories, or you break your code.

    Example for New Posts

    Okay so here’s how it looks for a new post:

    add_action( 'publish_post', 'create_site_note_on_post_publish', 10, 2 );
    
    function create_site_note_on_post_publish( $post_ID, $post ) {
        if ( ! function_exists( 'monsterinsights_add_site_note' ) ) {
            return;
        }
    
        if ( $post->post_type !== 'post' ) {
            return;
        }
    
        $post_title = $post->post_title;
        $term       = get_term_by( 'name', 'Blog Post', 'monsterinsights_note_category' );

        // Bail if the category was renamed or deleted (see the gotcha above).
        if ( ! $term ) {
            return;
        }
    
        // Prepare the site note arguments
        $args = array(
            'note'        => 'New Post: ' . sanitize_text_field( $post_title ),
            'author_id'   => $post->post_author,
            'date'        => $post->post_date,
            'category_id' => $term->term_id,
            'important'   => false
        );
    
        monsterinsights_add_site_note( $args );
    }
    

    See? Pretty quick.

  • Meilisearch at Home


    There are things most CMS tools are great at, and then there are things they suck at. Universally? They all suck at search once you get to scale.

    This is not really the fault of the CMS (be it WordPress, Drupal, Hugo, etc.). The problem is that search is difficult to build! If it were easy, everyone would do it. The whole reason Google rose to dominance was that it made search easy and reliable. And that’s great, but not everyone is okay with relying on 3rd party services.

    I’ve used Elasticsearch (too clunky to run, a pain to customize), Lunr (decent for static sites), and even integrated Yahoo and Google searches. They all have issues.

    Recently I was building out a search tool for a private (read: internal, no access if you’re not ‘us’) service, and I was asked to do it with Meilisearch. It was new to me. As I installed and configured it, I thought… “This could be a nice solution.”

    Build Your Instance

    When you read the directions, you’ll notice they have you install the app as root, meaning it would be one install for the whole server. And that sounds okay until you start thinking about multiple sites using one instance (for example, WordPress Multisite) where you don’t want to cross-contaminate your results. Wouldn’t want posts from Ipstenu.org and Woody.com showing up on HalfElf, and all.

    There are a couple of ways around that: Multi-Tenancy and multiple indexes. I went with the indexes for now, but I’m sure I’ll want tenancy later.

    I’m doing all this on DreamHost, because I love those weirdos, but there are pre-built images on DigitalOcean if that floats your goat:

    1. Make a dedicated server or a DreamCompute instance (I used the latter) – you need root access
    2. Set the server to Nginx with the latest PHP – this will allow you to make a proxy later
    3. Add your SSH key from ~/.ssh/id_rsa.pub to your SSH keys – this will let you log in as root (or an account with root access)

    Did that? Great! The actual installation is pretty easy; you can just follow the directions down the line.

    Integration with WordPress

    The first thing I integrated with was WordPress, and for that I used Yuto.

    It’s incredibly straightforward to set up. Get your URL and your Master Key. Plunk them in. Save. Congratulations!

    On the Indices page, I originally set my UIDs to ipstenu_posts and ipstenu_pages to prevent collisions. But then I realized… I wanted the whole site in there, so I made them both ipstenu_org.

    Example of Yuto’s search output.

    I would like a way to change the “Ipstenu_org” flag, something like ‘if there’s only one index, don’t show the name,’ plus a way to customize it.

    I will note there’s a ‘bug’ in Yuto: it has to load all your posts into a cache before it will index them, and that’s problematic if you have a massive number of posts, or if you have anti-abuse tools that block long actions like that. So I made a quick WP-CLI command.

    WP-CLI Command

    The command I made is incredibly simple: wp ipstenu yuto build-index posts

    The code is fast, too. It took under a minute for over 1000 posts.
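
    The whole thing boils down to ‘collect the posts, POST them to Meilisearch.’ Here’s a bare-bones sketch of the idea (the host, index UID, and key are stand-ins, and the real command does more, like handling post types):

    // Sketch: wp ipstenu yuto build-index — push published posts straight
    // to Meilisearch's REST API, skipping the browser-based cache build.
    if ( defined( 'WP_CLI' ) && WP_CLI ) {
    	WP_CLI::add_command( 'ipstenu yuto build-index', function () {
    		$api_key   = 'APIKEY'; // A key with write access to the index.
    		$documents = array();

    		$posts = get_posts(
    			array(
    				'post_type'   => 'post',
    				'post_status' => 'publish',
    				'numberposts' => -1,
    			)
    		);

    		foreach ( $posts as $post ) {
    			$documents[] = array(
    				'id'      => $post->ID,
    				'title'   => $post->post_title,
    				'content' => wp_strip_all_tags( $post->post_content ),
    				'url'     => get_permalink( $post ),
    			);
    		}

    		// POST /indexes/{uid}/documents is Meilisearch's bulk-add endpoint.
    		$response = wp_remote_post(
    			'https://search.example.com/indexes/ipstenu_org/documents',
    			array(
    				'headers' => array(
    					'Authorization' => 'Bearer ' . $api_key,
    					'Content-Type'  => 'application/json',
    				),
    				'body'    => wp_json_encode( $documents ),
    			)
    		);

    		if ( is_wp_error( $response ) ) {
    			WP_CLI::error( $response->get_error_message() );
    		}

    		WP_CLI::success( count( $documents ) . ' posts sent for indexing.' );
    	} );
    }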

    After I made it, I shared it with the creators of Yuto, and their next release includes a version of it.

    Multiple Indexes and Tenants

    You’ll notice that I have two indexes. This is due to how the plugin works, making an index per post type. As far as my ipstenu.org sites go, I don’t mind having them all share a tenant. After all, they’re all on a server together.

    However… This server will also house a Hugo site and my other WP site. What to do?

    The first thing I did was make a couple more API keys! Each has read-write access to a specific index (the key for “Ipstenu” has access to my ipstenu_org index, and so on). That lets me manage things a lot more easily and securely.

    While Yuto will make the index, it cannot make custom keys, so I used the API:

    curl \
    -X POST 'https://example.com/keys' \
    -H 'Authorization: Bearer BEARERKEY' \
    -H 'Content-Type: application/json' \
    --data-binary '{
    "description": "Ipstenu.org",
    "actions": ["*"],
    "indexes": ["ipstenu_org"],
    "expiresAt": null
    }'
    

    That returns a JSON string with (among other things) a key that you can use in WordPress.

    Will I look into tenancy? Maybe. I haven’t decided yet. For now, separate indexes work for me.

  • Cookie Consent on Hugo


    Hugo is my favourite static site generator. I use it on a site I originally created in 1996 (yes, FLF is about to be 30!). Over the last 6 months, I’ve been totally overhauling the site from top to bottom, and one of the long-term goals I had was to add in Cookie Consent.

    Hugo Has Privacy Mode

    One of the nice things about Hugo is that it has a built-in handler for Privacy Mode.

    I have everything set to respect Do Not Track and use PrivacyMode whenever possible. It lightens my load a lot.

    Built Into Hinode: CookieYes

    The site makes use of Hinode, which has built-in support for cookie consent… kind of. It uses the CookieYes service, which I get, but I hate. I don’t want to offload things to a service. In fact, part of the reason I moved off WordPress and onto Hugo for the site was GDPR.

    I care deeply about privacy. People have a right to privacy, and to opt in to tracking. A huge part of that is minimizing the amount of data from your own websites that gets sent around to other people and saved on your own servers/services!

    Obviously I need to know some things. I need to know how many mobile users there are so I can make it better. I need to know what pages have high traffic so I can expand them. If everyone is going to a recap page only to try and find a gallery, then I need to make those more prominent.

    In other words, I need Analytics.

    And the best analytics? Still Google.

    Sigh.

    Alternatively: CookieConsent

    I did my research. I checked a lot of services (free and paid), I looked into solutions people have implemented for Hugo, and then I thought: there has to be a simple tool for this.

    There is.

    CookieConsent.

    CookieConsent is a free, open-source (MIT) mini-library, which allows you to manage scripts — and consequently cookies — in full GDPR fashion. It is written in vanilla js and can be integrated in any web platform/framework.

    And yes, you can integrate with Hugo.

    How to Add CookieConsent to Hugo

    First, download it. I have Node set up to handle a lot of things, so I went with the easy route:

    npm i vanilla-cookieconsent@3.1.0

    Next, I had to add the dist files to my site. I added a command to my package.json:

    "build:cookie": "cp node_modules/vanilla-cookieconsent/dist/cookieconsent.css static/css/cookieconsent.css && cp node_modules/vanilla-cookieconsent/dist/cookieconsent.umd.js static/js/cookieconsent.umd.js",
    

    If you’re familiar with Hinode, you may notice I’m not using the suggested way to integrate JS. If I were doing this in pure Hinode, I’d be copying the files to assets/js/critical/functional/ instead of my static folder.

    I tried. It errors out:

    Error: error building site: EXECUTE-AS-TEMPLATE: failed to transform "/js/critical.bundle-functional.js" (text/javascript): failed to parse Resource "/js/critical.bundle-functional.js" as Template:: template: /js/critical.bundle-functional.js:210: function "revisionMessage" not defined
    

    I didn’t feel like debugging the whole mess.

    Anyway, once you get those files in, you need to make another special JS file. This file is your configuration, or initialization, file. And if you look at the configuration directions, they’re a little lacking.

    Instead of that, go look at their Google example! This gives you everything you need to comply with Google Tag Manager Consent Mode, which matters to me. I copied that into /static/js/cookieconsent-init.js and customized it. For example, I don’t have ads, so I left that section out.
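
    For a sense of the shape, here’s a heavily trimmed sketch of an init file (the text and category names are illustrative, and the real Google example adds the full Consent Mode wiring on top of this):

    // cookieconsent-init.js (trimmed sketch). Importing the UMD build
    // runs it, which exposes the global CookieConsent object.
    import '/js/cookieconsent.umd.js';

    CookieConsent.run( {
    	categories: {
    		necessary: { enabled: true, readOnly: true }, // Placeholder category.
    		analytics: {},                                // Google Analytics lives here.
    	},
    	language: {
    		default: 'en',
    		translations: {
    			en: {
    				consentModal: {
    					title: 'We use cookies',
    					description: 'Cookies here are only used for analytics.',
    					acceptAllBtn: 'Accept all',
    					acceptNecessaryBtn: 'Reject',
    					showPreferencesBtn: 'Manage preferences',
    				},
    				preferencesModal: {
    					title: 'Cookie preferences',
    					acceptAllBtn: 'Accept all',
    					acceptNecessaryBtn: 'Reject',
    					savePreferencesBtn: 'Save preferences',
    					sections: [
    						{
    							title: 'Strictly necessary',
    							description: 'None at this time; this is a placeholder.',
    							linkedCategory: 'necessary',
    						},
    						{
    							title: 'Analytics',
    							description: 'Google Analytics, only if you opt in.',
    							linkedCategory: 'analytics',
    						},
    					],
    				},
    			},
    		},
    	},
    } );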

    Add Your JS and CSS

    I already have a customized header (/layouts/partials/head/head.html) for unrelated reasons, but if you don’t, copy the one from Hinode core over and add in this above the call for the SEO file:

    <link rel="stylesheet" href="/css/cookieconsent.css">

    Then you’ll want to edit /layouts/partials/templates/script.html and add in this at the bottom:

    <script type="module" src="/js/cookieconsent-init.js"></script>

    Since your init file contains the call to the main consent code, you’re good to go!

    The Output

    When you visit the site, you’ll see this:

    Screenshot of a cookie consent page.

    Now, there’s a typo in this one: it should say “That means if you click ‘Reject’ right now, you won’t get any Google Analytics cookies.” I fixed it before I pushed anything to production. But I made sure to spell that out so people know right away.

    If you click on manage preferences, you’ll get the expanded version:

    Screenshot of expanded cookie preferences.

    The language is dry as the desert because it has to meet Google’s specifics.

    As for ‘strictly necessary cookies’?

    At this time we have NO necessary cookies. This option is here as a placeholder in case we have to add any later. We’ll notify you if that happens.

    And how will I notify them? By using Revision Management.