Half-Elf on Tech

Thoughts From a Professional Lesbian

Category: How To

  • Automate Your Site Checks with Cron (and WPCron)


    I have a self-hosted healthchecks.io instance (which I’ve mentioned before), and I use it to make sure all the needful cron jobs for my site actually run. I have it installed via Docker, so it’s not super complex to update, and that’s how I like it.

    The first cron jobs I monitored were the ones I have set up in my crontab on the server:

    1. Run WP ‘due now’
    2. Set daily random ‘of the day’
    3. Download an iCal file
    4. Run a nightly data validity check

    I used to have these running via WP Cron, but it’s a little too erratic for my needs. This is important; remember it for later, because it’ll come back up.

    Once I added in those jobs, I got to thinking about the myriad WP Cron jobs that WordPress sets up on its own.

    In fact, I have a lot of them:

    +------------------------------------------------+---------------------+-----------------------+---------------+
    | hook                                           | next_run_gmt        | next_run_relative     | recurrence    |
    +------------------------------------------------+---------------------+-----------------------+---------------+
    | rediscache_discard_metrics                     | 2025-04-25 17:51:15 | now                   | 1 hour        |
    | wp_privacy_delete_old_export_files             | 2025-04-25 18:16:33 | 20 minutes 38 seconds | 1 hour        |
    | wp_update_user_counts                          | 2025-04-25 20:30:03 | 2 hours 34 minutes    | 12 hours      |
    | recovery_mode_clean_expired_keys               | 2025-04-25 22:00:01 | 4 hours 4 minutes     | 1 day         |
    | wp_update_themes                               | 2025-04-26 04:57:57 | 11 hours 2 minutes    | 12 hours      |
    | wp_update_plugins                              | 2025-04-26 04:57:57 | 11 hours 2 minutes    | 12 hours      |
    | wp_version_check                               | 2025-04-26 04:57:57 | 11 hours 2 minutes    | 12 hours      |
    [...]
    +------------------------------------------------+---------------------+-----------------------+---------------+
    

    While I could manually add them all to my tracker, the question becomes: how do I add the ping to the end of each command?

    The Code

    I’m not going to break down the code here; it’s far too long, and a lot of it is dependent on my specific setup.

    In essence, what you need to do is (there’s a rough sketch after the list):

    1. Hook into schedule_event
    2. If the event isn’t recurring, just run it
    3. If it is recurring, see if there’s already a ping check for that event
    4. If there’s no check, add it
    5. Now add the ping to the end of the actual cron event
    6. Run the event

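    If you want the flavour of it, here’s a stripped-down sketch of the idea. This is not the LWTV code (which hooks into schedule_event, per the list above); this version just bolts a ping onto every recurring event whenever WP Cron actually runs. The APIKEY in the URL is a placeholder, and hwp_ensure_ping_check_exists() is a hypothetical helper that would talk to the Healthchecks API.

    add_action( 'init', 'hwp_attach_cron_pings' );

    function hwp_attach_cron_pings() {
        // Only bother when this request is actually a WP Cron run.
        if ( ! wp_doing_cron() ) {
            return;
        }

        foreach ( _get_cron_array() as $timestamp => $hooks ) {
            foreach ( $hooks as $hook => $events ) {
                foreach ( $events as $event ) {
                    // Single (non-recurring) events just run; only track the recurring ones.
                    if ( empty( $event['schedule'] ) ) {
                        continue;
                    }

                    $slug = sanitize_title( $hook );

                    // Hypothetical helper: create the check on the Healthchecks side if it's missing.
                    hwp_ensure_ping_check_exists( $slug, $event['interval'] );

                    // Ping after the event's own callbacks have finished.
                    add_action(
                        $hook,
                        function () use ( $slug ) {
                            wp_remote_get( 'https://health.ipstenu.com/ping/APIKEY/' . $slug );
                        },
                        PHP_INT_MAX
                    );
                }
            }
        }
    }
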
    I actually built out code like that in Laravel recently for a work-related project, so I had the structure already in my head and I was familiar with it. The problem, though, is that WP Cron is nothing like ‘real’ cron.

    Note: If you really want to see the code, the beta code can be found in the LWTV GitHub repository. It has an issue with getting the recurrence, which is why I made this post.

    When CRON isn’t CRON

    From Wikipedia:

    The actions of cron are driven by a crontab (cron table) file, a configuration file that specifies shell commands to run periodically on a given schedule. The crontab files are stored where the lists of jobs and other instructions to the cron daemon are kept. 

    Which means crontab runs on server time. When the server hits the time, it runs the job. Adding jobs with the ping URL is quick:

    */10 * * * * /usr/bin/wp cron event run --due-now --path=/home/username/html/ && curl -fsS -m 10 --retry 5 -o /dev/null https://health.ipstenu.com/ping/APIKEY/due-now-every-10

    This job relies on the server being up and available, so it’s a decent metric. It always runs every ten minutes.

    But WP Cron? The ‘next run’ time (GMT) is weirdly more precise, but less reliable. 2025-04-25 17:51:15 doesn’t mean it’ll run at 5:51pm GMT and 15 seconds. It means that the next time WP Cron is triggered after that timestamp, it will attempt to run the command.
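
    In pseudo-PHP terms, the loop in core’s wp-cron.php boils down to roughly this (a paraphrase to show the logic, not a drop-in):

    // Anything whose timestamp has already passed gets run, oldest first.
    // Nothing fires *at* the timestamp; it fires the next time this loop runs.
    foreach ( wp_get_ready_cron_jobs() as $timestamp => $hooks ) {
        if ( $timestamp > time() ) {
            break; // Everything past this point is still in the future.
        }
        foreach ( $hooks as $hook => $events ) {
            foreach ( $events as $event ) {
                do_action_ref_array( $hook, $event['args'] );
            }
        }
    }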

    Since I have a scheduled ‘due now’ caller every ten minutes, if no one visits the site at 5:52pm (rounding up), then it won’t run until 6pm. That’s generally fine, but HealthChecks.io doesn’t really understand that. More to the point, I’m guesstimating when the job will actually run.

    HealthChecks.io has three ways to check time: Simple, Cron, and OnCalendar. In general, I use Cron because, while it’s cryptic, I understand it. That said, there’s no decent library to convert seconds (which is what WP uses to store interval timing) into a cron expression, which means you end up with a mess of if checks.

    A Mess of Checks

    First, pick a decent ‘default’ (I picked every hour). Then work down the ladder (there’s a code sketch after the list):

    1. If the interval is less than 60 seconds, run every minute (cron can’t go lower).
    2. If the interval in seconds is not a multiple of 60, use the default.
    3. Divide seconds by 60 to get minutes.
    4. If the interval is less than an hour (1 to 59 minutes), run every X minutes.
    5. If the interval in minutes is not a multiple of 60, use the default.
    6. Divide minutes by 60 to get hours.
    7. If the interval is less than a day (1 to 23 hours), run every X hours.
    8. If the interval in hours is not a multiple of 24, use the default.
    9. Divide hours by 24 to get days.
    10. If the interval is less than a week (1 to 6 days), run every X days.
    11. If the interval in days is not a multiple of 7, use the default.
    12. Divide days by 7 to get weeks.
    13. If the interval is a week, run every week on ‘today’ at 00:00.

    You see where this is going.
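
    For what it’s worth, here’s roughly what that ladder looks like as an actual function. This is a simplified sketch of the logic above (with a made-up function name), not the code from the LWTV repo:

    function hwp_interval_to_cron( int $seconds ): string {
        $default = '0 * * * *'; // My fallback: every hour.

        if ( $seconds < 60 ) {
            return '* * * * *'; // Cron can't go below a minute.
        }
        if ( 0 !== $seconds % 60 ) {
            return $default;
        }

        $minutes = $seconds / 60;
        if ( $minutes < 60 ) {
            return "*/{$minutes} * * * *";
        }
        if ( 0 !== $minutes % 60 ) {
            return $default;
        }

        $hours = $minutes / 60;
        if ( $hours < 24 ) {
            return "0 */{$hours} * * *";
        }
        if ( 0 !== $hours % 24 ) {
            return $default;
        }

        $days = $hours / 24;
        if ( 1 === $days ) {
            return '0 0 * * *'; // Daily at midnight, server time.
        }
        if ( $days < 7 ) {
            return "0 0 */{$days} * *";
        }
        if ( 0 !== $days % 7 ) {
            return $default;
        }

        // Exactly a week: run weekly on 'today' at midnight.
        return ( 7 === $days ) ? '0 0 * * ' . gmdate( 'w' ) : $default;
    }

    WordPress’s built-in hourly, twicedaily, daily, and weekly schedules all come out of that cleanly; anything weird falls back to the hourly default.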

    And then there’s the worst part. After you’ve done all this, you have to tweak it.

    Tweaking Timing

    Why do I have to tweak it? Well for example, let’s look at the check for expired transients:

    if ( ! wp_next_scheduled( 'delete_expired_transients' ) && ! wp_installing() ) {
    	wp_schedule_event( time(), 'daily', 'delete_expired_transients' );
    }
    

    This runs every day. Okay, but I don’t know exactly when it’ll run, just that I expect it to run daily. Using my logic above, the cron time would be 0 0 * * * which means … every day at midnight server time.

    But, like I said, I don’t actually know if it’ll run at midnight. In fact, it probably won’t! So I have to set up a grace period. Since I don’t know when in 24 hours something will run, I set it to 2.5 times the interval. If the interval runs every day, then I consider it a fail if it doesn’t run every two days and change.

    I really hate that, but it’s the best workaround I have at the moment.
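
    If you’re creating the checks through the Healthchecks Management API, the grace period is just another field on the check. A rough sketch of that call, using the fields the v1 Management API documents and a placeholder API key; double-check the docs for whatever version your instance runs:

    function hwp_create_ping_check( string $name, string $schedule, int $interval_seconds ) {
        // Grace is in seconds; 2.5x the interval gives the job room to be 'late'.
        $grace = (int) ( $interval_seconds * 2.5 );

        return wp_remote_post(
            'https://health.ipstenu.com/api/v1/checks/',
            array(
                'headers' => array(
                    'X-Api-Key'    => 'YOUR_PROJECT_API_KEY', // Placeholder.
                    'Content-Type' => 'application/json',
                ),
                'body'    => wp_json_encode(
                    array(
                        'name'     => $name,
                        'schedule' => $schedule, // e.g. '0 0 * * *' from the converter above.
                        'tz'       => 'Etc/UTC',
                        'grace'    => $grace,
                        'unique'   => array( 'name' ), // Update an existing check instead of duplicating it.
                    )
                ),
            )
        );
    }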

    Should You Do This?

    Honestly?

    No.

    It’s positively ridiculous to have done in the first place, and I consider it more of a Proof of Concept than anything else. With the way WP handles cron and scheduling, too, it’s just a total pain in the backside to make this work without triggering alerts all the time!

    But at the same time, it does give you a lot more insight into what your site is doing, and when it’s not doing what it should be doing! In fact, this is how I found out that my Redis cache had held on to cron jobs from plugins long since removed!

    There are benefits, but most of the time this is nothing anyone needs.

  • Cute Bears, Uptime Kuma, and Docker


    I have a confession.

    I use Docker on my laptop all the time to create a stable test environment that I can use and abuse and validate before I push to my staging servers. When it’s just WordPress, I use LocalWP which is hands down one of the best ‘just WP’ desktop tools out there.

    But I don’t really do Docker on my servers.

    Or at least, I didn’t until last week.

    Vibe Coding

    I have a new habit where I spin up test things while sprawled on my couch, watching TV and messing on my iPad. All of this was done using:

    • GitHub App
    • Terminus
    • ChatGPT

    Oh.

    Yeah. I used ChatGPT.

    Before you judge me, I validated and tested everything and didn’t blindly trust it, but honestly I did use it for a fast lookup where I didn’t want to figure out the specific search to get to my answer.

    My coworkers joked I’ve gone beyond vibe coding with this.

    Uptime Kuma

    Uptime Kuma is a replacement for UptimeRobot.

    Kuma (クマ/熊) means bear 🐻 in Japanese.

    A little bear is watching your website.🐻🐻🐻

    I mean come on, how could I not?

    Anyway. How do I install this with Docker?

    First of all, I have a dedicated server at DreamHost, which allows me to install Docker.

    Assuming you did that, pull the image down: docker pull louislam/uptime-kuma

    I store my docker stuff in /root/docker/ in subfolders, but you can do it wherever. Some people like to use /opt/uptime-kuma/ for example. Wherever you store it, you’ll need a docker-compose.yml file:

    version: "3"
    services:
      uptime-kuma:
        image: louislam/uptime-kuma:latest
        container_name: uptime-kuma
        restart: unless-stopped
        ports:
          - "3001:3001"
        volumes:
          - ./data:/app/data
        environment:
          - TZ=America/Los_Angeles
    
    

    Remember that data folder? It’ll be created in your uptime-kuma folder. In it are things like the logos I uploaded, which is cool. Anyway, once you’re done, make sure you’re in that uptime-kuma folder and run docker-compose up -d

    Why Docker? Why Now?

    Normally my directions on how to do all this stuff are hella long and complex. There’s a reason I went with Docker when I’ve been avoiding it for (well) years on my servers.

    First of all, it’s isolated. This means it keeps its packages to itself, and I don’t have to worry about the requirements for app A messing up app B. This is hugely important the more apps you have on a server. I have Meilisearch, Uptime Kuma, and more! It gets unwieldy pretty fast.

    The other big reason is … it’s easy. I mean, come on, it’s two commands and a config file!? Compare that to the manual steps which (for one app I manage) can have pages of documentation and screenshots.

    Speaking of easy? Let’s say an upgrade went tits up on your server. Guess what? Rolling back on Docker is super easy. You basically just change the image. All your data is fine.

    I know, right? It’s super weird! But remember that data folder? That doesn’t get deleted. It lives in my /root/docker/uptime-kuma/ folder, and even if I change the Docker image, that folder has my data!

    What’s the Gotcha?

    There is always a Gotcha. In my case, it’s that Nginx can be a bit of a jerk. You have to set up a thing called proxy_pass:

    server {
        server_name status.ipstenu.com;
    
        location / {
            proxy_pass http://localhost:3001;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_redirect off;
        }
    
        listen 443 ssl;
        ssl_certificate /etc/letsencrypt/live/status.ipstenu.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/status.ipstenu.com/privkey.pem;
        include /etc/letsencrypt/options-ssl-nginx.conf;
        ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
    }
    
    server {
        if ($host = status.ipstenu.com) {
            return 301 https://$host$request_uri;
        }
    
        server_name status.ipstenu.com;
        listen 80;
        return 404;
    }
    

    But see here’s where it’s extra weird.

    Now, if you go to status.ipstenu.com you’ll get a dashboard login. Won’t help you, right? But if you go to (for example) status.ipstenu.com/status/lwtv you’ll see the status page for LezWatch.TV.

    And if you go to status.lezwatchtv.com … you see the same thing.

    The Fifth Doctor, from Doctor Who, looking surprised with a caption of "Whaaaaa!?"

    Get this. Do the same thing you did for the main nginx conf file, but change your location part:

        location / {
            proxy_pass http://127.0.0.1:3001;
            proxy_http_version 1.1;
    
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
    

    That’s it. Take a bow.

    Docker for Everything?

    No.

    I didn’t do docker for my Meilisearch UI install because it’s a simple webpage that doesn’t need it (and yes, that means I did redo my Meilisearch install as Docker).

    I wouldn’t do my WordPress site as Docker … yet … mostly because it’s not an official Docker, but also because WordPress is a basic PHP app. Same as FreshRSS. They don’t need Docker, they need a basic web server, and the risks of compatibility issues are low.

    ChatGPT / AI Coding for Everything?

    No.

    I see where ‘AI’ is going, and while I don’t consider it actual artificial intelligence, it is a super complex large language model that can take your presets and assist you.

    But AI hallucinates. A lot. Like, I asked it to help me set up Meilisearch UI in docker, only to find out that only works for dev and it wanted me to hack the app. I have another project where it’s constantly telling me there’s a whole library that doesn’t exist (and never has), that will solve my problems.

    It got so bad, my boss tried it on his early-release version of an AI tool and it got worse.

    And finally … sometimes it gets super obsessive about the wrong thing. I had a config wrong for Kuma at one point, and ChatGPT kept telling me to check my nginx settings. I had to tell it “Someone will die if you ask me about my nginx settings again” to make it stop.

    What Can I Do with Kuma?

    That little bear is handling 80% of my monitoring now. The remaining 20% are cron jobs that I use HealthChecks.io for (self hosted, of course).

    What are my Kuma-chans?

    • 2 basic “Is this site up?” for GitHub and a service we use.
    • 3 slightly more complex “Is this site up and how’s the SSL cert?” for all three domains I own in this case.
    • 1 basic “Does this keyword exist on this URL?” check for making sure my site isn’t hacked.
    • 2 basic “Does this key:value pair exist in the API response with this specific data?” checks for two APIs that do very different things.
    • 1 auth login “Do I still have access to this API?” check for a service.

    I mentioned there are 2 basic API checks, but they do different things. Here’s where it’s fun.

    Ready?

    Screenshot showing I'm checking the LWTV api for last death and confirming the died date.

    Now that part is pretty basic. Right? Check the API, confirm the date for ‘died’ is what I think, done. And if it’s not? What do I do?

    Well I send a Slack Message:

    Notification setup for Slack, where the LWTV Death Toll (this is a joke) tells us when the death is down. It’s not really down.

    This tells us the death is down. Which means ‘someone new has been added to the site as a most recent death.’

    Right now I have to manually go in and change the value to the valid one, but it works. And it’s one way to keep everyone updated.

  • Monstrous Site Notes


    If you have a MonsterInsights Pro or Agency license, you have access to Site Notes.

    They’re a great way to automagically connect your traffic reports to ‘things’ you’ve done on your site. The problem is that Site Notes are, by default, manual. To automate them, you need yet another plugin.

    Now, to a degree, this makes sense. While the out-of-the-box code is pretty clearcut, there’s one ‘catch’, and it’s categories.

    The Basic Call

    The actual code to make a note is pretty simple:

    $note_args = array(
        'note'        => 'Title',  // The text of the note.
        'author_id'   => 'author', // A user ID.
        'date'        => 'date',   // The date the note applies to.
        'category_id' => 1,        // Where 1 is a category ID.
        'important'   => false,    // Or true.
    );

    monsterinsights_add_site_note( $note_args );
    

    But as I mentioned, category_id is the catch. There isn’t actually an interface to tell you what those IDs are. The automator tools hook in and set that up for you.

    Thankfully I know CLI commands and I can get a list:

    $ wp term list monsterinsights_note_category
    +---------+------------------+-----------------+-----------------+-------------+--------+-------+
    | term_id | term_taxonomy_id | name            | slug            | description | parent | count |
    +---------+------------------+-----------------+-----------------+-------------+--------+-------+
    | 850     | 850              | Blog Post       | blog-post       |             | 0      | 0     |
    | 851     | 851              | Promotion       | promotion       |             | 0      | 0     |
    | 849     | 849              | Website Updates | website-updates |             | 0      | 0     |
    +---------+------------------+-----------------+-----------------+-------------+--------+-------+
    

    But I don’t want to hardcode the IDs in.

    There are a couple ways around this, thankfully. WordPress has a function called get_term_id() which lets you search by the slug, name, or ID. Since the list of categories shows the names, I can grab them!

    A list of the categories for site notes.
    Screenshot of Site Notes Categories page.

    That means I can get the term ID like this:

     $term = get_term_by( 'name', 'Blog Post', 'monsterinsights_note_category' );
    

    Now the gotcha here? You can’t rename them or you break your code.
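
    One way to soften that (a sketch, not something MonsterInsights provides): wrap the lookup in a helper with a safe fallback, so a renamed or deleted category doesn’t quietly break things.

    function hwp_get_note_category_id( string $name ): int {
        $term = get_term_by( 'name', $name, 'monsterinsights_note_category' );

        // get_term_by() returns false if the category was renamed or deleted.
        return ( $term instanceof WP_Term ) ? (int) $term->term_id : 0;
    }

    Then skip the note (or fall back to a known category ID) whenever that comes back as 0.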

    Example for New Posts

    Okay so here’s how it looks for a new post:

    add_action( 'publish_post', 'create_site_note_on_post_publish', 10, 2 );
    
    function create_site_note_on_post_publish( $post_ID, $post ) {
        if ( ! function_exists( 'monsterinsights_add_site_note' ) ) {
            return;
        }
    
        if ( $post->post_type !== 'post' ) {
            return;
        }
    
        $post_title = $post->post_title;
        $term       = get_term_by( 'name', 'Blog Post', 'monsterinsights_note_category' );
    
        // Prepare the site note arguments
        $args = array(
            'note'        => 'New Post: ' . sanitize_text_field(   $post_title  ),
            'author_id'   => $post->post_author,
            'date'        => $post->post_date,
            'category_id' => $term->term_id,
            'important'   => false
        );
    
        monsterinsights_add_site_note( $args );
    }
    

    See? Pretty quick.

  • Meilisearch at Home


    There are things most CMS tools are great at, and then there are things they suck at. Universally? They all suck at search when you get to scale.

    This is not really the fault of the CMS (be it WordPress, Drupal, Hugo, etc.). The problem is that search is difficult to build! If it were easy, everyone would do it. The whole reason Google rose to dominance was that it made search easy and reliable. And that’s great, but not everyone is okay with relying on 3rd party services.

    I’ve used ElasticSearch (too clunky to run, a pain to customize), Lunr (decent for static sites), and even integrated Yahoo and Google searches. They all have issues.

    Recently I was building out a search tool for a private (read: internal, no access if you’re not ‘us’) service, and I was asked to do it with MeiliSearch. It was new to me. As I installed and configured it, I thought … “This could be a nice solution.”

    Build Your Instance

    When you read the directions, you’ll notice they want to install the app as root, meaning it would be one install. And that sounds okay until you start thinking about multiple sites sharing one instance (for example, WordPress Multisite) where you don’t want to cross-contaminate your results. Wouldn’t want posts from Ipstenu.org and Woody.com showing up on HalfElf, and all.

    There are a couple of ways around that: multi-tenancy and multiple indexes. I went with the indexes for now, but I’m sure I’ll want tenancy later.

    I’m doing all this on DreamHost, because I love those weirdos, but there are pre-built images on DigitalOcean if that floats your goat:

    1. Make a dedicated server or a DreamCompute (I used the latter) – you need root access
    2. Set the server to Nginx with the latest PHP – this will allow you to make a proxy later
    3. Add your ssh key from ~/.ssh/id_rsa.pub to your SSH keys – this will let you log in as root (or an account with root access)

    Did that? Great! The actual installation is pretty easy; you can just follow the directions down the line.

    Integration with WordPress

    The first one I integrated with was WordPress and for that I used Yuto.

    It’s incredibly straightforward to set up. Get your URL and your Master Key. Plunk them in. Save. Congratulations!

    On the Indices page I originally set my UIDs to ipstenu_posts and ipstenu_pages – to prevent collisions. But then I realized… I wanted the whole site on there, so I made them both ipstenu_org.

    Example of Yuto's search output
    Yuto Screenshot

    I would like a way to change the “Ipstenu_org” flag, like ‘if there’s only one index, don’t show the name’, and then a way to customize it.

    I will note, there’s a ‘bug’ in Yuto – it has to load all your posts into a cache before it will index them, and that’s problematic if you have a massive number of posts, or if you have anti-abuse tools that block long actions like that. To get around it, I made a quick WP-CLI command.

    WP-CLI Command

    The command I made is incredibly simple: wp ipstenu yuto build-index posts

    The code is fast, too. It took under a minute for over 1000 posts.
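
    Registering a command like that is only a handful of lines. This is a bare-bones sketch, not the actual command; hwp_yuto_index_post() stands in for whatever actually pushes a single post over to Meilisearch:

    if ( defined( 'WP_CLI' ) && WP_CLI ) {
        WP_CLI::add_command(
            'ipstenu yuto build-index',
            function ( array $args, array $assoc_args ) {
                $post_type = $args[0] ?? 'post'; // Post type passed on the command line.

                $ids = get_posts(
                    array(
                        'post_type'      => $post_type,
                        'post_status'    => 'publish',
                        'posts_per_page' => -1,
                        'fields'         => 'ids',
                    )
                );

                foreach ( $ids as $id ) {
                    hwp_yuto_index_post( $id ); // Hypothetical: send one post to the index.
                }

                WP_CLI::success( count( $ids ) . " {$post_type} items sent to Meilisearch." );
            }
        );
    }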

    After I made it, I shared it with the creators of Yuto, and their next release includes a version of it.

    Multiple Indexes and Tenants

    You’ll notice that I have two indexes. This is due to how the plugin works, making an index per post type. In so far as my ipstenu.org sites go, I don’t mind having them all share a tenant. After all, they’re all on a server together.

    However… This server will also house a Hugo site and my other WP site. What to do?

    The first thing I did was make a couple more API keys! They have read-write access to a specific index (the key for “Ipstenu” has access to my ipstenu_org index, and so on). That lets me manage things a lot more easily and securely.

    While Yuto will make the index, it cannot make custom keys, so I used the API:

    curl \
    -X POST 'https://example.com/keys' \
    -H 'Authorization: Bearer BEARERKEY' \
    -H 'Content-Type: application/json' \
    --data-binary '{
    "description": "Ipstenu.org",
    "actions": ["*"],
    "indexes": ["ipstenu_org"],
    "expiresAt": null
    }'
    

    That returns a JSON string with (among other things) a key that you can use in WordPress.

    Will I look into Tenancy? Maybe. Haven’t decided yet. For now, separate indexes works for me.

  • Cookie Consent on Hugo


    Hugo is my favourite static site generator. I use it on a site I originally created in 1996 (yes, FLF is about to be 30!). Over the last 6 months, I’ve been totally overhauling the site from top to bottom, and one of the long-term goals I had was to add in Cookie Consent.

    Hugo Has Privacy Mode

    One of the nice things about Hugo is they have a built in handler for Privacy Mode.

    I have everything set to respect Do Not Track and use PrivacyMode whenever possible. It lightens my load a lot.

    Built Into Hinode: CookieYes

    The site makes use of Hinode, which has built-in support for cookie consent… Kind of. They use the CookieYes service, which I get, but I hate it. I don’t want to offload things to a service. In fact, part of the reason I moved off WordPress and onto Hugo for the site was GDPR.

    I care deeply about privacy. People have a right to privacy, and to opt in to tracking. A huge part of that is minimizing the amount of data from your own websites that gets sent around to other people and saved on your own servers/services!

    Obviously I need to know some things. I need to know how many mobile users there are so I can make it better. I need to know what pages have high traffic so I can expand them. If everyone is going to a recap page only to try and find a gallery, then I need to make those more prominent.

    In other words, I need Analytics.

    And the best analytics? Still Google.

    Sigh.

    Alternatively: CookieConsent

    I did my research. I checked a lot of services (free and pay), I looked into solutions people have implemented for Hugo, and then I thought there has to be a simple tool for this.

    There is.

    CookieConsent.

    CookieConsent is a free, open-source (MIT) mini-library, which allows you to manage scripts — and consequently cookies — in full GDPR fashion. It is written in vanilla js and can be integrated in any web platform/framework.

    And yes, you can integrate with Hugo.

    How to Add CookieConsent to Hugo

    First, download it. I have node set up to handle a lot of things, so I went with the easy route:

    npm i vanilla-cookieconsent@3.1.0

    Next, I have to add the dist files to my site. I added in a command to my package.json:

    "build:cookie": "cp node_modules/vanilla-cookieconsent/dist/cookieconsent.css static/css/cookieconsent.css && cp node_modules/vanilla-cookieconsent/dist/cookieconsent.umd.js static/js/cookieconsent.umd.js",
    

    If you’re familiar with Hinode, you may notice I’m not using the suggested way to integrate JS. If I were doing this in pure Hinode, I’d be copying the files to assets/js/critical/functional/ instead of my static folder.

    I tried. It errors out:

    Error: error building site: EXECUTE-AS-TEMPLATE: failed to transform "/js/critical.bundle-functional.js" (text/javascript): failed to parse Resource "/js/critical.bundle-functional.js" as Template:: template: /js/critical.bundle-functional.js:210: function "revisionMessage" not defined
    

    I didn’t feel like debugging the whole mess.

    Anyway, once you get those files in, you need to make another special js file. This file is your configuration or initialization file. And if you look at the configuration directions, they’re a little lacking.

    Instead of that, go look at their Google Example! This gives you everything you need to comply with Google Tag Manager Consent Mode, which matters to me. I copied that into /static/js/cookieconsent-init.js and customized it. Like, I don’t have ads so I left that out.

    Add Your JS and CSS

    I already have a customized header (/layouts/partials/head/head.html) for unrelated reasons, but if you don’t, copy the one from Hinode core over and add in this above the call for the SEO file:

    <link rel="stylesheet" href="/css/cookieconsent.css">

    Then you’ll want to edit /layouts/partials/templates/script.html and add in this at the bottom:

    <script type="module" src="/js/cookieconsent-init.js"></script>

    Since your init file contains the call to the main consent code, you’re good to go!

    The Output

    When you visit the site, you’ll see this:

    Screenshot of a cookie consent page.

    Now there’s a typo in this one; it should say “That means if you click “Reject” right now, you won’t get any Google Analytics cookies.” I fixed it before I pushed anything to production. But I made sure to specify that so people know right away.

    If you click on manage preferences, you’ll get the expanded version:

    Screenshot of expanded cookie preferences.

    The language is dry as the desert because it has to meet Google’s specifics.

    As for ‘strictly necessary cookies’?

    At this time we have NO necessary cookies. This option is here as a placeholder in case we have to add any later. We’ll notify you if that happens.

    And how will I notify them? By using Revision Management.

  • Replacing the W in Your Admin Bar


    This is a part of ‘white labeling’, which basically means rebranding.

    When you have a website that is not used by people who really need to mess with WordPress, or learn all about it (because you properly manage your own documentation for your writers), then that W in your admin toolbar is a bit odd, to say the least.

    This doesn’t mean I don’t want my editors to know what WordPress is (we have a whole about page, and the powered-by remains everywhere in the admin pages), but that logo…

    Well anyway, I decided to nuke it.

    Remove What You Don’t Need

    First I made a function that removes everything I don’t need:

    function cleanup_admin_bar(): void {
    	global $wp_admin_bar;
    
    	// Remove customizer link
    	$wp_admin_bar->remove_menu( 'customize' );
    
    	// Remove WP Menu things we don't need.
    	$wp_admin_bar->remove_menu( 'contribute' );
    	$wp_admin_bar->remove_menu( 'wporg' );
    	$wp_admin_bar->remove_menu( 'learn' );
    	$wp_admin_bar->remove_menu( 'support-forums' );
    	$wp_admin_bar->remove_menu( 'feedback' );
    
    	// Remove comments
    	$wp_admin_bar->remove_node( 'comments' );
    }
    
    add_action( 'wp_before_admin_bar_render', 'cleanup_admin_bar' );
    

    I also removed the comments node and the customizer because this site doesn’t use comments, and also, how many times am I going to that Customizer anyway? Never. But the number of times I mis-click on my tablet? A lot.

    But you may notice I did not delete everything. That’s on purpose.

    Make Your New Nodes

    Instead of recreating everything, I reused some things!

    function filter_admin_bar( $wp_admin_bar ): void {
    	// Remove Howdy and Name, only use avatar.
    	$my_account = $wp_admin_bar->get_node( 'my-account' );
    
    	if ( isset( $my_account->title ) ) {
    		preg_match( '/<img.*?>/', $my_account->title, $matches );
    
    		$title = ( isset( $matches[0] ) ) ? $matches[0] : '<img src="fallback/images/avatar.png" alt="SITE NAME" class="avatar avatar-26 photo" height="26" width="26" />';
    
    		$wp_admin_bar->add_node(
    			array(
    				'id'    => 'my-account',
    				'title' => $title,
    			)
    		);
    	}
    
    	// Customize the Logo
    	$wp_logo = $wp_admin_bar->get_node( 'wp-logo' );
    	if ( isset( $wp_logo->title ) ) {
    		$logo = file_get_contents( '/images/site-logo.svg' );
    		$wp_admin_bar->add_node(
    			array(
    				'id'     => 'wp-logo',
    				'title'  => '<span class="my-site-icon" role="img">' . $logo . '</span>',
    				'parent' => null,
    				'href'   => '/wp-admin/admin.php?page=my-site',
    				'group'  => null,
    				'meta'   => array(
    					'menu_title' => 'About SITE',
    				),
    			),
    		);
    		$wp_admin_bar->add_node(
    			array(
    				'parent' => 'wp-logo',
    				'id'     => 'about',
    				'title'  => __( 'About SITE' ),
    				'href'   => '/about/',
    			)
    		);
    		$wp_admin_bar->add_node(
    			array(
    				'parent' => 'wp-logo-external',
    				'id'     => 'documentation',
    				'title'  => __( 'Documentation' ),
    				'href'   => 'https://docs.example.com/',
    			)
    		);
    		$wp_admin_bar->add_node(
    			array(
    				'parent' => 'wp-logo-external',
    				'id'     => 'slack',
    				'title'  => __( 'Slack' ),
    				'href'   => 'https://example.slack.com/',
    			)
    		);
    		$wp_admin_bar->add_node(
    			array(
    				'parent' => 'wp-logo-external',
    				'id'     => 'validation',
    				'title'  => __( 'Data Validation' ),
    				'href'   => '/wp-admin/admin.php?page=data_check',
    			)
    		);
    		$wp_admin_bar->add_node(
    			array(
    				'parent' => 'wp-logo-external',
    				'id'     => 'monitors',
    				'title'  => __( 'Monitors' ),
    				'href'   => '/wp-admin/admin.php?page=monitor_check',
    			)
    		);
    	}
    }
    
    add_filter( 'admin_bar_menu', 'filter_admin_bar', PHP_INT_MAX ); // Hooked as a plain function here; inside a class, use array( $this, 'filter_admin_bar' ).
    

    I replaced the default ‘about’ with my site’s about URL. I replaced the documentation node with my own. Everything else is new.

    Now the image… I have an SVG of our logo, and by naming my span class my-site-icon, I was able to spin up some simple CSS:

    #wpadminbar span.my-site-icon svg {
    	width: 25px;
    	height: 25px;
    }
    

    And there you are.

    Result

    What’s it look like?

    Screenshot of an alternative dropdown of what was the W logo into something customized.

    All those links are to our tools or documentation.