Half-Elf on Tech

Thoughts From a Professional Lesbian

Author: Ipstenu (Mika Epstein)

  • AI Solves Site Speed And Won’t Make You Obsolete

    I have a bit of a moral problem with AI.

    I hate how it stole data without compensation to build its databases, and now charges people for use.

    I hate how it’s shit for the environment.

    But I also use it day to day for work because it can do some things faster than I can. Imperfectly, but still, it’s like having a handy junior programmer who is up to speed on the latest techno brouhaha, but doesn’t have the scope or depth of thought just yet.

    It can be a very useful tool, but it’s difficult to look past the practical issues.

    All that said, I have been hearing a lot about how plugin shops and theme shops for WordPress will go out of business because of AI.

    Hogwash.

    AI Doesn’t Think

    This summer, I was designing and building an app for work that used two languages (BigQuery and Laravel) I wasn’t familiar with, and two services (PlanetScale and GCP) I hadn’t used before. I sat down and wrote up what I wanted the app to do and more or less how I envisioned it working and I tossed that into AI to ask it to help me:

    1. Estimate costs based on data volume
    2. Estimate time to develop
    3. Construct a plan using Claude TaskMaster to keep it organized
    4. Write up a summary pitch

    After a lot of back and forth and starting new chats to clear the memory, I had a decent plan and presented it. It’s pretty much as planned though we did learn and tweak as we went along.

    Could AI have invented that without me?

    No.

    AI can do a lot of things, but it cannot invent from nothing. It needs a prompt, and the better your prompt, with more detail about the needs and wants of the project, the better it can help you create. But it cannot just decide “you know what, I need an app for X.”

    Unlike a human, who can think about the moral consequences or the practical UX, an AI only knows what it knows from its limited info base, and cannot expand on its own.

    A human will go “this sounds nice, but …” where an AI might ask you about the flaws but only as it knows them.

    AI Is Great For Optimizing

    Also this summer, I was looking at how slow LezWatch.TV is in the back end. Now, I knew that this was due to how much cross-referenced data we save, and how inefficient WordPress is about storing it.

    For example, adding a new character to the database will:

    1. update the character count for the actor(s)
    2. update the character count for the show(s)
    3. update the meta for the gender and sexuality of characters for the show(s)
    4. update the stats of characters for actors, shows, formats, death, etc
    5. adjust show(s) score

    There’s a lot of other little stuff, but every save of a character does that. And it’s slow.

    So I thought “there must be a better way…” and I wrote up a list of everything. Then I asked Claude (the AI in my code editor) to review the character saving code and tell me where it was slow.

    It listed everything I had, plus a couple more (shadow taxonomies, weird cmb2 <=> taxonomy stuff) and with that, I dumped a summary into Gemini (Google) and said “Based on all this, is Action Scheduler a good idea? Tell me why, or why not, and propose alternatives.”

    Now notice I came in with my own proposal!

    This is key because it set the expectation that I knew “saving is part of the issue, so if I can schedule out post processing, then saving will be faster!”

    AI Extends What We Can Do

    This is where AI is great: it extends. After I moved things to the Action Scheduler, I thought about the other slow parts and decided to make my SQL calls more efficient.
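
    For the curious, the shape of that move looks something like this. This is a minimal sketch, not the real LWTV code; the hook names are invented for illustration, and it assumes the Action Scheduler library is active:

    // Sketch only: queue the heavy stat recalculation instead of running it
    // inline on save. 'save_post_characters' and 'lwtv_recalc_stats' are
    // illustrative hook names, not the real LWTV ones.
    add_action( 'save_post_characters', function ( $post_id ) {
        if ( function_exists( 'as_enqueue_async_action' ) ) {
            as_enqueue_async_action( 'lwtv_recalc_stats', array( $post_id ) );
        }
    } );

    // Action Scheduler fires this hook later, off the save request.
    add_action( 'lwtv_recalc_stats', function ( $post_id ) {
        // Update counts, meta, stats, and scores here.
    } );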

    I know how to solve an N+1 issue (not redundancy, but the select problem), but the pain in the ass with WordPress is it tricks you!

    Let’s assume $things is a collection of post objects. And you want to loop through all those things to get the ones with a meta of ‘other’ and echo it.

    foreach ($things as $thing) {
        $other = get_post_meta($thing->ID, 'other', true);
        echo $other;
    }
    

    Looks perfect, right? Simple. Fast. Easy.

    Well it’s two of the three. When you have 100 post objects, it’s fast. It’s even fast at 1000. But when you get to 3000 … now you’re going to see things slow down, and Google will ding you for TTFB (time to first byte).

    The fix?

    $thing_ids = wp_list_pluck($things, 'ID');
    
    foreach ($thing_ids as $id) {
        $other = get_post_meta($id, 'other', true);
        echo $other;
    }
    

    It doesn’t look much different, but the trick here is that instead of looping and getting all the post data, we’re just using the IDs for a faster lookup and echo.
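
    If you want to squeeze even more out of it, WordPress can also warm the meta cache for every ID in one query before the loop, so each get_post_meta() call after that is a cache hit. A minimal sketch:

    $thing_ids = wp_list_pluck( $things, 'ID' );

    // One query loads the meta for every ID into the object cache.
    update_postmeta_cache( $thing_ids );

    foreach ( $thing_ids as $id ) {
        // Served from the cache now, not a fresh lookup.
        echo get_post_meta( $id, 'other', true );
    }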

    This gets compounded when you use WP_Query. Yes, it gives you the posts but if you also need their meta or taxonomy data, WordPress doesn’t load that in the same query. So you end up doing one query to get all the posts, and then one extra query for each post to get its related data.

    That’s the N+1 problem: 1 big query + N small ones.

    By writing direct SQL, I can grab posts and their meta/terms in a single query, instead of hitting the database over and over.
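
    To give you the idea, a stripped-down version of that kind of query might look like this (a sketch; the real queries on LWTV are more involved):

    global $wpdb;

    // One round trip: posts plus their 'other' meta value, no per-post queries.
    $rows = $wpdb->get_results(
        $wpdb->prepare(
            "SELECT p.ID, p.post_title, pm.meta_value AS other
            FROM {$wpdb->posts} p
            LEFT JOIN {$wpdb->postmeta} pm ON pm.post_id = p.ID AND pm.meta_key = %s
            WHERE p.post_type = %s AND p.post_status = 'publish'",
            'other',
            'post'
        )
    );

    foreach ( $rows as $row ) {
        echo esc_html( $row->other );
    }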

    AI Solves With Directions

    My father used to say that people think of AI like it works in Star Trek. Captain Picard says he wants “Tea. Earl Grey. Hot.” The replicator knows exactly what he wants and makes it. Geordi La Forge asks the holodeck to make a Sherlock Holmes-themed villain, capable of defeating Data.

    Given the recent issues with guardrails on AI and it pushing people towards suicide, I can’t help but think Dad was 100% right and still is. People look at AI as a panacea. It will solve problems because it has access to all the information and can make the right conclusions.

    It doesn’t.

    While I used it to help me find all the speed issues, within my set parameters (saving, searching, etc.), I can’t just say “make my site faster.” The amount of refining that happens, and the number of straight-up corrections on what is pretty basic dev work, is how I know this. Any developer worth their salt will tell you it’s not perfect.

    What AI is great for is refining, improving, and assisting.

    It does not, cannot, think for itself. It will not propose to you “hey, I’ve been looking at the code you’re working on and I have an idea.” It will not solve the problem before you identify there is a problem.

    AI is a tool.

    Use your tool wisely.

    But … can you make it better for the environment?

  • We Should Free All WordPress Plugins and Themes

    No, I don’t mean give them away for free. But put a pin in that, because this idea begins and ends with FAIR.

    If you haven’t yet heard about the FAIR Package Manager, the idea is to rethink how software is distributed and managed in the world of open web publishing. FAIR focuses on decentralization, transparency, and giving users more control.

    And yes, I’m one of the first Co-Chairs of the Technical Steering Committee.

    When I say we should free the plugins and the themes, I mean something I asked myself almost a decade ago…

    The (Initial) Question

    I remember being on an airplane flying to Japan when I first asked myself this question:

    What would the WP world be like if we democratized the extension ecosystem?

    Mika Epstein circa 2015

    Believe it or not, that note moved to a document, which grew over time and collected all the risks and rewards I could think up. I had logical trains of thought that ended in failure. But I kept at it. I kept scribbling notes and refining the ideas, until I felt like I had a solid frame.

    Sometime in the spring or summer of 2024, I sat and wrote it as a proposal that I had planned to present to folks on the Meta team as a future path for WordPress.org to stop hosting all the world’s plugins and themes, and instead make it a hell of a lot easier for people outside the wordpress.org services to host their own while still remaining findable.

    The World Before

    20 years ago, the world was a different place with regards to the internet.

    Remember, in 2003 Mike and Matt forked b2 to make WP, in 2004 Movable Type decided to change their license, and in 2005 Git was invented.

    So let’s set the table, remembering that when WordPress came out with Plugins and Themes it was 2004, and back then it was pretty rare for people to host other people’s stuff. That was actually the problem .org was fixing! Don’t have your own SVN setup? Only have $8 a year hosting (those were the days)? Don’t have a way to save all your versions?

    Welcome to the WordPress extension system!

    The (Current) Problem

    While WP solved the problem of 2004, 20 years passed and everything changed. Everyone’s computer has SVN and Git now (more or less…), and everyone can host code on GitHub or GitLab, or even roll their own pretty easily.

    It’s almost like WordPress paved the road for self-hosting to spread across the developer landscape. Not only can creators of content host their own websites, devs can host and manage their own updates.

    And WordPress created a new issue of its own making: a gate. There are rules about what you can and cannot host, how you can behave, and frankly … I say this as one who was the Gatekeeper for too many years … that gate is a necessary evil.

    By hosting code and content from other people, WordPress.org places itself in harm’s way. It became responsible for how everything was portrayed and published on the plugin/theme search pages, and that gate had to have rules to protect itself and ensure its continued existence. Like trademarks. You have to be vigilant, because places like Facebook will try and shut the whole directory down if they decide it’s infringing.

    That gate has become a hindrance to the democratized usage of WordPress.

    To get around the rules of the gate, people host their own code now. Sure, there are a couple of options (EDD Updater, GitHub updater, there’s a plugin self-updater out there as well). But because of that gate, they aren’t findable. This hurts WordPress. It stifles growth, it makes it harder for people to make their living on WP, and that is a net loss for everyone.

    My (Then) Proposed Solution

    WordPress.org should stop hosting non-official Plugins and Themes (including Akismet) and instead host the following:

    • Example Plugins (Hello Dolly)
    • Core Plugins (Classic Editor, Classic Menus, HealthCheck)
    • Core Themes (twenty-*)
    • A directory of other plugin and theme directories

    That last one is the crux of the whole matter. We give hosting back to the developers (just like we gave hosting to the content creators) and then we welcome in their content without liability.

    The thing was, I knew damn well this was something that WordPress.org would fight me over. Stop hosting? Make that radical of a change to completely rewrite how we access code? Distribute everything and own nothing?

    An uphill battle to say the least.

    So, like I did every time I opened that doc, I re-read it, made some tweaks, ran it through an AI on a whim and cleaned it up even more, and I closed it.

    The FAIR Future

    This brings me back to FAIR.

    Sounds like it was made for this proposal, huh?

    I cannot take credit. It’s lightning striking the same place twice. But that makes sense, doesn’t it? No, I’m not talking about how WordPress and I share a birthday, I’m talking about how I absolutely couldn’t be the only human on the planet with this idea in their head.

    Naturally I dove in. I helped write the docs, translating High Geek explanations into Corporate Lingo, pitching and refining ideas, collecting information from the developers who were far better at these things than I am, and learning about AtProto.

    When I was nominated to be on the TSC, I sat with my wife to discuss the implications and risks of accepting. There was a very real possibility this could destroy my career, and I’m a developer looking at 50 while being female. Finding another job, if this went badly, could become impossible.

    But I have a poster. “Flynn Lives.” For years it hung on the wall across from my desk, and I would look up at it and remember the line. “I fight for the users.”

    I started out in the WP forums to help the users. I joined the plugin team for the same reason. I ran plugins with the dual aim of fighting to make things easier for users and developers. And I failed, trying to do that. But I’ve learned and grown and changed.

    In order for WordPress to move from being that joke of a ‘just a blog’ CMS that ‘real’ developers mock, to grow the ecosystem and help everyone make money, to get even bigger while still giving back to everyone, distribution is key.

    Democratize Distribution

    FAIR is at step one of the plan.

    Today, the FAIR plugin can disconnect you from WordPress.org for updates (which has the added benefit of more data privacy). There’s only one distribution source right now (thank you, AspirePress! none of this is possible without you), but it proves you can do this. It’s got some work left to do, and we welcome everyone’s eyes and hands to help. It is very much an MVP release (something I stressed over and over during the dev cycle, to keep folks out of the weeds).

    Next is the part where we build out the system to hand the keys to the developers. No more AppStore and rules. Push what you think is right to push.

    Will some people abuse that? Of course. Someone will push weekly updates so you’re always being notified. Someone will cover the admin panel with ads. Those things happen today, and the plugin review team cannot keep up because there are more devs than reviewers.

    But there are things we can and will do to help and protect users, to empower developers, and assist the hosting companies and more. Everyone benefits from this future.

    We have a plan, and we hope you come with us.

    Other people collaborating with FAIR have blogged as well:

  • Toxic Users: The Unforgiven and the Danger of Unbanning

    Over the last couple years, I posted a lot of stories about the crazy things I saw as the Plugin Rep on WordPress.org. A great number of those situations ended with someone being banned, but those aren’t the only stories out there.

    Still, with the recent situation on WordPress.org, I felt it was appropriate to break down my views on banning, and when it’s the right thing to do.

    Bans are about Safety First

    The number one reason to ban anyone is the physical safety of the community.

    With the recent announcement of a “Jubilee,” and how people who were banned between August 2024 and now are being reviewed and (at least in part) unbanned, I made a fairly vocal statement on Mastodon and BlueSky that this was a dangerous thing.

    So #Wordpress (org) is really unbanning EVERYONE who’s ever been banned. This was confirmed by Matt on Twitter (screenshot attached).

    I’m going to have to cold-stop any and all contributions because it’s demonstrably UNSAFE for me to be a part of the community.

    Stalkers. Harassers. DEATH THREATS.

    It’s NOT safe for me to be there.

    Until someone SANE comes up with limits, guidelines, and restrictions for this ‘all bans,’ it is NOT SAFE for me to be on WordPress.org.

    I repeat: THIS IS NOT A SAFE COMMUNITY FOR ME OR ANY OF THE THOUSANDS OF VOLUNTEERS WHO HAVE WORKED HARD TO ENFORCE GUIDELINES EQUITABLY FOR OVER A DECADE.

    (See X/Twitter for the thread)

    Matt Mullenweg tweeting "This covers all bans, not just ones I did."
    Ipstenu (Mika E.) on Mastodon

    I will note, I posted that before any clarifications as to who was being unbanned.

    While I have had one credible death threat (and a half dozen others that were laughably stupid, including threats to have various Gods wreak vengeance on me), I am aware via my friendships with Automattic employees that there are a significant number of legit threats out there.

    Those people can and should be banned, and must remain banned. Period.

    Safety isn’t Just Physical

    The number two reason to ban anyone is the ephemeral safety of the community.

    By ‘ephemeral’ I generally mean code. That is, if someone is putting backdoors in their plugin, we need to kick them out and ban them because they are an abject danger to the sites the plugins are installed on.

    But this also means things like extortion, harassment, name calling, bullying, and so on. If someone demonstrates, through their repeated actions, that they can only communicate in a hostile manner, then they need to leave the community.

    There’s a saying: once a single Nazi is allowed into a bar, it has become the Nazi Bar. All communities need to stringently protect the safety of their users. If leadership is okay with a couple of people mistreating their community members, then they have just demonstrated they are not going to protect the more vulnerable members.

    This protection is bidirectional, by the way. I’ve banned as many users for harassing developers as I have developers for harassing users!

    The Community is More than Users and Developers

    The number three reason to ban anyone is the legal safety of the community.

    Any community of a decent size is one that faces legal matters. It can be a fan-club, an open-source development community, or a writing group. You have to be aware of the legalities of what you’re doing.

    You have no idea how many times I’ve had to explain the basics of copyright and trademark law to developers who just want to have a plugin for Facebook. I totally get it; Facebook is delulu about how they enforce their trademark. You can’t even use the word ‘Facebook’ or ‘FB’ in any of your plugin names, meaning no ‘Integration of Blah with Facebook’. But that’s how it is, and you have to obey the law.

    For example, if a plugin is closed for something like that and the dev complains but makes the change, that’s good. But if they change it back when the plugin team isn’t looking, because they happen to know the team doesn’t review every change, then what happens is the plugin team gets a very nasty legal doc that threatens the entire repository. The plugin gets closed a second time.

    You can see how this would escalate, especially when the dev starts complaining ‘but someone else got away with it!’ See, what really happens when they do that is the team goes and looks at the other person and closes their plugin too. No one wins. The legal team from Facebook gets angrier and angrier, and the legal mess gets worse and worse.

    If someone is the cause of putting the entire repository (or worse, the project) in legal jeopardy, they’re going to get banned and should be. They’re reckless and a danger to all.

    Fake Content Hurts the Community

    The number four reason to ban anyone is spam, auto-generated content, and lying.

    I’m sure someone is confused that I’ve lumped them together, but they’re all worthless content.

    Spam? No one would argue it’s ‘good.’ I know you get that one. Lying? Again, it’s pretty obvious why you’d get banned for lying over and over again. If you can’t be trusted, then your contributions can’t be trusted.

    But auto-generated content? I almost called it ‘low quality content’ because that’s what it is. People who post copies of AI-generated ‘answers’ wholesale are posting low quality content. Since we know that AI has issues with hallucinations (read: ‘it just gets things wrong sometimes’), you have to verify it. If you’re doing that, you’re going to end up changing some of what it says.

    When someone doesn’t change anything it said, they’re not adding anything of value. It’s like dropping a ‘Let me google that for you’ link. They’re wasting everyone’s time and aren’t educating someone on how to help themself in the future. This is especially true on support forums.

    Community should help itself. If someone wants to look things up with AI, more power to them, but if they come to a place to ask for help, they deserve to be treated as a human, not a bot.

    Protect the Community From Yourself

    The number five reason to ban anyone is that they’re actually acting harmfully toward the community, not just you.

    This is sort of a backwards thing. It’s more ‘the number one reason NOT to ban…’ but it works anyway.

    I have never once banned a single person because they annoyed me, or hurt my feelings, or even threatened me.

    I’m pretty sure there are some people out there who are scoffing.

    As hard as that may be for some of you to believe, it’s the truth. I have only banned people for guideline violations. Pretty much all the threats I’ve received happened after I banned people, first of all, but more to the point, everyone who devolves to threats tends to have a violation first.

    There are some rare exceptions. I remember a few plugin reviews that had the sole reply of “fuck you” (or similar eloquence) and those were pre-emptively banned. Not because they swore at me, but because they clearly were incapable of following the guidelines. I didn’t want people thinking it was okay to talk like that to the community.

    Not me.

    The community.

    Call me whatever you want, I don’t care, but when you do that I sure as hell judge you.

    The Community Must Come First

    You may sense a theme here.

    Every single reason you ban someone is to help the community. Sometimes you’re protecting the community from itself, sometimes you’re doing things because there’s a grumpy lawyer standing over your shoulder (metaphorically), but at the end of the day you ban people who are actively harmful to the community.

    There will always be people who cannot be unbanned because of the danger they represent. Those people, the people who hurt the community, must stay out.

  • Automate Your Site Checks with Cron (and WPCron)

    I have a self-hosted healthchecks.io instance (mentioned previously), and I use it to make sure all the needful cron jobs for my site actually run. I have it installed via Docker, so it’s not super complex to update, and that’s how I like it.

    The first cron jobs I monitored were the ones I have setup in my crontab on the server:

    1. Run WP ‘due now’
    2. Set daily random ‘of the day’
    3. Download an iCal file
    4. Run a nightly data validity check

    I used to run these through WP Cron, but it’s a little too erratic for my needs. This is important; remember it for later, it’ll come back up.

    Once I added in those jobs, I got to thinking about the myriad WP Cron jobs that WordPress sets up on its own.

    In fact, I have a lot of them:

    +------------------------------------------------+---------------------+-----------------------+---------------+
    | hook                                           | next_run_gmt        | next_run_relative     | recurrence    |
    +------------------------------------------------+---------------------+-----------------------+---------------+
    | rediscache_discard_metrics                     | 2025-04-25 17:51:15 | now                   | 1 hour        |
    | wp_privacy_delete_old_export_files             | 2025-04-25 18:16:33 | 20 minutes 38 seconds | 1 hour        |
    | wp_update_user_counts                          | 2025-04-25 20:30:03 | 2 hours 34 minutes    | 12 hours      |
    | recovery_mode_clean_expired_keys               | 2025-04-25 22:00:01 | 4 hours 4 minutes     | 1 day         |
    | wp_update_themes                               | 2025-04-26 04:57:57 | 11 hours 2 minutes    | 12 hours      |
    | wp_update_plugins                              | 2025-04-26 04:57:57 | 11 hours 2 minutes    | 12 hours      |
    | wp_version_check                               | 2025-04-26 04:57:57 | 11 hours 2 minutes    | 12 hours      |
    [...]
    +------------------------------------------------+---------------------+-----------------------+---------------+
    

    While I could manually add them all to my tracker, the question is: how do I add the ping to the end of the command?

    The Code

    I’m not going to break down the code here; it’s far too long, and a lot of it is dependent on my specific setup.

    In essence, what you need to do is (see the sketch after this list):

    1. Hook into schedule_event
    2. If the event isn’t recurring, just run it
    3. If it is recurring, see if there’s already a ping check for that event
    4. If there’s no check, add it
    5. Now add the ping to the end of the actual cron event
    6. Run the event
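
    Here’s a minimal sketch of the idea, not the LWTV production code. To keep it short it reads the saved cron list on wp_loaded instead of hooking schedule_event, the ping base URL is a placeholder, and naming the check after the hook slug is my own convention:

    // Sketch: after WP Cron runs a recurring event, ping a HealthChecks-style
    // URL named after the hook. This runs on every request, so keep it cheap.
    add_action( 'wp_loaded', function () {
        $cron = get_option( 'cron', array() );
        $seen = array();

        foreach ( $cron as $timestamp => $hooks ) {
            if ( ! is_array( $hooks ) ) {
                continue; // Skip the 'version' marker in the cron array.
            }
            foreach ( $hooks as $hook => $events ) {
                foreach ( $events as $event ) {
                    if ( isset( $seen[ $hook ] ) || empty( $event['schedule'] ) ) {
                        continue; // One-offs just run; recurring hooks ping once.
                    }
                    $seen[ $hook ] = true;

                    // Late priority, so the ping fires after the event's own work.
                    add_action( $hook, function () use ( $hook ) {
                        wp_remote_get(
                            'https://health.example.com/ping/APIKEY/' . sanitize_title( $hook ),
                            array( 'timeout' => 10, 'blocking' => false )
                        );
                    }, PHP_INT_MAX );
                }
            }
        }
    } );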

    I actually built out code like that using Laravel recently, for a work-related project, so I had the structure already in my head and I was familiar with it. The problem, though, is WP Cron is nothing like ‘real’ cron.

    Note: If you really want to see the code, the beta code can be found in the LWTV GitHub repository. It has an issue with getting the recurrence, which is why I made this post.

    When CRON isn’t CRON

    From Wikipedia:

    The actions of cron are driven by a crontab (cron table) file, a configuration file that specifies shell commands to run periodically on a given schedule. The crontab files are stored where the lists of jobs and other instructions to the cron daemon are kept. 

    Which means crontab runs on the server time. When the server hits the time, it runs the job. Adding in jobs with the ping URL is quick:

    */10 * * * * /usr/bin/wp cron event run --due-now --path=/home/username/html/ && curl -fsS -m 10 --retry 5 -o /dev/null https://health.ipstenu.com/ping/APIKEY/due-now-every-10

    This job relies on the server being up and available, so it’s a decent metric. It always runs every ten minutes.

    But WP Cron? The ‘next run’ time (GMT) is weirdly more precise, but less reliable. 2025-04-25 17:51:15 doesn’t mean it’ll run at 5:51pm GMT and 15 seconds. It means that the next time the site is loaded after that timestamp, it will attempt to run the command.

    Since I have a scheduled ‘due now’ caller every ten minutes, if no one visits the site at 5:52pm (rounding up), then it won’t run until 6pm. That’s generally fine, but HealthChecks.io doesn’t really understand that. More to the point, I’m guesstimating when anything will actually run.

    HealthChecks.io has three ways to check time: Simple, Cron, and onCalendar. In general, I use Cron because, while it’s cryptic, I understand it. That said, there’s no decent library to convert seconds (which is what WP uses to store the interval timing) into a cron expression, which means you end up with a mess of if checks.

    A Mess of Checks

    First, pick a decent ‘default’ (I picked every hour).

    1. If the interval in seconds is not a multiple of 60, use the default.
    2. If the interval is less than 60 seconds, run every minute.
    3. Divide seconds by 60 to get minutes.
    4. If the interval in minutes is not a multiple of 60, use the default.
    5. If the interval is less than an hour (1 to 59 minutes), run every X minutes.
    6. Divide minutes by 60 to get hours.
    7. If the interval in hours is not an even number of days (divide hours by 24), use the default.
    8. If the interval is less than a day (1 to 23 hours), run every X hours.
    9. Divide hours by 24 to get days.
    10. If the days interval is not a multiple of 7, use the default.
    11. If the interval is less than a week (1 to 6 days), run every X days.
    12. Divide days by 7 to get weeks.
    13. If the interval is a week, run every week on ‘today’ at 00:00.

    You see where this is going.
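
    Condensed into code, that decision tree looks something like this (a sketch; the function name is mine, and I’ve folded the ‘less than’ checks in where they actually bite):

    // Sketch: convert a WP Cron interval (seconds) into a HealthChecks cron
    // expression, falling back to the hourly default when it doesn't divide
    // evenly. The function name is illustrative.
    function my_interval_to_cron( int $seconds, string $default = '0 * * * *' ): string {
        if ( $seconds < 60 ) {
            return '* * * * *';                // Sub-minute: check every minute.
        }
        if ( 0 !== $seconds % 60 ) {
            return $default;                   // Not whole minutes.
        }

        $minutes = $seconds / 60;
        if ( $minutes < 60 ) {
            return "*/{$minutes} * * * *";     // Every X minutes.
        }
        if ( 0 !== $minutes % 60 ) {
            return $default;                   // Not whole hours.
        }

        $hours = $minutes / 60;
        if ( $hours < 24 ) {
            return "0 */{$hours} * * *";       // Every X hours.
        }
        if ( 0 !== $hours % 24 ) {
            return $default;                   // Not whole days.
        }

        $days = $hours / 24;
        if ( $days < 7 ) {
            return "0 0 */{$days} * *";        // Every X days.
        }
        if ( 7 === $days ) {
            return '0 0 * * ' . gmdate( 'w' ); // Weekly, on 'today'.
        }

        return $default;                       // Anything longer: punt.
    }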

    And then there’s the worst part. After you’ve done all this, you have to tweak it.

    Tweaking Timing

    Why do I have to tweak it? Well for example, let’s look at the check for expired transients:

    if ( ! wp_next_scheduled( 'delete_expired_transients' ) && ! wp_installing() ) {
        wp_schedule_event( time(), 'daily', 'delete_expired_transients' );
    }
    

    This runs every day. Okay, but I don’t know exactly when it’ll run, just that I expect it to run daily. Using my logic above, the cron time would be 0 0 * * * which means … every day at midnight server time.

    But, like I said, I don’t actually know if it’ll run at midnight. In fact, it probably won’t! So I have to set up a grace period. Since I don’t know when in 24 hours something will run, I set it to 2.5 times the interval. If the interval runs every day, then I consider it a fail if it doesn’t run every two days and change.

    I really hate that, but it’s the best workaround I have at the moment.
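
    For what it’s worth, creating the check with that padding is just arithmetic over the HealthChecks Management API. A sketch (the base URL and key are placeholders, and the field names are from my reading of the API docs, so verify against your instance):

    // Sketch: register a cron-style check whose grace window is 2.5x the
    // interval. The base URL and API key are placeholders.
    function my_register_check( string $name, string $schedule, int $interval_seconds ) {
        return wp_remote_post(
            'https://health.example.com/api/v3/checks/',
            array(
                'headers' => array(
                    'X-Api-Key'    => 'MANAGEMENT_API_KEY',
                    'Content-Type' => 'application/json',
                ),
                'body'    => wp_json_encode(
                    array(
                        'name'     => $name,
                        'schedule' => $schedule,                         // e.g. '0 0 * * *'
                        'tz'       => 'Etc/UTC',
                        'grace'    => (int) ( $interval_seconds * 2.5 ), // Seconds of slack.
                    )
                ),
            )
        );
    }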

    Should You Do This?

    Honestly?

    No.

    It’s positively ridiculous to have done in the first place, and I consider it more of a Proof of Concept than anything else. With the way WP handles cron and scheduling, too, it’s just a total pain in the backside to make this work without triggering alerts all the time!

    But at the same time, it does give you a lot more insight into what your site is doing, and when it’s not doing what it should be doing! In fact, this is how I found out that my Redis cache had held on to cron jobs from plugins long since removed!

    There are benefits, but most of the time this is nothing anyone needs.

  • Cute Bears, Uptime Kuma, and Docker

    I have a confession.

    I use Docker on my laptop all the time to create a stable test environment that I can use and abuse and validate before I push to my staging servers. When it’s just WordPress, I use LocalWP which is hands down one of the best ‘just WP’ desktop tools out there.

    But I don’t really do Docker on my servers.

    Or at least, I didn’t until last week.

    Vibe Coding

    I have a new habit, where I spin up test things while sprawled on my couch watching TV and messing on my iPad. All of this was done on my iPad using:

    • GitHub App
    • Terminus
    • ChatGPT

    Oh.

    Yeah. I used ChatGPT.

    Before you judge me, I validated and tested everything and didn’t blindly trust it, but honestly I did use it for a fast lookup where I didn’t want to figure out the specific search to get to my answer.

    My coworkers joked I’ve gone beyond vibe coding with this.

    Uptime Kuma

    Uptime Kuma is a replacement for UptimeRobot.

    Kuma (クマ/熊) means bear 🐻 in Japanese.

    A little bear is watching your website.🐻🐻🐻

    I mean come on, how could I not?

    Anyway. How do I install this with Docker?

    First of all, I have a dedicated server at DreamHost, which allows me to install Docker.

    Assuming you did that, pull the image down: docker pull louislam/uptime-kuma

    I store my docker stuff in /root/docker/ in subfolders, but you can do it wherever. Some people like to use /opt/uptime-kuma/ for example. Wherever you store it, you’ll need a docker-compose.yml file:

    version: "3"
    services:
      uptime-kuma:
        image: louislam/uptime-kuma:latest
        container_name: uptime-kuma
        restart: unless-stopped
        ports:
          - "3001:3001"
        volumes:
          - ./data:/app/data
        environment:
          - TZ=America/Los_Angeles
    
    

    Remember that data folder? It’ll be created in your uptime-kuma folder. In it are things like the logos I uploaded, which is cool. Anyway, once you’re done, make sure you’re in that uptime-kuma folder and run docker-compose up -d

    Why Docker? Why Now?

    Normally my directions on how you do all this stuff are hella long and complex. There’s a reason I went with Docker when I’ve been avoiding it for (well) years on my servers.

    First of all, it’s isolated. This means it keeps its packages to itself, and I don’t have to worry about the requirements for app A messing up app B. This is hugely important the more apps you have on a server. I have Meilisearch, Uptime Kuma, and more! This gets unwieldy pretty fast.

    The other big reason is … it’s easy. I mean, come on, it’s two commands and a config file!? Compare that to the manual steps which (for one app I manage) can have pages of documentation and screenshots.

    Speaking of easy? Let’s say an upgrade went tits up on your server. Guess what? Rolling back on Docker is super easy. You basically just change the image. All your data is fine.

    I know, right? It’s super weird! But remember that data folder? That doesn’t get deleted. It lives in my /root/docker/uptime-kuma/ folder, and even if I change the docker image, that folder has my data!

    What’s the Gotcha?

    There is always a Gotcha. In my case, it’s that Nginx can be a bit of a jerk. You have to set up a thing called proxy_pass:

    server {
        server_name status.ipstenu.com;
    
        location / {
            proxy_pass http://localhost:3001;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_redirect off;
        }
    
        listen 443 ssl;
        ssl_certificate /etc/letsencrypt/live/status.ipstenu.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/status.ipstenu.com/privkey.pem;
        include /etc/letsencrypt/options-ssl-nginx.conf;
        ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
    }
    
    server {
        if ($host = status.ipstenu.com) {
            return 301 https://$host$request_uri;
        }
    
        server_name status.ipstenu.com;
        listen 80;
        return 404;
    }
    

    But see here’s where it’s extra weird.

    Now, if you go to status.ipstenu.com you’ll get a dashboard login. Won’t help you, right? But if you go to (for example) status.ipstenu.com/status/lwtv you’ll see the status page for LezWatch.TV.

    And if you go to status.lezwatchtv.com … you see the same thing.

    The Fifth Doctor, from Doctor Who, looking surprised with a caption of "Whaaaaa!?"

    Get this. Do the same thing you did for the main nginx conf file, but change your location part:

        location / {
            proxy_pass http://127.0.0.1:3001;
            proxy_http_version 1.1;
    
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
    

    That’s it. Take a bow.

    Docker for Everything?

    No.

    I didn’t do docker for my Meilisearch UI install because it’s a simple webpage that doesn’t need it (and yes, that means I did redo my Meilisearch install as Docker).

    I wouldn’t do my WordPress site as Docker … yet … mostly because it’s not an official Docker, but also because WordPress is a basic PHP app. Same as FreshRSS. They don’t need Docker, they need a basic web server, and the risks of compatibility issues are low.

    ChatGPT / AI Coding for Everything?

    No.

    I see where ‘AI’ is going, and while I don’t consider it actually artificial intelligence, it is a super complex large language model that can take your presets and assist you.

    But AI hallucinates. A lot. Like, I asked it to help me set up Meilisearch UI in docker, only to find out that only works for dev and it wanted me to hack the app. I have another project where it’s constantly telling me there’s a whole library that doesn’t exist (and never has), that will solve my problems.

    It got so bad, my boss tried it on his early-release version of an AI tool and it got worse.

    And finally … sometimes it gets super obsessive about the wrong thing. I had a config wrong for Kuma at one point, and ChatGPT kept telling me to check my nginx settings. I had to tell it “Someone will die if you ask me about my nginx settings again” to make it stop.

    What Can I Do with Kuma?

    That little bear is handling 80% of my monitoring now. The remaining 20% are cron jobs that I use HealthChecks.io for (self hosted, of course).

    What are my Kuma-chans?

    • 2 basic “Is this site up?” for GitHub and a service we use.
    • 3 slightly more complex “Is this site up and how’s the SSL cert?” for all three domains I own in this case.
    • 1 basic “Does this keyword exist on this URL?” check for making sure my site isn’t hacked.
    • 2 basic “Does this API key pair exist with this specific data?” for two APIs that do very different things.
    • 1 auth login “Do I still have access to this API?” check for a service.

    I mentioned there are 2 basic API checks, but they do different things. Here’s where it’s fun.

    Ready?

    Screenshot showing I’m checking the LWTV API for the last death and confirming the died date.

    Now that part is pretty basic. Right? Check the API, confirm the date for ‘died’ is what I think, done. And if it’s not? What do I do?

    Well I send a Slack Message:

    Screenshot of the Slack notification setup, where the LWTV Death Toll (this is a joke) tells us when the death is down. It’s not really down.

    This tells us the death is down. Which means ‘someone new has been added to the site as a most recent death.’

    Right now I have to manually go in and change the value to the valid one, but it works. And it’s one way to keep everyone updated.

  • Monstrous Site Notes

    If you have MonsterInsights Pro or Agency, you have access to Site Notes.

    They’re a great way to automagically connect your traffic reports to ‘things’ you’ve done on your site. The problem is that Site Notes are, by default, manual. To automate them, you need yet another plugin.

    Now, to a degree, this makes sense. While the out-of-the-box code is pretty clearcut, there’s one ‘catch,’ and it’s categories.

    The Basic Call

    The actual code to make a note is pretty simple:

    $note_args = array(
        'note'        => 'Title',  // The note text.
        'author_id'   => 1,        // A user ID.
        'date'        => 'date',   // The note date.
        'category_id' => 1,        // Where 1 is a category ID.
        'important'   => false,    // Or true.
    );

    monsterinsights_add_site_note( $note_args );
    

    But as I mentioned, category_id is the catch. There isn’t actually an interface to tell you what those IDs are. The automator tools hook in and set that up for you.

    Thankfully I know CLI commands and I can get a list:

    $ wp term list monsterinsights_note_category
    +---------+------------------+-----------------+-----------------+-------------+--------+-------+
    | term_id | term_taxonomy_id | name            | slug            | description | parent | count |
    +---------+------------------+-----------------+-----------------+-------------+--------+-------+
    | 850     | 850              | Blog Post       | blog-post       |             | 0      | 0     |
    | 851     | 851              | Promotion       | promotion       |             | 0      | 0     |
    | 849     | 849              | Website Updates | website-updates |             | 0      | 0     |
    +---------+------------------+-----------------+-----------------+-------------+--------+-------+
    

    But I don’t want to hardcode the IDs in.

    There are a couple of ways around this, thankfully. WordPress has a function called get_term_by(), which lets you look a term up by its slug, name, or ID. Since the list of categories shows the names, I can grab them!

    Screenshot of the Site Notes Categories page, listing the categories for site notes.

    That means I can get the term ID like this:

     $term = get_term_by( 'name', 'Blog Post', 'monsterinsights_note_category' );
    

    Now the gotcha here? You can’t rename them or you break your code.

    Example for New Posts

    Okay so here’s how it looks for a new post:

    add_action( 'publish_post', 'create_site_note_on_post_publish', 10, 2 );

    function create_site_note_on_post_publish( $post_ID, $post ) {
        // Bail if MonsterInsights isn't active.
        if ( ! function_exists( 'monsterinsights_add_site_note' ) ) {
            return;
        }

        if ( $post->post_type !== 'post' ) {
            return;
        }

        $post_title = $post->post_title;
        $term       = get_term_by( 'name', 'Blog Post', 'monsterinsights_note_category' );

        // Bail if the category has been renamed or deleted.
        if ( ! $term ) {
            return;
        }

        // Prepare the site note arguments.
        $args = array(
            'note'        => 'New Post: ' . sanitize_text_field( $post_title ),
            'author_id'   => $post->post_author,
            'date'        => $post->post_date,
            'category_id' => $term->term_id,
            'important'   => false,
        );

        monsterinsights_add_site_note( $args );
    }
    

    See? Pretty quick.