Half-Elf on Tech

Thoughts From a Professional Lesbian

    Cute Bears, Uptime Kuma, and Docker

    I have a confession.

    I use Docker on my laptop all the time to create a stable test environment that I can use and abuse and validate before I push to my staging servers. When it’s just WordPress, I use LocalWP which is hands down one of the best ‘just WP’ desktop tools out there.

    But I don’t really do Docker on my servers.

    Or at least, I didn’t until last week.

    Vibe Coding

    I have a new habit where I spin up test things while sprawled on my couch, watching TV and messing about on my iPad. All of this was done using:

    • GitHub App
    • Terminus
    • ChatGPT

    Oh.

    Yeah. I used ChatGPT.

    Before you judge me, I validated and tested everything and didn’t blindly trust it, but honestly I did use it for a fast lookup where I didn’t want to figure out the specific search to get to my answer.

    My coworkers joked I’ve gone beyond vibe coding with this.

    Uptime Kuma

    Uptime Kuma is a self-hosted replacement for UptimeRobot.

    Kuma (クマ/熊) means bear 🐻 in Japanese.

    A little bear is watching your website.🐻🐻🐻

    I mean come on, how could I not?

    Anyway. How do I install this with Docker?

    First of all, I have a dedicated server at DreamHost, which allows me to install Docker.

    Assuming you did that, pull the image down: docker pull louislam/uptime-kuma

    I store my docker stuff in /root/docker/ in subfolders, but you can do it wherever. Some people like to use /opt/uptime-kuma/ for example. Wherever you store it, you’ll need a docker-compose.yml file:

    version: "3"
    services:
      uptime-kuma:
        image: louislam/uptime-kuma:latest
        container_name: uptime-kuma
        restart: unless-stopped
        ports:
          - "3001:3001"
        volumes:
          - ./data:/app/data
        environment:
          - TZ=America/Los_Angeles
    
    

    Remember that data folder in the volumes line? It’ll be created inside your uptime-kuma folder, and in it are things like the logos I uploaded, which is cool. Anyway, once you’re done, make sure you’re in that uptime-kuma folder and run docker-compose up -d
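
    If you want to double-check the little bear actually woke up, a couple of standard Docker commands will confirm it (run them from that same folder):

    # is the container actually running?
    docker ps --filter "name=uptime-kuma"
    # tail the logs while it starts up
    docker-compose logs -f uptime-kuma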

    Why Docker? Why Now?

    Normally my directions on how to do all this stuff are hella long and complex. There’s a reason I went with Docker when I’ve been avoiding it on my servers for (well) years.

    First of all, it’s isolated. This means it keeps its packages to itself, and I don’t have to worry about the requirements for app A messing up app B. This is hugely important the more apps you have on a server. I have Meilisearch, Uptime Kuma, and more! It gets unwieldy pretty fast.

    The other big reason is … it’s easy. I mean, come on, it’s two commands and a config file!? Compare that to the manual steps which (for one app I manage) can have pages of documentation and screenshots.

    Speaking of easy? Let’s say an upgrade went tits up on your server. Guess what? Rolling back on Docker is super easy. You basically just change the image. All your data is fine.

    I know, right? It’s super weird! But remember that data folder? That doesn’t get deleted. It lives in my /root/docker/uptime-kuma/ folder, and even if I change the Docker image, that folder still has my data!
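
    And “change the image” is literally a one-line edit. Here’s a minimal sketch, pinning a known-good tag instead of :latest (the tag below is just an example; use whatever version you’re rolling back to):

    services:
      uptime-kuma:
        # pin a specific version to roll back; ./data is untouched
        image: louislam/uptime-kuma:1

    Then docker-compose up -d again, and Docker recreates the container from that image with your data folder exactly where you left it.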

    What’s the Gotcha?

    There is always a Gotcha. In my case, it’s that Nginx can be a bit of a jerk. You have to set up a thing called proxy_pass:

    server {
        server_name status.ipstenu.com;
    
        location / {
            proxy_pass http://localhost:3001;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_redirect off;
        }
    
        listen 443 ssl;
        ssl_certificate /etc/letsencrypt/live/status.ipstenu.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/status.ipstenu.com/privkey.pem;
        include /etc/letsencrypt/options-ssl-nginx.conf;
        ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
    }
    
    server {
        if ($host = status.ipstenu.com) {
            return 301 https://$host$request_uri;
        }
    
        server_name status.ipstenu.com;
        listen 80;
        return 404;
    }
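
    Quick aside: those ssl_certificate lines come from Let’s Encrypt. If you haven’t issued a cert for the new subdomain yet, certbot’s nginx plugin will create those files (and even that redirect block) for you. Swap in your own domain:

    # issue and install a Let's Encrypt cert for the subdomain
    certbot --nginx -d status.example.com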
    

    But see here’s where it’s extra weird.

    Now, if you go to status.ipstenu.com you’ll get the dashboard login, which won’t help a random visitor. But if you go to (for example) status.ipstenu.com/status/lwtv you’ll see the public status page for LezWatch.TV.

    And if you go to status.lezwatchtv.com … you see the same thing.

    Image: The Fifth Doctor, from Doctor Who, looking surprised, with a caption of “Whaaaaa!?”

    Get this. Do the same thing you did for the main nginx conf file, this time for status.lezwatchtv.com, but change your location block:

        location / {
            proxy_pass http://127.0.0.1:3001;
            proxy_http_version 1.1;
    
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
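
    Whichever conf file you touch, test and reload nginx afterwards or none of this takes effect (you may need sudo, or service nginx reload on non-systemd boxes):

    # validate the config, then reload without dropping connections
    nginx -t && systemctl reload nginx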
    

    That’s it. Take a bow.

    Docker for Everything?

    No.

    I didn’t do Docker for my Meilisearch UI install, because it’s a simple webpage that doesn’t need it (and yes, that means I did redo my Meilisearch install itself as Docker).

    I wouldn’t do my WordPress site as Docker … yet … mostly because there’s no official Docker image from the WordPress project itself, but also because WordPress is a basic PHP app. Same as FreshRSS. They don’t need Docker, they need a basic web server, and the risk of compatibility issues is low.

    ChatGPT / AI Coding for Everything?

    No.

    I see where ‘AI’ is going, and while I don’t consider it actual artificial intelligence, it is a super complex large language model that can take your presets and assist you.

    But AI hallucinates. A lot. Like, I asked it to help me set up Meilisearch UI in Docker, only to find out that only works for dev, and it wanted me to hack the app. I have another project where it’s constantly telling me there’s a whole library, one that doesn’t exist and never has, that will solve my problems.

    It got so bad, my boss tried it on his early-release version of an AI tool and it got worse.

    And finally … sometimes it gets super obsessive about the wrong thing. I had a config wrong for Kuma at one point, and ChatGPT kept telling me to check my nginx settings. I had to tell it “Someone will die if you ask me about my nginx settings again” to make it stop.

    What Can I Do with Kuma?

    That little bear is handling 80% of my monitoring now. The remaining 20% is cron jobs, which I use HealthChecks.io for (self-hosted, of course).
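
    If you haven’t used HealthChecks before, the cron side is just a ping URL tacked onto the end of the job. The domain and UUID below are made up; yours come from your own install:

    # run the backup, and ping HealthChecks only if it exited cleanly
    0 4 * * * /usr/local/bin/backup.sh && curl -fsS -m 10 --retry 5 https://hc.example.com/ping/abc-123 > /dev/null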

    What are my Kuma-chans?

    • 2 basic “Is this site up?” checks for GitHub and a service we use.
    • 3 slightly more complex “Is this site up and how’s the SSL cert?” checks, one for each of the three domains I own.
    • 1 basic “Does this keyword exist on this URL?” check to make sure my site isn’t hacked.
    • 2 basic “Does this API key pair exist with this specific data?” checks for two APIs that do very different things.
    • 1 auth login “Do I still have access to this API?” check for a service.

    I mentioned there are 2 basic API checks, but they do different things. Here’s where it’s fun.

    Ready?

    Screenshot: checking the LWTV API for the most recent death and confirming the ‘died’ date.
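
    In Kuma terms, that’s a JSON query monitor, and it’s morally equivalent to this sketch (the endpoint and date here are made up; the real ones are in the screenshot):

    # fail (non-zero exit) if the most recent 'died' date isn't the one we expect
    curl -s https://example.com/wp-json/lwtv/v1/last-death | jq -e '.died == "2025-01-01"'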

    Now that part is pretty basic. Right? Check the API, confirm the date for ‘died’ is what I think, done. And if it’s not? What do I do?

    Well I send a Slack Message:

    Screenshot: the Slack notification setup, where the LWTV Death Toll (this is a joke) tells us when “the death is down.” It’s not really down.

    This tells us “the death is down,” which means someone new has been added to the site as the most recent death.

    Right now I have to manually go in and change the value to the valid one, but it works. And it’s one way to keep everyone updated.

    Docked Libraries and DNSMasque

    I use Docker at work because it’s what we have to use to build sites on specific servers (like WordPress VIP). Honestly, I like it because everything is nicely isolated, but it has been known to have some … let’s call them ‘quirks’ with the newer M1 and M2 chip Macs.

    You know what I have.

    And I had some drama on a Friday afternoon, because why the hell not.

    Drama 1: libc-bin

    After working just fine all day, I quit out of Docker to run something else that likes to use a lot of processing power. When I popped back in and started my container, it blew a gasket on me:

    21.39 Setting up npm (9.2.0~ds1-1) ...
    21.40 Processing triggers for libc-bin (2.36-9+deb12u1) ...
    22.74 npm ERR! code EBADENGINE
    22.74 npm ERR! engine Unsupported engine
    22.74 npm ERR! engine Not compatible with your version of node/npm: npm@10.1.0
    22.74 npm ERR! notsup Not compatible with your version of node/npm: npm@10.1.0
    22.74 npm ERR! notsup Required: {"node":"^18.17.0 || >=20.5.0"}
    22.74 npm ERR! notsup Actual:   {"npm":"9.2.0"}
    

    I was using Node 16 as a holdover from some work I was doing back at DreamHost. Of course the first thing I did was update Node to 18, but no matter what I tried, Docker would not run the right version!

    I looked at the Dockerfile and saw this section:

    # Development tooling dependencies
    RUN apt-get update \
    	&& apt-get install -y --no-install-recommends \
    		bash less default-mysql-client git zip unzip \
    		nodejs npm curl pv \
    		msmtp libz-dev libmemcached-dev \
    	&& npm install --global npm@latest \
    	&& rm -rf /var/lib/apt/lists/*
    

    When I broke it apart, it was clear that apt-get install was installing the wrong version of Node! (Debian bookworm’s nodejs package is stuck at 18.13, and npm 10 wants 18.17 or newer, hence the tantrum.)

    Maddening. I wrestled around, and finally I added FROM node:18 to the top of my Dockerfile to see if that declaration would work (after all, Docker has supported multiple FROM calls since 2017).

    To my surprise, it did! Almost…

    Drama 2: PHP

    It broke PHP.

    While you can have multiple FROM calls in modern Docker, you have to make sure that you place them properly. Since node was the new thing, I put it as the second FROM call. In doing so, it overrode the PHP call a few lines down, causing the build to fail on PHP.

    Since our Dockerfile is using something like the older version of the default file (I know, I know, but I can only update a few things at a time; I have 4 tickets out there to modernize things, including PHPCS), I had to move the call for FROM wordpress:php8.1-fpm to right above the line where we call PHP.
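
    In other words, the last FROM wins: whatever comes after it is what ends up in the final image. A stripped-down sketch of the ordering that worked for me (your base images and steps will differ):

    # earlier FROM lines are just build stages
    FROM node:18

    # the final FROM defines the image you actually run, so PHP steps go below it
    FROM wordpress:php8.1-fpm
    # ... PHP/WordPress setup, apt-get installs, etc. ...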

    You may not have that. But if you add in that from node and it breaks PHP telling you it can’t run? That’s probably why.

    Huzzah, the build works! PHP and Node are happy … but then …

    Drama 3: UDP Ports

    Guess what happened next?

    Error response from daemon: Ports are not available: exposing port UDP 0.0.0.0:53 -> 0.0.0.0:0: listen udp 0.0.0.0:53: bind: address already in use
    

    I shouted “FOR FUCKS SAKE!”
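
    (If you want to see who’s squatting on the port before you start shouting, lsof will name names:)

    # what's already bound to UDP 53 on this Mac?
    sudo lsof -nP -iUDP:53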

    I did not want to edit the compose.yml file. Not one bit. If it works for everyone else, it should stay that way.

    Thankfully, Docker has a way to override with compose.override.yml (we have a docker-compose.override.yml file, old school name, old school project). I was already using that because, for some dumb reason, the only way to get the database working was to add in this:

    services:
      db:
        platform: linux/amd64
    

    It’s not a super dumb reason, it’s a Docker vs M1 chipset reason. Still, it was annoying as all get out.

    Naturally, I assumed override meant anything I put in there would override the default. So I tossed this in:

      dnsmasq:
        ports:
          - "54:54/udp"
    

    Turns out, override doesn’t mean override when it comes to ports. Compose merges the override file into the original, and for a list like ports that means append, not replace, so it was still trying to bind port 53. I went to the documentation, and there was no mention of how to truly override ports. On to the Googles! A lot of searching finally landed me on an old, closed ticket that implied I could do exactly what I wanted.

    After reading that whole long-ass ticket, I determined the syntax is this:

      dnsmasq:
        ports: !reset
          - "54:54/udp"
    

    Finally I could get my site up and running! No idea why that isn’t documented, since the dnsmasq port conflict is a known compatibility issue on macOS.
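
    For the record, the whole override file ended up looking something like this (a sketch; the service names match our compose file, yours will differ):

    services:
      db:
        # Docker vs M1 chipset: force the amd64 image for the database
        platform: linux/amd64
      dnsmasq:
        # replace the default port list instead of merging with it
        ports: !reset
          - "54:54/udp"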

    Drama 4: Update All The Things

    Then I did the dumbest thing on the planet.

    I updated the following:

    • Docker
    • Everything I use from HomeBrew
    • Mac OS (only the minor – I’m not jumping to the next major release on a Friday afternoon!)

    But I have good news this time.

    Everything worked.

    Final Drama…

    The thing was, I didn’t really want to have to edit the Dockerfile. It’s a tetchy beast, and we all use different OSes (I’m on an M1, one person is on an older Mac, one is on Windows). Cross-compatibility is a big issue. I did test a number of alternatives (like defining engines in package.json and even mimicking our other setups).

    At the end of the day, nothing worked except the above. No matter what, libc-bin was certain I was using npm 10, which I wasn’t. I wish I’d found a better solution, but so far, this is the only way I could convince my Dockerfile that when you install via apt, you really want the latest Node.