Half-Elf on Tech

Thoughts From a Professional Lesbian

Tag: coding

  • Cute Bears, Uptime Kuma, and Docker

    Cute Bears, Uptime Kuma, and Docker

    I have a confession.

    I use Docker on my laptop all the time to create a stable test environment that I can use and abuse and validate before I push to my staging servers. When it’s just WordPress, I use LocalWP which is hands down one of the best ‘just WP’ desktop tools out there.

    But I don’t really do Docker on my servers.

    Or at least, I didn’t until last week.

    Vibe Coding

    I have a new habit where I spin up test things while sprawled on my couch watching TV and messing around on my iPad. All of this was done on the iPad using:

    • GitHub App
    • Terminus
    • ChatGPT

    Oh.

    Yeah. I used ChatGPT.

    Before you judge me, I validated and tested everything and didn’t blindly trust it, but honestly I did use it for a fast lookup where I didn’t want to figure out the specific search to get to my answer.

    My coworkers joked I’ve gone beyond vibe coding with this.

    Uptime Kuma

    Uptime Kuma is a self-hosted replacement for UptimeRobot.

    Kuma (クマ/熊) means bear 🐻 in Japanese.

    A little bear is watching your website.🐻🐻🐻

    I mean come on, how could I not?

    Anyway. How do I install this with Docker?

    First of all, I have a dedicated server at DreamHost, which allows me to install Docker.

    Assuming you did that, pull the image down: docker pull louislam/uptime-kuma

    I store my docker stuff in /root/docker/ in subfolders, but you can do it wherever. Some people like to use /opt/uptime-kuma/ for example. Wherever you store it, you’ll need a docker-compose.yml file:

    version: "3"
    services:
      uptime-kuma:
        image: louislam/uptime-kuma:latest
        container_name: uptime-kuma
        restart: unless-stopped
        ports:
          - "3001:3001"
        volumes:
          - ./data:/app/data
        environment:
          - TZ=America/Los_Angeles
    
    

    Keep in mind that data folder? It’ll be created in your uptime-kuma folder. In it are things like the logos I uploaded, which is cool. Anyway, once you’re done, make sure you’re in that uptime-kuma folder and run docker-compose up -d

    Why Docker? Why Now?

    Normally my directions on how to do all this stuff are hella long and complex. There’s a reason I went with Docker when I’ve been avoiding it for (well) years on my servers.

    First of all, it’s isolated. This means it keeps its packages to itself, and I don’t have to worry about the requirements for app A messing up app B. This is hugely important the more apps you have on a server. I have Meilisearch, Uptime Kuma, and more! This gets unwieldy pretty fast.

    The other big reason is … it’s easy. I mean, come on, it’s two commands and a config file!? Compare that to the manual steps which (for one app I manage) can have pages of documentation and screenshots.

    Speaking of easy? Let’s say an upgrade went tits up on your server. Guess what? Rolling back on Docker is super easy. You basically just change the image. All your data is fine.

    I know, right? It’s super weird! But if you remember that data folder? Right so that doesn’t get deleted. It lives in my /root/docker/uptime-kuma/ folder and even if I change the docker image, that folder has my data!
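
    Here’s a minimal sketch of what that rollback looks like in practice: pin a known-good tag in docker-compose.yml instead of latest, then re-pull and re-up. The paths and the tag are examples from my setup above; adjust for yours.

    # In docker-compose.yml, change the image line to a pinned tag, e.g.:
    #   image: louislam/uptime-kuma:1
    # Then pull that image and recreate the container. The ./data folder
    # on the host is untouched, so all your monitors survive.
    cd /root/docker/uptime-kuma
    docker-compose pull
    docker-compose up -d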

    What’s the Gotcha?

    There is always a Gotcha. In my case, it’s that Nginx can be a bit of a jerk. You have to set up a thing called proxy_pass:

    server {
        server_name status.ipstenu.com;
    
        location / {
            proxy_pass http://localhost:3001;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_redirect off;
        }
    
        listen 443 ssl;
        ssl_certificate /etc/letsencrypt/live/status.ipstenu.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/status.ipstenu.com/privkey.pem;
        include /etc/letsencrypt/options-ssl-nginx.conf;
        ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
    }
    
    server {
        if ($host = status.ipstenu.com) {
            return 301 https://$host$request_uri;
        }
    
        server_name status.ipstenu.com;
        listen 80;
        return 404;
    }
    

    But see here’s where it’s extra weird.

    Now, if you go to status.ipstenu.com you’ll get a dashboard login. Won’t help you, right? But if you go to (for example) status.ipstenu.com/status/lwtv you’ll see the status page for LezWatch.TV.

    And if you go to status.lezwatchtv.com … you see the same thing.

    The Fifth Doctor, from Doctor Who, looking surprised with a caption of "Whaaaaa!?"

    Get this. Make a conf for status.lezwatchtv.com the same way you did the main nginx conf file, but change your location part:

        location / {
            proxy_pass http://127.0.0.1:3001;
            proxy_http_version 1.1;
    
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
    

    That’s it. Take a bow.

    Docker for Everything?

    No.

    I didn’t do docker for my Meilisearch UI install because it’s a simple webpage that doesn’t need it (and yes, that means I did redo my Meilisearch install as Docker).

    I wouldn’t do my WordPress site as Docker … yet … mostly because it’s not an official Docker, but also because WordPress is a basic PHP app. Same as FreshRSS. They don’t need Docker, they need a basic web server, and the risks of compatibility issues are low.

    ChatGPT / AI Coding for Everything?

    No.

    I see where ‘AI’ is going and while I don’t consider it actually artificial intelligence, it is a super complex large language model that can take your presets and assist you.

    But AI hallucinates. A lot. Like, I asked it to help me set up Meilisearch UI in Docker, only to find out that only works for dev, and it wanted me to hack the app. I have another project where it’s constantly telling me there’s a whole library that doesn’t exist (and never has) that will solve my problems.

    It got so bad, my boss tried it on his early-release version of an AI tool and it got worse.

    And finally … sometimes it gets super obsessive about the wrong thing. I had a config wrong for Kuma at one point, and ChatGPT kept telling me to check my nginx settings. I had to tell it “Someone will die if you ask me about my nginx settings again” to make it stop.

    What Can I Do with Kuma?

    That little bear is handling 80% of my monitoring now. The remaining 20% are cron jobs that I use HealthChecks.io for (self hosted, of course).

    What are my Kuma-chans?

    • 2 basic “Is this site up?” for GitHub and a service we use.
    • 3 slightly more complex “Is this site up and how’s the SSL cert?” checks – one for each of the three domains I own that are involved here.
    • 1 basic “Does this keyword exist on this URL?” check for making sure my site isn’t hacked.
    • 2 basic “Does this API key pair exist with this specific data?” for two APIs that do very different things.
    • 1 auth login “Do I still have access to this API?” check for a service.

    I mentioned there are 2 basic API checks, but they do different things. Here’s where it’s fun.

    Ready?

    Screenshot showing I'm checking the LWTV api for last death and confirming the died date.

    Now that part is pretty basic. Right? Check the API, confirm the date for ‘died’ is what I think, done. And if it’s not? What do I do?

    Well I send a Slack Message:

    Notification setup for Slack where the LWTV Death Toll (this is a joke) tells us when the death is down. It’s not really down.

    This tells us the death is down. Which means ‘someone new has been added to the site as a most recent death.’

    Right now I have to manually go in and change the value to the valid one, but it works. And it’s one way to keep everyone updated.
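
    If you’re curious what that monitor boils down to outside of Kuma, here’s a rough command-line equivalent. The endpoint path and field name are made up for illustration – Kuma’s JSON query monitor does the same comparison for you on a schedule, then fires the Slack notification.

    # Hypothetical example: fetch an API and compare the 'died' date we expect.
    # Replace the URL and the jq path with your real endpoint and field.
    EXPECTED="2025-01-01"
    ACTUAL=$(curl -s "https://example.com/wp-json/lwtv/v1/last-death" | jq -r '.died')

    if [ "$ACTUAL" != "$EXPECTED" ]; then
        echo "Death is 'down': expected $EXPECTED, got $ACTUAL"
    fi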

  • Meilisearch at Home

    Meilisearch at Home

    There are things most CMS tools are great at, and then there are things they suck at. Universally? They all suck at search when you get to scale.

    This is not really the fault of the CMS (be it WordPress, Drupal, Hugo, etc.). The problem is search is difficult to build! If it was easy, everyone would do it. The whole reason Google rose to dominance was that it made search easy and reliable. And that’s great, but not everyone is okay with relying on 3rd party services.

    I’ve used ElasticSearch (too clunky to run, a pain to customize), Lunr (decent for static sites), and even integrated Yahoo and Google searches. They all have issues.

    Recently I was building out a search tool for a private (read: internal, no access if you’re not ‘us’) service, and I was asked to do it with MeiliSearch. It was new to me. As I installed and configured it, I thought … “This could be a nice solution.”

    Build Your Instance

    When you read the directions, you’ll notice they want to install the app as root, meaning it would be one install. And that sounds okay until you start thinking about multiple sites using one instance (for example, WordPress Multisite) where you don’t want to cross-contaminate your results. Wouldn’t want posts from Ipstenu.org and Woody.com showing up on HalfElf, and all.

    There are a couple of ways around that, Multi-Tenancy and multiple Indexes. I went with the indexes for now, but I’m sure I’ll want tenancy later.

    I’m doing all this on DreamHost, because I love those weirdos, but there are pre-built images on DigitalOcean if that floats your goat:

    1. Make a dedicated server or a DreamCompute (I used the latter) – you need root access
    2. Set the server to Nginx with the latest PHP – this will allow you to make a proxy later
    3. Add your ssh key from ~/.ssh/id_rsa.pub to your SSH keys – this will let you log in as root (or an account with root access)

    Did that? Great! The actual installation is pretty easy, you can just follow the directions down the line.
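
    For reference, the quick version of those directions is roughly this – the official install script plus a manual run. Their production guide adds a systemd service and proper key handling, so treat this as a sketch, not the whole setup.

    # Download the latest Meilisearch binary (official install script).
    curl -L https://install.meilisearch.com | sh

    # Run it bound to localhost with a master key; nginx will proxy to it later.
    ./meilisearch --master-key="CHANGE_ME_SUPER_SECRET" --http-addr 127.0.0.1:7700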

    Integration with WordPress

    The first one I integrated with was WordPress and for that I used Yuto.

    It’s incredibly straightforward to set up. Get your URL and your Master Key. Plunk them in. Save. Congratulations!

    On the Indices page I originally set my UIDs to ipstenu_posts and ipstenu_pages – to prevent collisions. But then I realized… I wanted the whole site on there, so I made them both ipstenu_org

    Example of Yuto's search output

    I would like to change the “Ipstenu_org” flag – something like ‘if there’s only one Index, don’t show the name’ – and then a way to customize it.

    I will note, there’s a ‘bug’ in Yuto – it has to load all your posts into a cache before it will index them, and that’s problematic if you have a massive number of posts, or if you have anti-abuse tools that block long actions like that. I made a quick WP-CLI command.

    WP-CLI Command

    The command I made is incredibly simple: wp ipstenu yuto build-index posts
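
    I won’t paste the whole thing, but the skeleton of a command like that is roughly this. The class name and the indexing action are simplified stand-ins for my code and Yuto’s internals, not the actual plugin:

    // Only load when WP-CLI is running.
    if ( defined( 'WP_CLI' ) && WP_CLI ) {

    	class Ipstenu_Yuto_Command {
    		/**
    		 * Push a post type into the Meilisearch index.
    		 *
    		 * ## EXAMPLES
    		 *     wp ipstenu yuto build-index posts
    		 *
    		 * @subcommand build-index
    		 */
    		public function build_index( $args, $assoc_args ) {
    			$post_type = $args[0] ?? 'posts';

    			$query = new WP_Query(
    				array(
    					'post_type'      => ( 'pages' === $post_type ) ? 'page' : 'post',
    					'post_status'    => 'publish',
    					'posts_per_page' => -1,
    					'fields'         => 'ids',
    				)
    			);

    			foreach ( $query->posts as $post_id ) {
    				// Hypothetical stand-in for however Yuto indexes a single post.
    				do_action( 'yuto_index_single_post', $post_id );
    			}

    			WP_CLI::success( count( $query->posts ) . " {$post_type} indexed." );
    		}
    	}

    	WP_CLI::add_command( 'ipstenu yuto', 'Ipstenu_Yuto_Command' );
    }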

    The code is fast, too. It took under a minute for over 1000 posts.

    After I made it, I shared it with the creators of Yuto, and their next release includes a version of it.

    Multiple Indexes and Tenants

    You’ll notice that I have two indexes. This is due to how the plugin works, making an index per post type. In so far as my ipstenu.org sites go, I don’t mind having them all share a tenant. After all, they’re all on a server together.

    However… This server will also house a Hugo site and my other WP site. What to do?

    The first thing I did was make a couple more API keys! They have read-write access to a specific index (the Key for “Ipstenu” has access to my ipstenu_org index and so on). That lets me manage things a lot more easily and securely.

    While Yuto will make the index, it cannot make custom keys, so I used the API:

    curl \
    -X POST 'https://example.com/keys' \
    -H 'Authorization: Bearer BEARERKEY' \
    -H 'Content-Type: application/json' \
    --data-binary '{
    "description": "Ipstenu.org",
    "actions": ["*"],
    "indexes": ["ipstenu_org"],
    "expiresAt": null
    }'
    

    That returns a JSON string with (among other things) a key that you can use in WordPress.
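
    The response looks roughly like this (trimmed, with a fake key) – the key value is the bit you paste into Yuto:

    {
      "description": "Ipstenu.org",
      "key": "a1b2c3d4e5f67890aaaabbbbccccdddd",
      "actions": ["*"],
      "indexes": ["ipstenu_org"],
      "expiresAt": null,
      "createdAt": "2025-05-01T00:00:00Z",
      "updatedAt": "2025-05-01T00:00:00Z"
    }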

    Will I look into Tenancy? Maybe. Haven’t decided yet. For now, separate indexes work for me.

  • Cookie Consent on Hugo

    Cookie Consent on Hugo

    Hugo is my favourite static site generator. I use it on a site I originally created in 1996 (yes, FLF is about to be 30!). Over the last 6 months, I’ve been totally overhauling the site from top to bottom, and one of the long-term goals I had was to add in Cookie Consent.

    Hugo Has Privacy Mode

    One of the nice things about Hugo is they have a built in handler for Privacy Mode.

    I have everything set to respect Do Not Track and use PrivacyMode whenever possible. It lightens my load a lot.
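
    If you haven’t seen it, that’s all just a few lines in Hugo’s config. A trimmed example of the sort of settings I mean, in TOML (adjust if your config is YAML):

    [privacy]
      [privacy.googleAnalytics]
        respectDoNotTrack = true
      [privacy.youtube]
        privacyEnhanced = true
      [privacy.vimeo]
        enableDNT = true
      [privacy.twitter]
        enableDNT = true
        simple = true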

    Built Into Hinode: CookieYes

    The site makes use of Hinode, which has built-in support for cookie consent… Kind of. They use the CookieYes service, which I get but I hate. I don’t want to offload things to a service. In fact, part of the reason I moved off WordPress and onto Hugo for the site was GDPR.

    I care deeply about privacy. People have a right to privacy, and to opt in to tracking. A huge part of that is minimizing the amount of data from your own websites that gets sent around to other people or saved on your own servers/services!

    Obviously I need to know some things. I need to know how many mobile users there are so I can make it better. I need to know what pages have high traffic so I can expand them. If everyone is going to a recap page only to try and find a gallery, then I need to make those more prominent.

    In other words, I need Analytics.

    And the best analytics? Still Google.

    Sigh.

    Alternatively: CookieConsent

    I did my research. I checked a lot of services (free and paid), I looked into solutions people have implemented for Hugo, and then I thought there has to be a simple tool for this.

    There is.

    CookieConsent.

    CookieConsent is a free, open-source (MIT) mini-library, which allows you to manage scripts — and consequently cookies — in full GDPR fashion. It is written in vanilla js and can be integrated in any web platform/framework.

    And yes, you can integrate with Hugo.

    How to Add CookieConsent to Hugo

    First, download it. I have node set up to handle a lot of things, so I went with the easy route:

    npm i vanilla-cookieconsent@3.1.0

    Next, I have to add the dist files to my site. I added in a command to my package.json:

    "build:cookie": "cp node_modules/vanilla-cookieconsent/dist/cookieconsent.css static/css/cookieconsent.css && cp node_modules/vanilla-cookieconsent/dist/cookieconsent.umd.js static/js/cookieconsent.umd.js",
    

    If you’re familiar with Hinode, you may notice I’m not using the suggested way to integrate JS. If I were doing this in pure Hinode, I’d be copying the files to assets/js/critical/functional/ instead of my static folder.

    I tried. It errors out:

    Error: error building site: EXECUTE-AS-TEMPLATE: failed to transform "/js/critical.bundle-functional.js" (text/javascript): failed to parse Resource "/js/critical.bundle-functional.js" as Template:: template: /js/critical.bundle-functional.js:210: function "revisionMessage" not defined
    

    I didn’t feel like debugging the whole mess.

    Anyway, once you get those files in, you need to make another special js file. This file is your configuration or initialization file. And if you look at the configuration directions, it’s a little lacking.

    Instead of that, go look at their Google Example! This gives you everything you need to comply with Google Tag Manager Consent Mode, which matters to me. I copied that into /static/js/cookieconsent-init.js and customized it. Like, I don’t have ads so I left that out.
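
    To give you an idea of the shape, here’s a heavily trimmed sketch of an init file, assuming the v3 API and that the UMD build exposes a CookieConsent global when imported. Go copy their actual Google Consent Mode example, because it also handles the gtag consent updates I’m leaving out here:

    // /static/js/cookieconsent-init.js (trimmed sketch)
    import '/js/cookieconsent.umd.js'; // exposes the CookieConsent global

    CookieConsent.run({
    	categories: {
    		necessary: { enabled: true, readOnly: true },
    		analytics: {}, // Google Analytics only loads if this is accepted
    	},
    	language: {
    		default: 'en',
    		translations: {
    			en: {
    				consentModal: {
    					title: 'We use cookies',
    					description: 'Analytics only, and only if you say yes.',
    					acceptAllBtn: 'Accept all',
    					acceptNecessaryBtn: 'Reject',
    					showPreferencesBtn: 'Manage preferences',
    				},
    				preferencesModal: {
    					title: 'Cookie preferences',
    					acceptAllBtn: 'Accept all',
    					acceptNecessaryBtn: 'Reject',
    					savePreferencesBtn: 'Save preferences',
    					sections: [
    						{
    							title: 'Strictly Necessary',
    							description: 'None at this time; this is a placeholder.',
    							linkedCategory: 'necessary',
    						},
    						{
    							title: 'Analytics',
    							description: 'Google Analytics, opt-in only.',
    							linkedCategory: 'analytics',
    						},
    					],
    				},
    			},
    		},
    	},
    });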

    Add Your JS and CSS

    I already have a customized header (/layouts/partials/head/head.html) for unrelated reasons, but if you don’t, copy the one from Hinode core over and add in this above the call for the SEO file:

    <link rel="stylesheet" href="/css/cookieconsent.css">

    Then you’ll want to edit /layouts/partials/templates/script.html and add in this at the bottom:

    <script type="module" src="/js/cookieconsent-init.js"></script>

    Since your init file contains the call to the main consent code, you’re good to go!

    The Output

    When you visit the site, you’ll see this:

    Screenshot of a cookie consent page.

    Now there’s a typo in this one; it should say “That means if you click “Reject” right now, you won’t get any Google Analytics cookies.” I fixed it before I pushed anything to production. But I made sure to specify that so people know right away.

    If you click on manage preferences, you’ll get the expanded version:

    Screenshot of expanded cookie preferences.

    The language is dry as the desert because it has to meet Google’s specifications.

    As for ‘strictly necessary cookies’?

    At this time we have NO necessary cookies. This option is here as a placeholder in case we have to add any later. We’ll notify you if that happens.

    And how will I notify them? By using Revision Management.
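
    For the curious, CookieConsent has a revision setting for exactly this – my understanding (check their docs) is that bumping the number re-prompts anyone whose consent was given against an older revision:

    // Sketch: added to the config passed to CookieConsent.run().
    CookieConsent.run({
    	revision: 1, // bump when the cookie policy changes to re-ask for consent
    	// ...rest of the config from cookieconsent-init.js
    });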

  • Replacing the W in Your Admin Bar

    Replacing the W in Your Admin Bar

    This is a part of ‘white labeling’, which basically means rebranding.

    When you have a website whose users don’t really need to mess with WordPress or learn all about it (because you properly manage your own documentation for your writers), that W in your admin toolbar is a bit odd, to say the least.

    This doesn’t mean I don’t want my editors to know what WordPress is – we have a whole about page, and the powered-by remains everywhere in the admin pages – but that logo…

    Well anyway, I decided to nuke it.

    Remove What You Don’t Need

    First I made a function that removes everything I don’t need:

    function cleanup_admin_bar(): void {
    	global $wp_admin_bar;
    
    	// Remove customizer link
    	$wp_admin_bar->remove_menu( 'customize' );
    
    	// Remove WP Menu things we don't need.
    	$wp_admin_bar->remove_menu( 'contribute' );
    	$wp_admin_bar->remove_menu( 'wporg' );
    	$wp_admin_bar->remove_menu( 'learn' );
    	$wp_admin_bar->remove_menu( 'support-forums' );
    	$wp_admin_bar->remove_menu( 'feedback' );
    
    	// Remove comments
    	$wp_admin_bar->remove_node( 'comments' );
    }
    
    add_action( 'wp_before_admin_bar_render', 'cleanup_admin_bar' );
    

    I also removed the comments node and the customizer because this site doesn’t use comments, and also how many times am I going to that Customizer anyway? Never. But the number of times I mis-click on my tablet? A lot.

    But you may notice I did not delete everything. That’s on purpose.

    Make Your New Nodes

    Instead of recreating everything, I reused some things!

    function filter_admin_bar( $wp_admin_bar ): void {
    	// Remove Howdy and Name, only use avatar.
    	$my_account = $wp_admin_bar->get_node( 'my-account' );
    
    	if ( isset( $my_account->title ) ) {
    		preg_match( '/<img.*?>/', $my_account->title, $matches );
    
    		$title = ( isset( $matches[0] ) ) ? $matches[0] : '<img src="fallback/images/avatar.png" alt="SITE NAME" class="avatar avatar-26 photo" height="26" width="26" />';
    
    		$wp_admin_bar->add_node(
    			array(
    				'id'    => 'my-account',
    				'title' => $title,
    			)
    		);
    	}
    
    	// Customize the Logo
    	$wp_logo = $wp_admin_bar->get_node( 'wp-logo' );
    	if ( isset( $wp_logo->title ) ) {
    		$logo = file_get_contents( get_theme_file_path( 'images/site-logo.svg' ) ); // Filesystem path to the theme's SVG.
    		$wp_admin_bar->add_node(
    			array(
    				'id'     => 'wp-logo',
    				'title'  => '<span class="my-site-icon" role="img">' . $logo . '</span>',
    				'parent' => null,
    				'href'   => '/wp-admin/admin.php?page=my-site',
    				'group'  => null,
    				'meta'   => array(
    					'menu_title' => 'About SITE',
    				),
    			),
    		);
    		$wp_admin_bar->add_node(
    			array(
    				'parent' => 'wp-logo',
    				'id'     => 'about',
    				'title'  => __( 'About SITE' ),
    				'href'   => '/about/',
    			)
    		);
    		$wp_admin_bar->add_node(
    			array(
    				'parent' => 'wp-logo-external',
    				'id'     => 'documentation',
    				'title'  => __( 'Documentation' ),
    				'href'   => 'https://docs.example.com/',
    			)
    		);
    		$wp_admin_bar->add_node(
    			array(
    				'parent' => 'wp-logo-external',
    				'id'     => 'slack',
    				'title'  => __( 'Slack' ),
    				'href'   => 'https://example.slack.com/',
    			)
    		);
    		$wp_admin_bar->add_node(
    			array(
    				'parent' => 'wp-logo-external',
    				'id'     => 'validation',
    				'title'  => __( 'Data Validation' ),
    				'href'   => '/wp-admin/admin.php?page=data_check',
    			)
    		);
    		$wp_admin_bar->add_node(
    			array(
    				'parent' => 'wp-logo-external',
    				'id'     => 'monitors',
    				'title'  => __( 'Monitors' ),
    				'href'   => '/wp-admin/admin.php?page=monitor_check',
    			)
    		);
    	}
    }
    
    add_filter( 'admin_bar_menu', 'filter_admin_bar', PHP_INT_MAX );
    

    I replaced the default ‘about’ with my site’s about URL. I replaced the documentation node with my own. Everything else is new.

    Now the image… I have an SVG of our logo, and by making my span class named my-site-icon, I was able to spin up some simple CSS:

    #wpadminbar span.my-site-icon svg {
    	width: 25px;
    	height: 25px;
    }
    

    And there you are.

    Result

    What’s it look like?

    Screenshot of an alternative dropdown of what was the W logo into something customized.

    All those links are to our tools or documentation.

  • Editor Sidebar Madness and Gutenberg

    Editor Sidebar Madness and Gutenberg

    Way back when WP was simple and adding a sidebar to the editor was a simple metabox, I had a very straightforward setup with a box that, on page load, would tell you if the data in the post matched the remote API, and if not would tell you what to update.

    My plan was to have it update on refresh, and then auto-correct if you press a button (because sometimes the API would be wrong, or grab the wrong account — loose searching on people’s names is always rough). But my plan was ripped asunder by this new editor thingy, Gutenberg.

    I quickly ported over my simple solution and added a note “This does not refresh on page save, sorry.” and moved on.

    Years later brings us to 2024, with November being my ‘funemployment’ month, where I worked on little things to keep myself sharp before I started at AwesomeMotive. Most of the work was fixing security issues, moving the plugin into the theme so there was less to manage, modernizing processes, upgrading libraries, and so on.

    But one of those things was also making a real Gutenbergized sidebar that autoupdates (mostly).

    What Are We Doing?

    On LezWatch.TV, we collect actor information that is public and use it to generate our pages. So if you wanted to add in an actor, you put in their name, a bio, an image, and then all this extra data like websites, social media, birthdates, and so on. WikiData actually uses us to help determine gender and sexuality, so we pride ourselves on being accurate and regularly updated.

    In return, we use WikiData to help ensure we’re showing the data for the right person! We do that via a simple search based on either their WikiData ID (QID), IMDb ID, or their name. The last one is pretty loose since actors can have the same name now (oh for the days when SAG didn’t allow that…). We use the QID to override the search in cases where it grabs the wrong person.

    I built a CLI command that, once a week, checks actors for data validity. It makes sure the IMDb IDs and socials are formatted properly, it makes sure the dates are valid, and it pings WikiData to make sure the birth/death etc data is also correct.

    With that already in place, all I needed was to call it.

    You Need an API

    The first thing you need to know about this is that Gutenberg uses the REST (JSON) API to pull in data. You can have it pull in everything via custom post meta, but as I already have a CLI tool run by cron to generate that information, making a custom API call was actually going to be faster.
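
    The endpoint itself is a plain REST route registration; the shape is roughly this (simplified – the real thing also routes IMDb IDs, QIDs, and slugs, and you’d want a stricter permission_callback than a public one):

    add_action(
    	'rest_api_init',
    	function () {
    		register_rest_route(
    			'lwtv/v1',
    			'/wikidata/(?P<id>\d+)',
    			array(
    				'methods'             => 'GET',
    				'permission_callback' => '__return_true', // Sketch only; lock this down for real use.
    				'callback'            => function ( WP_REST_Request $request ) {
    					$wikidata = ( new Debug_Actors() )->check_actors_wikidata( (int) $request['id'] );
    					return rest_ensure_response( array( $wikidata ) );
    				},
    			)
    		);
    	}
    );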

    I went ahead and made it work in a few different ways (you can call it by IMDb ID, post ID, QID, and the slug) because I planned for the future. But really all any of them are doing is a search like this:

    	/**
    	 * Get Wikidata by Post ID
    	 *
    	 * @param int $post_id
    	 * @return array
    	 */
    	private function get_wikidata_by_post_id( $post_id ): array {
    		if ( get_post_type( $post_id ) !== 'post_type_actors' ) {
    			return array(
    				'error' => 'Invalid post ID',
    			);
    		}
    
    		$wikidata = ( new Debug_Actors() )->check_actors_wikidata( $post_id );
    
    		return array( $wikidata );
    	}
    

    The return array is a list of the data we check for, and each entry is either a string of ‘match’/true, or an array with WikiData’s value and our value.
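
    To make that concrete, a single entry in the returned array looks something like this (the names and values are invented for illustration):

    [
    	{
    		"123": {
    			"id": 123,
    			"name": "Example Actor",
    			"wikidata": "Q1234567",
    			"birth": "match",
    			"death": {
    				"wikidata": "2024-01-01",
    				"lwtv": "2023-12-31"
    			},
    			"instagram": "match"
    		}
    	}
    ]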

    Making a Sidebar

    Since we have our API already, we can jump to making a sidebar. Traditionally in Gutenberg, we make a sidebar panel for the block we’re adding in. If you want a custom panel, you can add in one with an icon on the Publish Bar:

    A screenshot of the Gutenberg Publish bar, with the Jetpack and YoastSEO icons

    While that’s great and all, I wanted this to be on the side by default for the actor, like Categories and Tags. Since YoastSEO (among others) can do this, I knew it had to be possible:

    Screenshot of the Gutenberg Sidebar, with YoastSEO's custom example.

    But when I started to search around, all anyone told me was how I had to use a block to make that show.

    I knew it was bullshit.

    Making a Sidebar – The Basics

    The secret sauce I was looking for is decidedly simple.

    	const MetadataPanel = () => (
    		<PluginDocumentSettingPanel
    			name="lwtv-wikidata-panel"
    			title="WikiData Checker"
    			className="lwtv-wikidata-panel"
    		>
    			<PanelRow>
    				<div>
    					[PANEL STUFF HERE]
    				</div>
    			</PanelRow>
    		</PluginDocumentSettingPanel>
    	);
    

    I knew about PanelRow but finding PluginDocumentSettingPanel took me far longer than it should have! The documentation doesn’t actually tell you ‘You can use this to make a panel on the Document settings!’ but it is obvious once you’ve done it.
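
    For completeness: the panel doesn’t render until you register it as a plugin. That part is tiny (PluginDocumentSettingPanel ships in @wordpress/editor as of WordPress 6.6; older code imported it from @wordpress/edit-post):

    import { registerPlugin } from '@wordpress/plugins';
    import { PluginDocumentSettingPanel } from '@wordpress/editor';

    // MetadataPanel is the component from the snippet above.
    registerPlugin( 'lwtv-wikidata', { render: MetadataPanel } );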

    Making it Refresh

    This is a pared down version of the code, which I will link to at the end.

    The short and simple way is that I’m using useEffect to refresh:

    useEffect(() => {
    		if (
    			postId &&
    			postType === 'post_type_actors' &&
    			postStatus !== 'auto-draft'
    		) {
    			const fetchData = async () => {
    				setIsLoading(true);
    				try {
    					const response = await fetch(
    						`${siteURL}/wp-json/lwtv/v1/wikidata/${postId}`
    					);
    					if (!response.ok) {
    						throw new Error(
    							`HTTP error! status: ${response.status}`
    						);
    					}
    					const data = await response.json();
    					setApiData(data);
    					setError(null);
    				} catch (err) {
    					setError(err.message);
    					setApiData(null);
    				} finally {
    					setIsLoading(false);
    				}
    			};
    			fetchData();
    		}
    	}, [postId, postType, postStatus, siteURL, refreshCounter]);
    

    The reason I’m checking post type and status is that I don’t want to run this if it’s not an actor, or if it’s not at least a real draft.

    The constants are as follows:

    	const [apiData, setApiData] = useState(null);
    	const [isLoading, setIsLoading] = useState(true);
    	const [error, setError] = useState(null);
    

    Right below this I have a second check:

    	if (postType !== 'post_type_actors') {
    		return null;
    	}
    

    That simply prevents the rest of the code from trying to run. You have to put it after the useEffect because React’s rules of hooks require every hook to run before any early return. If you have a return before it, it fails to pass a lint (and I enforce linting on this project).

    How it works: on page load of an auto-draft, it tells you to save the post before it will check. As soon as you do save the post (with a title), it refreshes and tells you what it found, which speeds up initial data entry!

    But then there’s the issue of refreshing on demand.

    HeartBeat Flatline – Use a Button

    I did, at one point, have a functioning heartbeat checker. That can get pretty expensive and it calls the API too many times if you leave a window open. Instead, I made a button that uses a constant:

    const [refreshCounter, setRefreshCounter] = useState(0);
    

    and a handler:

    	const handleRefresh = () => {
    		setRefreshCounter((prevCounter) => prevCounter + 1);
    	};
    

    Then the button itself:

    <Button
    	variant="secondary"
    	onClick={handleRefresh}
    	isBusy={isLoading}
    >
    	{isLoading ? 'Refreshing...' : 'Refresh'}
    </Button>
    

    Works like a champ.

    Output the Data

    The data output is the interesting bit, because I’m still not fully satisfied with how it looks.

    I set up a filter to process the raw data:

    	const filteredPersonData = (personData) => {
    		const filteredEntries = Object.entries(personData).filter(
    			([key, value]) => {
    				const lowerCaseValue = String(value).toLowerCase();
    				return (
    					lowerCaseValue !== 'match' &&
    					lowerCaseValue !== 'n/a' &&
    					!['wikidata', 'id', 'name'].includes(key.toLowerCase())
    				);
    			}
    		);
    		return Object.fromEntries(filteredEntries);
    	};
    

    The API returns the WikiData ID, the post ID, and the name, none of which need to be checked here, so I remove them. Otherwise it capitalizes things so they look grown up.

    Then there’s a massive amount of code in the panel itself:

    <div>
    	{isLoading && <Spinner />}
    	{error && <p>Error: {error}</p>}
    	{!isLoading && !error && apiData && (
    		<>
    			{apiData.map((item) => {
    				const [key, personData] = Object.entries(item)[0];
    				const filteredData = filteredPersonData(personData);
    				return (
    					<div key={key}>
    						<h3>{personData.name}</h3>
    						{Object.keys(filteredData).length ===
    						0 ? (
    							<p>[All data matches]</p>
    						) : (
    							<div>
    								{Object.entries( filteredData ).map(([subKey, value]) => (
    									<div key={subKey}>
    										<h4>{subKey}</h4>
    										{value && (
    											<ul>
    												{Object.entries( value ).map( ([ innerKey, innerValue, ]) => (
    													<li key={ innerKey }>
    														<strong>{innerKey}</strong>:{' '} <code>{innerValue || 'empty'}</code>
    													</li>
    												)
    										)}
    											</ul>
    										)}
    										{!value && 'empty'}
    									</div>
    								))}
    							</div>
    						)}
    					</div> );
    			})}
    		</>
    	)}
    
    	{!isLoading && !error && !apiData && (
    		<p>No data found for this post.</p>
    	)}
    	<Button />
    </div>
    

    <Spinner /> is from '@wordpress/components' and is a default component.

    Now, innerKey is actually not a simple output. I wanted to capitalize the first letter and unlike PHP, there’s no ucfirst() function, so it looks like this:

    {innerKey.charAt( 0 ).toUpperCase() + innerKey.slice( 1 )}

    Sometimes JavaScript makes me want to drink.

    The Whole Code

    You can find the whole block, with some extra bits I didn’t mention but I do for quality of life, on our GitHub repo for LezWatch.TV. We use the @wordpress/scripts tooling to generate the blocks.

    The source code is located in folders within /src/ – that’s where most (if not all) of your work will happen. Each new block gets a folder and in each folder there must be a block.json file that stores all the metadata. Read Metadata in block.json if this is your first rodeo.

    The blocks will automagically build anytime anyone runs npm run build from the main folder. You can also run npm run build from the blocks folder.

    All JS and CSS from blocks defined in blocks/*/block.json get pushed to the blocks/build/ folder via the build process. PHP scans this directory and registers blocks in php/class-blocks.php. The overall code is called from the /blocks/src/blocks.php file.

    The build subfolders are NOT stored in Git, because they don’t need to be. We run the build via actions on deploy.

    What It Looks Like

    Gif showing how it auto-loads the data on save.

    One of the things I want to do is have a way to say “use WikiData” or “use ours” to fill in each individual data point. Sadly sometimes it gets confused and uses the wrong person (there’s a Katherine with an E Hepburn!) so we do have a QID override, but even so there can be incorrect data.

    WikiData often lists socials and websites that are defunct. Mostly that’s X these days.

    Takeaways

    It’s a little frustrating that I either have to do a complex ‘normal’ custom meta box with a lot of extra JS, or make an API. Since I already had the API, it’s no big, but sometimes I wish Gutenberg was a little more obvious with refreshing.

    Also finding the right component to use for the sidebar panel was absolutely maddening. Every single document was about doing it with a block, and we weren’t adding blocks.

    Finally, errors in JavaScript remain the worst. Because I’m compiling code for Gutenberg, I have to hunt down the likely culprit, which is hard when you’re still newish to the code! Thankfully, JJ from XWP was an angel and taught me tons in my 2 years there. I adore her.

  • Small Hugo, Big Images

    Small Hugo, Big Images

    In working on my Hugo powered gallery, I ran into some interesting issues, one of which was from my theme.

    I use a Bootstrap powered theme called Hinode. And Hinode is incredibly powerful, but it’s also very complicated and confusing, as Hugo’s documentation is still in the alpha stage. It’s like the early days of other web apps, which means a lot of what I’m trying to do is trial and error. Don’t ask me when I learned about errorf, okay?

    My primary issues are all about images, sizing and filing them.

    Image Sizes

    When you make a gallery, logically you want to save the large image as the zoom in, right? Click to embiggen. The problem is, in Hinode, you can load an image in a few ways:

    1. Use the standard old img tag
    2. Call the default Hugo shortcode of {{< figure >}}
    3. Call a Hinode shortcode of {{< image >}}
    4. Use a partial

    Now, that last one is a little weird, but basically you can’t use a shortcode inside a theme file. While WordPress has a do_shortcode() method, you use partial calls in Hugo. And you have to know not only the exact file, but if your theme even uses partials! Some don’t, and you’re left reconstructing the whole thing.

    Hinode has the shortcodes in partials and I love them for it! To call an image using the partial, it looks like this:

    {{- partial "assets/image.html" (dict
        "url" $imgsrc
        "ratio" "1x1"
        "wrapper" "mx-auto"
        "title" $title)
    -}}
    

    That call will generate webp versions of my image, saved to the static image folder (which is a post of its own), and include the source sets so it’s handy and responsive.

    What it isn’t is resized. Meaning if I used that code, I would end up with the actual huge ass image used. Now, imagine I have a gallery with 30 images. That’s 30 big ass images. Not good. Not good for speed, not good for anyone.

    I ended up making my own version of assets/image.html (called lightbox-image.html) and in there I have this code:

     {{ with resources.Get $imagefile }}
         {{ $image    = .Fill "250x250" }}
         {{ $imagesrc = $image.RelPermalink }}
     {{ end }}
    

    If the file is local, which is what that get call is doing, it uses the file ($imagefile is the ‘path’ to the file) to make a 250×250 sized version and then grabs that new permalink to use.

    {{ if $imagefile }}
         <img src="{{ $imagesrc }}" class="img-fluid img-lightbox" alt="{{ $title }}" data-toggle="lightbox" data-gallery="gallery" data-src="{{ $imagefile }}">
     {{ end }}
    

    Boom!

    This skips over all the responsive resizing, but then again I don’t need that when I’m making a gallery, do I?

    Remote Image Sizes

    Now let’s add in a wrinkle. What if it’s a remote image? What if I passed a URL of a remote image? For this, you need to know that on build, that Hinode code will download the image locally. Local images load faster. I can’t use the same get, I need the remote get, but now I have a new issue!

    Where are the images saved? In the img folder. No subfolders, the one folder. And I have hundreds of images to add.

    Mathematically speaking, you can put about four billion files in a folder before it’s an issue for the computers. But if you’ve ever tried to find a specific file to check in a folder that large, you’ve seriously reconsidered your career trajectory. And practically speaking, the more files, the slower the processing.

    Anyone else remember when GoDaddy announced a maximum of 1024 files in a folder on their shared hosting? While I question the long term efficacy of that, I do try to limit my files. I know that the get/remote get calls will tack a randomized name on the end, but I’d like them to be organized.

    Since I’m calling all my files from my assets server (assets.example.com), I can organize them there and replicate that in my build. And my method to do that is as follows:

         {{ if eq $image "" }}
             {{- $imageurl = . | absURL -}}
             {{- $imagesrc = . | absURL -}}
     
             {{ $dir := (urls.Parse $imageurl).Path }}
    
             {{ with resources.GetRemote $imageurl | resources.Copy $dir }}
                 {{ with .Err }}
                     {{ warnf "%s" . }}
                 {{ else }}
                     {{ $image    = . }}
                     {{ $imageurl = $image.Permalink }}
                     {{ $image    = $image.Fill "250x250" }}
                     {{ $imagesrc = $image.RelPermalink }}
                 {{ end }}
             {{ end }}
         {{ end }}
    

    I know that shit is weird. It pairs with the earlier code. If you don’t create the image variable, then you know the image wasn’t local. So I started by getting the image url from ‘this’ (that’s what the period is) as an absolute url. Then I used the path of the url to generate my local folder path! When I use the Copy command with the pipe, it will automatically use that as the destination.

    Conclusion

    You can totally make images that are resized in Hugo. While I wish that was easier to do, most people aren’t as worried as I am about storing the images on their repository, so it’s less of an issue. Also galleries on Hugo are fairly rare.

    I’m starting some work now with Hinode for better gallery-esque support, and we’ll see where that goes. Maybe I’ll get a patch in there!