Half-Elf on Tech

Thoughts From a Professional Lesbian

Author: Ipstenu (Mika Epstein)

  • Meilisearch at Home

    Meilisearch at Home

    There are things most CMS tools are great at, and then there are things they suck at. Universally? They all suck at search when you get to scale.

    This is not really the fault of the CMS (be it WordPress, Drupal, Hugo, etc.). The problem is that search is difficult to build! If it were easy, everyone would do it. The whole reason Google rose to dominance was that it made search easy and reliable. And that’s great, but not everyone is okay with relying on 3rd party services.

    I’ve used Elasticsearch (too clunky to run, a pain to customize), Lunr (decent for static sites), and even integrated Yahoo and Google searches. They all have issues.

    Recently I was building out a search tool for a private (read: internal, no access if you’re not ‘us’) service, and I was asked to do it with Meilisearch. It was new to me. As I installed and configured it, I thought … “This could be a nice solution.”

    Build Your Instance

    When you read the directions, you’ll notice they want you to install the app as root, meaning it would be one install. And that sounds okay until you start thinking about multiple sites using one instance (for example, WordPress Multisite) where you don’t want to cross-contaminate your results. Wouldn’t want posts from Ipstenu.org and Woody.com showing up on HalfElf, and all.

    There are a couple of ways around that: multi-tenancy and multiple indexes. I went with indexes for now, but I’m sure I’ll want tenancy later.
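    One detail worth knowing before you pick index names: Meilisearch index UIDs only allow letters, digits, hyphens, and underscores, so domains with dots need mangling. A hedged sketch of deriving a per-site UID from a domain (the helper name is mine, not part of any API):

```javascript
// Sketch: derive a per-site Meilisearch index UID from a domain, so
// each site on a shared instance gets its own index. Meilisearch
// index UIDs may only contain letters, digits, hyphens, underscores.
function indexUidForSite(domain) {
  return domain.toLowerCase().replace(/[^a-z0-9-]/g, '_');
}

console.log(indexUidForSite('Ipstenu.org')); // "ipstenu_org"
```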

    I’m doing all this on DreamHost, because I love those weirdos, but there are pre-built images on DigitalOcean if that floats your goat:

    1. Make a dedicated server or a DreamCompute (I used the latter) – you need root access
    2. Set the server to Nginx with the latest PHP – this will allow you to make a proxy later
    3. Add your SSH key from ~/.ssh/id_rsa.pub to your SSH keys – this will let you log in as root (or as an account with root access)

    Did that? Great! The actual installation is pretty easy; you can just follow the directions down the line.

    Integration with WordPress

    The first CMS I integrated with was WordPress, and for that I used Yuto.

    It’s incredibly straightforward to set up. Get your URL and your Master Key. Plunk them in. Save. Congratulations!

    On the Indices page I originally set my UIDs to ipstenu_posts and ipstenu_pages to prevent collisions. But then I realized… I wanted the whole site in there, so I made them both ipstenu_org.

    Example of Yuto's search output
    Yuto Screenshot

    I would like a way to change the “Ipstenu_org” label, something like ‘if there’s only one index, don’t show the name,’ plus a way to customize it.

    I will note there’s a ‘bug’ in Yuto: it has to load all your posts into a cache before it will index them, and that’s problematic if you have a massive number of posts, or if you have anti-abuse tools that block long-running actions like that. So I made a quick WP-CLI command.

    WP-CLI Command

    The command I made is incredibly simple: wp ipstenu yuto build-index posts

    The code is fast, too. It took under a minute for over 1000 posts.

    After I made it, I shared it with the creators of Yuto, and their next release includes a version of it.

    Multiple Indexes and Tenants

    You’ll notice that I have two indexes. This is due to how the plugin works, making an index per post type. Insofar as my ipstenu.org sites go, I don’t mind having them all share a tenant. After all, they’re all on a server together.

    However… This server will also house a Hugo site and my other WP site. What to do?

    The first thing I did was make a couple more API keys! They have read-write access to a specific index (the key for “Ipstenu” has access to my ipstenu_org index, and so on). That lets me manage things a lot more easily and securely.

    While Yuto will make the index, it cannot make custom keys, so I used the API:

    curl \
      -X POST 'https://example.com/keys' \
      -H 'Authorization: Bearer BEARERKEY' \
      -H 'Content-Type: application/json' \
      --data-binary '{
        "description": "Ipstenu.org",
        "actions": ["*"],
        "indexes": ["ipstenu_org"],
        "expiresAt": null
      }'
    

    That returns a JSON string with (among other things) a key that you can use in WordPress.
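    If you’d rather script the key creation than use curl, here’s a sketch of the same request with fetch. The host, BEARERKEY, and index names are the same placeholders as in the curl call above:

```javascript
// Sketch: create a scoped Meilisearch key, mirroring the curl call.
function keyRequestBody(description, indexes) {
  // Scope the key to specific indexes; all actions, no expiry.
  return JSON.stringify({
    description,
    actions: ['*'],
    indexes,
    expiresAt: null,
  });
}

async function createKey(host, masterKey, description, indexes) {
  const res = await fetch(`${host}/keys`, {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${masterKey}`,
      'Content-Type': 'application/json',
    },
    body: keyRequestBody(description, indexes),
  });
  // The response JSON includes the generated "key" among other fields.
  return (await res.json()).key;
}
```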

    Will I look into tenancy? Maybe. Haven’t decided yet. For now, separate indexes work for me.

  • Cookie Consent on Hugo

    Cookie Consent on Hugo

    Hugo is my favourite static site generator. I use it on a site I originally created in 1996 (yes, FLF is about to be 30!). Over the last 6 months, I’ve been totally overhauling the site from top to bottom, and one of the long-term goals I had was to add in Cookie Consent.

    Hugo Has Privacy Mode

    One of the nice things about Hugo is they have a built in handler for Privacy Mode.

    I have everything set to respect Do Not Track and use PrivacyMode whenever possible. It lightens my load a lot.
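    For reference, those settings live under the privacy key in the site config. A minimal sketch (the exact service names and options vary by Hugo version, so check the privacy docs for your install):

```toml
# Sketch of Hugo's built-in privacy settings (site config).
[privacy]
  [privacy.youtube]
    # Embeds use youtube-nocookie.com
    privacyEnhanced = true
  [privacy.vimeo]
    # Send Do Not Track to Vimeo embeds
    enableDNT = true
```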

    Built Into Hinode: CookieYes

    The site makes use of Hinode, which has built-in support for cookie consent… kind of. It uses the CookieYes service, which I get, but I hate it. I don’t want to offload things to a service. In fact, part of the reason I moved off WordPress and onto Hugo for the site was GDPR.

    I care deeply about privacy. People have a right to privacy, and to opt in to tracking. A huge part of that is minimizing the amount of data from your own websites that gets sent around to other people or saved on your own servers and services!

    Obviously I need to know some things. I need to know how many mobile users there are so I can make it better. I need to know what pages have high traffic so I can expand them. If everyone is going to a recap page only to try and find a gallery, then I need to make those more prominent.

    In other words, I need Analytics.

    And the best analytics? Still Google.

    Sigh.

    Alternatively: CookieConsent

    I did my research. I checked a lot of services (free and paid), I looked into solutions people have implemented for Hugo, and then I thought: there has to be a simple tool for this.

    There is.

    CookieConsent.

    CookieConsent is a free, open-source (MIT) mini-library, which allows you to manage scripts — and consequently cookies — in full GDPR fashion. It is written in vanilla js and can be integrated in any web platform/framework.

    And yes, you can integrate with Hugo.

    How to Add CookieConsent to Hugo

    First, download it. I have node set up to handle a lot of things, so I went with the easy route:

    npm i vanilla-cookieconsent@3.1.0

    Next, I have to add the dist files to my site. I added in a command to my package.json:

    "build:cookie": "cp node_modules/vanilla-cookieconsent/dist/cookieconsent.css static/css/cookieconsent.css && cp node_modules/vanilla-cookieconsent/dist/cookieconsent.umd.js static/js/cookieconsent.umd.js",
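    If you don’t want to remember to run that by hand, you can chain it into your main build. A sketch (the build script name and the hugo command here are assumptions about your setup):

```json
{
  "scripts": {
    "build": "npm run build:cookie && hugo"
  }
}
```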
    

    If you’re familiar with Hinode, you may notice I’m not using the suggested way to integrate JS. If I were doing this in pure Hinode, I’d be copying the files to assets/js/critical/functional/ instead of my static folder.

    I tried. It errors out:

    Error: error building site: EXECUTE-AS-TEMPLATE: failed to transform "/js/critical.bundle-functional.js" (text/javascript): failed to parse Resource "/js/critical.bundle-functional.js" as Template:: template: /js/critical.bundle-functional.js:210: function "revisionMessage" not defined
    

    I didn’t feel like debugging the whole mess.

    Anyway, once you get those files in, you need to make another special JS file. This file is your configuration, or initialization, file. And if you look at the configuration directions, they’re a little lacking.

    Instead of that, go look at their Google Example! This gives you everything you need to comply with Google Tag Manager Consent Mode, which matters to me. I copied that into /static/js/cookieconsent-init.js and customized it. Like, I don’t have ads so I left that out.
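    Under the hood, the Consent Mode piece boils down to mapping the categories a visitor accepted onto the signals gtag expects. A rough sketch of that mapping (this is not the CookieConsent API itself, and the category names 'analytics' and 'ads' are assumptions; use whatever you defined in your config):

```javascript
// Sketch: map accepted cookie categories to Google Consent Mode v2
// signals. Category names ('analytics', 'ads') are assumptions here.
function consentSignals(acceptedCategories) {
  const granted = (category) =>
    acceptedCategories.includes(category) ? 'granted' : 'denied';
  return {
    analytics_storage: granted('analytics'),
    ad_storage: granted('ads'),
    ad_user_data: granted('ads'),
    ad_personalization: granted('ads'),
  };
}
// In the init file you'd pass something like this to
// gtag('consent', 'update', ...) whenever consent changes.
```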

    Add Your JS and CSS

    I already have a customized header (/layouts/partials/head/head.html) for unrelated reasons, but if you don’t, copy the one from Hinode core over and add in this above the call for the SEO file:

    <link rel="stylesheet" href="/css/cookieconsent.css">

    Then you’ll want to edit /layouts/partials/templates/script.html and add in this at the bottom:

    <script type="module" src="/js/cookieconsent-init.js"></script>

    Since your init file contains the call to the main consent code, you’re good to go!

    The Output

    When you visit the site, you’ll see this:

    Screenshot of a cookie consent page.
    Screenshot

    Now, there’s a typo in this one: it should say “That means if you click “Reject” right now, you won’t get any Google Analytics cookies.” I fixed it before I pushed anything to production. But I made sure to spell that out so people know right away.

    If you click on manage preferences, you’ll get the expanded version:

    Screenshot of expanded cookie preferences.
    Screenshot

    The language is dry as the desert because it has to meet Google’s specifications.

    As for ‘strictly necessary cookies’?

    At this time we have NO necessary cookies. This option is here as a placeholder in case we have to add any later. We’ll notify you if that happens.

    And how will I notify them? By using Revision Management.

  • Replacing the W in Your Admin Bar

    Replacing the W in Your Admin Bar

    This is a part of ‘white labeling’, which basically means rebranding.

    When you have a website whose users don’t really need to mess with WordPress, or learn all about it (because you properly manage your own documentation for your writers), that W in your admin toolbar is a bit odd, to say the least.

    This doesn’t mean I don’t want my editors to know what WordPress is; we have a whole about page, and the powered-by remains everywhere in the admin pages. But that logo…

    Well anyway, I decided to nuke it.

    Remove What You Don’t Need

    First I made a function that removes everything I don’t need:

    function cleanup_admin_bar(): void {
    	global $wp_admin_bar;
    
    	// Remove customizer link
    	$wp_admin_bar->remove_menu( 'customize' );
    
    	// Remove WP Menu things we don't need.
    	$wp_admin_bar->remove_menu( 'contribute' );
    	$wp_admin_bar->remove_menu( 'wporg' );
    	$wp_admin_bar->remove_menu( 'learn' );
    	$wp_admin_bar->remove_menu( 'support-forums' );
    	$wp_admin_bar->remove_menu( 'feedback' );
    
    	// Remove comments
    	$wp_admin_bar->remove_node( 'comments' );
    }
    
    add_action( 'wp_before_admin_bar_render','cleanup_admin_bar' );
    

    I also removed the comments node and the customizer because this site doesn’t use comments, and how many times am I going to that Customizer anyway? Never. But the number of times I mis-click on my tablet? A lot.

    But you may notice I did not delete everything. That’s on purpose.

    Make Your New Nodes

    Instead of recreating everything, I reused some things!

    function filter_admin_bar( $wp_admin_bar ): void {
    	// Remove Howdy and Name, only use avatar.
    	$my_account = $wp_admin_bar->get_node( 'my-account' );
    
    	if ( isset( $my_account->title ) ) {
    		preg_match( '/<img.*?>/', $my_account->title, $matches );
    
    		$title = ( isset( $matches[0] ) ) ? $matches[0] : '<img src="fallback/images/avatar.png" alt="SITE NAME" class="avatar avatar-26 photo" height="26" width="26" />';
    
    		$wp_admin_bar->add_node(
    			array(
    				'id'    => 'my-account',
    				'title' => $title,
    			)
    		);
    	}
    
    	// Customize the Logo
    	$wp_logo = $wp_admin_bar->get_node( 'wp-logo' );
    	if ( isset( $wp_logo->title ) ) {
    		$logo = file_get_contents( get_stylesheet_directory() . '/images/site-logo.svg' );
    		$wp_admin_bar->add_node(
    			array(
    				'id'     => 'wp-logo',
    				'title'  => '<span class="my-site-icon" role="img">' . $logo . '</span>',
    				'parent' => null,
    				'href'   => '/wp-admin/admin.php?page=my-site',
    				'group'  => null,
    				'meta'   => array(
    					'menu_title' => 'About SITE',
    				),
    			),
    		);
    		$wp_admin_bar->add_node(
    			array(
    				'parent' => 'wp-logo',
    				'id'     => 'about',
    				'title'  => __( 'About SITE' ),
    				'href'   => '/about/',
    			)
    		);
    		$wp_admin_bar->add_node(
    			array(
    				'parent' => 'wp-logo-external',
    				'id'     => 'documentation',
    				'title'  => __( 'Documentation' ),
    				'href'   => 'https://docs.example.com/',
    			)
    		);
    		$wp_admin_bar->add_node(
    			array(
    				'parent' => 'wp-logo-external',
    				'id'     => 'slack',
    				'title'  => __( 'Slack' ),
    				'href'   => 'https://example.slack.com/',
    			)
    		);
    		$wp_admin_bar->add_node(
    			array(
    				'parent' => 'wp-logo-external',
    				'id'     => 'validation',
    				'title'  => __( 'Data Validation' ),
    				'href'   => '/wp-admin/admin.php?page=data_check',
    			)
    		);
    		$wp_admin_bar->add_node(
    			array(
    				'parent' => 'wp-logo-external',
    				'id'     => 'monitors',
    				'title'  => __( 'Monitors' ),
    				'href'   => '/wp-admin/admin.php?page=monitor_check',
    			)
    		);
    	}
    }
    
    add_filter( 'admin_bar_menu', 'filter_admin_bar', PHP_INT_MAX );
    

    I replaced the default ‘about’ with my site’s about URL. I replaced the documentation node with my own. Everything else is new.

    Now the image… I have an SVG of our logo, and by making my span class named my-site-icon, I was able to spin up some simple CSS:

    #wpadminbar span.my-site-icon svg {
    	width: 25px;
    	height: 25px;
    }
    

    And there you are.

    Result

    What’s it look like?

    Screenshot of an alternative dropdown of what was the W logo into something customized.

    All those links are to our tools or documentation.

  • Editor Sidebar Madness and Gutenberg

    Editor Sidebar Madness and Gutenberg

    Way back when WP was simple and adding a sidebar to the editor meant a simple metabox, I had a very straightforward setup: a box that, on page load, would tell you if the data in the post matched the remote API and, if not, what to update.

    My plan was to have it update on refresh, and then auto-correct if you press a button (because sometimes the API would be wrong, or grab the wrong account — loose searching on people’s names is always rough). But my plan was ripped asunder by this new editor thingy, Gutenberg.

    I quickly ported over my simple solution and added a note “This does not refresh on page save, sorry.” and moved on.

    Years later brings us to 2024, with November being my ‘funemployment’ month, where I worked on little things to keep myself sharp before I started at AwesomeMotive. Most of the work was fixing security issues, moving the plugin into the theme so there was less to manage, modernizing processes, upgrading libraries, and so on.

    But one of those things was also making a real Gutenbergized sidebar that autoupdates (mostly).

    What Are We Doing?

    On LezWatch.TV, we collect actor information that is public and use it to generate our pages. So if you wanted to add in an actor, you put in their name, a bio, an image, and then all this extra data like websites, social media, birthdates, and so on. WikiData actually uses us to help determine gender and sexuality, so we pride ourselves on being accurate and regularly updated.

    In return, we use WikiData to help ensure we’re showing the data for the right person! We do that via a simple search based on either their WikiData ID (QID), IMDb ID, or their name. The last one is pretty loose since actors can have the same name now (oh for the days when SAG didn’t allow that…). We use the QID to override the search in cases where it grabs the wrong person.

    I built a CLI command that, once a week, checks actors for data validity. It makes sure the IMDb IDs and socials are formatted properly, it makes sure the dates are valid, and it pings WikiData to make sure the birth/death etc data is also correct.

    With that already in place, all I needed was to call it.

    You Need an API

    The first thing you need to know is that Gutenberg uses the JSON API to pull in data. You could have it pull everything in via custom post meta, but as I already have a CLI tool run by cron to generate that information, making a custom API call was actually going to be faster.

    I went ahead and made it work in a few different ways (you can call it by IMDb ID, post ID, QID, and the slug) because I planned for the future. But really all any of them are doing is a search like this:

    	/**
    	 * Get Wikidata by Post ID
    	 *
    	 * @param int $post_id
    	 * @return array
    	 */
    	private function get_wikidata_by_post_id( $post_id ): array {
    		if ( get_post_type( $post_id ) !== 'post_type_actors' ) {
    			return array(
    				'error' => 'Invalid post ID',
    			);
    		}
    
    		$wikidata = ( new Debug_Actors() )->check_actors_wikidata( $post_id );
    
    		return array( $wikidata );
    	}
    

    The return array is a list of the data we check for; each field is either the string ‘matches’ (or true), or an array containing WikiData’s value and our value.
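    To make that shape concrete, here’s a hedged sketch of what a response entry might look like and how you’d pick out the mismatches. The field names are made up for illustration; the real API returns whatever the validator checks:

```javascript
// Sketch of the response shape described above: each checked field is
// either 'matches' (or true), or a pair of values to reconcile.
// Field names here are hypothetical.
const example = {
  birthday: 'matches',
  instagram: { wikidata: 'old_handle', ours: 'new_handle' },
};

// Keep only the fields that need attention.
const mismatches = Object.entries(example)
  .filter(([, value]) => value !== 'matches' && value !== true)
  .map(([field]) => field);
// mismatches → ['instagram']
```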

    Making a Sidebar

    Since we have our API already, we can jump to making a sidebar. Traditionally in Gutenberg, we make a sidebar panel for the block we’re adding in. If you want a custom panel, you can add one with an icon on the Publish Bar:

    A screenshot of the Gutenberg Publish bar, with the Jetpack and YoastSEO icons

    While that’s great and all, I wanted this to be on the side by default for the actor, like Categories and Tags. Since YoastSEO (among others) can do this, I knew it had to be possible:

    Screenshot of the Gutenberg Sidebar, with YoastSEO's custom example.

    But when I started to search around, all anyone told me was how I had to use a block to make that show.

    I knew it was bullshit.

    Making a Sidebar – The Basics

    The secret sauce I was looking for is decidedly simple.

    	const MetadataPanel = () => (
    		<PluginDocumentSettingPanel
    			name="lwtv-wikidata-panel"
    			title="WikiData Checker"
    			className="lwtv-wikidata-panel"
    		>
    			<PanelRow>
    				<div>
    					[PANEL STUFF HERE]
    				</div>
    			</PanelRow>
    		</PluginDocumentSettingPanel>
    	);
    

    I knew about PanelRow, but finding PluginDocumentSettingPanel took me far longer than it should have! The documentation doesn’t actually tell you ‘you can use this to make a panel in the Document settings!’ but it is obvious once you’ve done it.

    Making it Refresh

    This is a pared down version of the code, which I will link to at the end.

    The short and simple way is that I’m using useEffect to refresh:

    useEffect(() => {
    		if (
    			postId &&
    			postType === 'post_type_actors' &&
    			postStatus !== 'auto-draft'
    		) {
    			const fetchData = async () => {
    				setIsLoading(true);
    				try {
    					const response = await fetch(
    						`${siteURL}/wp-json/lwtv/v1/wikidata/${postId}`
    					);
    					if (!response.ok) {
    						throw new Error(
    							`HTTP error! status: ${response.status}`
    						);
    					}
    					const data = await response.json();
    					setApiData(data);
    					setError(null);
    				} catch (err) {
    					setError(err.message);
    					setApiData(null);
    				} finally {
    					setIsLoading(false);
    				}
    			};
    			fetchData();
    		}
    	}, [postId, postType, postStatus, siteURL, refreshCounter]);
    

    The reason I’m checking post type and status is that I don’t want to try to run this if it’s not an actor, or if it’s not at least a real draft.

    The constants are as follows:

    	const [apiData, setApiData] = useState(null);
    	const [isLoading, setIsLoading] = useState(true);
    	const [error, setError] = useState(null);
    

    Right below this I have a second check:

    	if (postType !== 'post_type_actors') {
    		return null;
    	}
    

    That simply prevents the rest of the code from trying to run. You have to put it after the useEffect because of React’s Rules of Hooks: hooks have to run in the same order on every render, so an early return before one fails the lint (and I enforce linting on this project).

    How it works is on page load of an auto-draft, it tells you to save the post before it will check. As soon as you do save the post (with a title), it refreshes and tells you what it found, speeding up initial data entry!

    But then there’s the issue of refreshing on demand.

    HeartBeat Flatline – Use a Button

    I did, at one point, have a functioning heartbeat checker. That can get pretty expensive and it calls the API too many times if you leave a window open. Instead, I made a button that uses a constant:

    const [refreshCounter, setRefreshCounter] = useState(0);
    

    and a handler:

    	const handleRefresh = () => {
    		setRefreshCounter((prevCounter) => prevCounter + 1);
    	};
    

    Then the button itself:

    <Button
    	variant="secondary"
    	onClick={handleRefresh}
    	isBusy={isLoading}
    >
    	{isLoading ? 'Refreshing...' : 'Refresh'}
    </Button>
    

    Works like a champ.

    Output the Data

    The data output is the interesting bit, because I’m still not fully satisfied with how it looks.

    I set up a filter to process the raw data:

    	const filteredPersonData = (personData) => {
    		const filteredEntries = Object.entries(personData).filter(
    			([key, value]) => {
    				const lowerCaseValue = String(value).toLowerCase();
    				return (
    					lowerCaseValue !== 'match' &&
    					lowerCaseValue !== 'n/a' &&
    					!['wikidata', 'id', 'name'].includes(key.toLowerCase())
    				);
    			}
    		);
    		return Object.fromEntries(filteredEntries);
    	};
    

    The API returns the WikiData ID, the post ID, and the name, none of which need to be checked here, so I remove them. Otherwise it capitalizes things so they look grown up.

    Then there’s a massive amount of code in the panel itself:

    <div>
    	{isLoading && <Spinner />}
    	{error && <p>Error: {error}</p>}
    	{!isLoading && !error && apiData && (
    		<>
    			{apiData.map((item) => {
    				const [key, personData] = Object.entries(item)[0];
    				const filteredData = filteredPersonData(personData);
    				return (
    					<div key={key}>
    						<h3>{personData.name}</h3>
    						{Object.keys(filteredData).length ===
    						0 ? (
    							<p>[All data matches]</p>
    						) : (
    							<div>
    								{Object.entries( filteredData ).map(([subKey, value]) => (
    									<div key={subKey}>
    										<h4>{subKey}</h4>
    										{value && (
    											<ul>
    												{Object.entries( value ).map( ([ innerKey, innerValue, ]) => (
    													<li key={ innerKey }>
    														<strong>{innerKey}</strong>:{' '} <code>{innerValue || 'empty'}</code>
    													</li>
    												)
    										)}
    											</ul>
    										)}
    										{!value && 'empty'}
    									</div>
    								))}
    							</div>
    						)}
    					</div> );
    			})}
    		</>
    	)}
    
    	{!isLoading && !error && !apiData && (
    		<p>No data found for this post.</p>
    	)}
    	<Button />
    </div>
    

    <Spinner /> is from '@wordpress/components' and is a default component.

    Now, innerKey is actually not a simple output. I wanted to capitalize the first letter, and unlike PHP, JS has no ucfirst() function, so it looks like this:

    {innerKey.charAt(0).toUpperCase() + innerKey.slice(1)}
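    If you use it more than once, it’s worth pulling out into a tiny helper:

```javascript
// A standalone ucfirst() equivalent, since JS has no built-in one.
const ucfirst = (str) => str.charAt(0).toUpperCase() + str.slice(1);
```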

    Sometimes JavaScript makes me want to drink.

    The Whole Code

    You can find the whole block, with some extra bits I didn’t mention but I do for quality of life, on our GitHub repo for LezWatch.TV. We use the @wordpress/scripts tooling to generate the blocks.

    The source code is located in folders within /src/ – that’s where most (if not all) of your work will happen. Each new block gets a folder and in each folder there must be a block.json file that stores all the metadata. Read Metadata in block.json if this is your first rodeo.
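    If you’ve never seen one, a minimal block.json looks something like this. The names and paths here are illustrative, not LezWatch.TV’s actual file:

```json
{
  "$schema": "https://schemas.wp.org/trunk/block.json",
  "apiVersion": 3,
  "name": "lwtv/example-block",
  "title": "Example Block",
  "category": "widgets",
  "editorScript": "file:./index.js"
}
```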

    The blocks will automagically build anytime anyone runs npm run build from the main folder. You can also run npm run build from the blocks folder.

    All JS and CSS from blocks defined in blocks/*/block.json get pushed to the blocks/build/ folder via the build process. PHP scans this directory and registers blocks in php/class-blocks.php. The overall code is called from the /blocks/src/blocks.php file.

    The build subfolders are NOT stored in Git, because they don’t need to be. We run the build via actions on deploy.

    What It Looks Like

    Gif showing how it auto-loads the data on save.

    One of the things I want to do is have a way to say “use WikiData” or “use ours” to fill in each individual data point. Sadly sometimes it gets confused and uses the wrong person (there’s a Katherine with an E Hepburn!) so we do have a QID override, but even so there can be incorrect data.

    WikiData often lists socials and websites that are defunct. Mostly that’s X these days.

    Takeaways

    It’s a little frustrating that I either have to do a complex ‘normal’ custom meta box with a lot of extra JS, or make an API. Since I already had the API, it’s no big deal, but sometimes I wish Gutenberg was a little more obvious about refreshing.

    Also finding the right component to use for the sidebar panel was absolutely maddening. Every single document was about doing it with a block, and we weren’t adding blocks.

    Finally, errors in JavaScript remain the worst. Because I’m compiling code for Gutenberg, I have to hunt down the likely culprit, which is hard when you’re still newish to the code! Thankfully, JJ from XWP was an angel and taught me tons in my 2 years there. I adore her.

  • Small Hugo, Big Images

    Small Hugo, Big Images

    In working on my Hugo powered gallery, I ran into some interesting issues, one of which was from my theme.

    I use a Bootstrap-powered theme called Hinode. Hinode is incredibly powerful, but it’s also very complicated and confusing, as its documentation is still in the alpha stage. It’s like the early days of other web apps, which means a lot of what I’m trying to do is trial and error. Don’t ask me when I learned about errorf, okay?

    My primary issues are all about images, sizing and filing them.

    Image Sizes

    When you make a gallery, logically you want to save the large image as the zoom in, right? Click to embiggen. The problem is, in Hinode, you can load an image in a few ways:

    1. Use the standard old img tag
    2. Call the default Hugo shortcode of {{< figure >}}
    3. Call a Hinode shortcode of {{< image >}}
    4. Use a partial

    Now, that last one is a little weird, but basically you can’t use a shortcode inside a theme file. While WordPress has a do_shortcode() function for that, in Hugo you use partial calls. And you have to know not only the exact file, but whether your theme even uses partials! Some don’t, and you’re left reconstructing the whole thing.

    Hinode has the shortcodes in partials and I love them for it! To call an image using the partial, it looks like this:

     {{- partial "assets/image.html" (dict
         "url" $imgsrc
         "ratio" "1x1"
         "wrapper" "mx-auto"
         "title" $title)
     -}}
    

    That call will generate webp versions of my image, saved to the static image folder (which is a post of its own), and include the source sets so it’s handy and responsive.

    What it isn’t is resized. Meaning if I used that code, I would end up with the actual huge ass image used. Now, imagine I have a gallery with 30 images. That’s 30 big ass images. Not good. Not good for speed, not good for anyone.

    I ended up making my own version of assets/image.html (called lightbox-image.html) and in there I have this code:

     {{ with resources.Get $imagefile }}
         {{ $image    = .Fill "250x250" }}
         {{ $imagesrc = $image.RelPermalink }}
     {{ end }}
    

    If the file is local, which is what that get call is doing, it uses the file ($imagefile is the ‘path’ to the file) to make a 250×250 sized version and then grabs that new permalink to use.

    {{ if $imagefile }}
         <img src="{{ $imagesrc }}" class="img-fluid img-lightbox" alt="{{ $title }}" data-toggle="lightbox" data-gallery="gallery" data-src="{{ $imagefile }}">
     {{ end }}
    

    Boom!

    This skips over all the responsive resizing, but then again I don’t need that when I’m making a gallery, do I?

    Remote Image Sizes

    Now let’s add in a wrinkle. What if it’s a remote image? What if I passed a URL of a remote image? For this, you need to know that, on build, that Hinode code will download the image locally. Local images load faster. I can’t use the same Get; I need GetRemote. But now I have a new issue!

    Where are the images saved? In the img folder. No subfolders, the one folder. And I have hundreds of images to add.

    Mathematically speaking, you can put about four billion files in a folder before it’s an issue for the computers. But if you’ve ever tried to find a specific file to check in a folder that large, you’ve seriously reconsidered your career trajectory. And practically speaking, the more files, the slower the processing.

    Anyone else remember when GoDaddy announced a maximum of 1024 files in a folder on their shared hosting? While I question the long-term efficacy of that, I do try to limit my files. I know that the Get/GetRemote calls will tack on a randomized name at the end, but I’d like them to be organized.

    Since I’m calling all my files from my assets server (assets.example.com), I can organize them there and replicate that in my build. And my method to do that is as follows:

         {{ if eq $image "" }}
             {{- $imageurl = . | absURL -}}
             {{- $imagesrc = . | absURL -}}
     
             {{ $dir := (urls.Parse $imageurl).Path }}
    
             {{ with resources.GetRemote $imageurl | resources.Copy $dir }}
                 {{ with .Err }}
                     {{ warnf "%s" . }}
                 {{ else }}
                     {{ $image    = . }}
                     {{ $imageurl = $image.Permalink }}
                     {{ $image    = $image.Fill "250x250" }}
                     {{ $imagesrc = $image.RelPermalink }}
                 {{ end }}
             {{ end }}
         {{ end }}
    

    I know that shit is weird. It pairs with the earlier code: if you didn’t create the image variable, then you know the image wasn’t local. So I start by getting the image URL from ‘this’ (that’s what the period is) as an absolute URL. Then I use the path of the URL to generate my local folder path! When I use the Copy command with the pipe, it automatically uses that as the destination.
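    Outside of Go templates, the path trick is easy to see. This is the same derivation in plain JavaScript (the URL is a made-up example on an assets host):

```javascript
// Sketch: the destination-path trick from the template above.
// The pathname of the remote URL becomes the local copy's path,
// keeping the same folder structure as the assets server.
function localPathFor(remoteUrl) {
  return new URL(remoteUrl).pathname;
}

console.log(localPathFor('https://assets.example.com/gallery/1996/cast.jpg'));
// "/gallery/1996/cast.jpg"
```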

    Conclusion

    You can totally resize images in Hugo. While I wish it were easier to do, most people aren’t as worried as I am about storing the images in their repository, so it’s less of an issue. Also, galleries on Hugo are fairly rare.

    I’m starting some work now with Hinode for better gallery-esque support, and we’ll see where that goes. Maybe I’ll get a patch in there!

  • Open for Employment – No Longer!

    Open for Employment – No Longer!

    As of Dec 2, 2024, I am employed with AwesomeMotive! I am no longer in need of a new gig. I am leaving this post up in case someone does want to pay me to work on LezWatch.TV and make bagels all day.


    As of 31 October 2024, my engagement with XWP will end. I am incredibly thankful for the time I spent with them and the trust they placed in me. Don’t get me wrong, it sucks, but the world just works like this sometimes.

    What am I looking for?

    Honestly as much as I’d love someone to pay me to just work on LezWatch.TV and make bagels all day, it’s pretty unlikely (though if you do…).

    What I’m looking for is a full time job where I get to make cool things, get a fair paycheck that allows me to save to buy a house, and provides enough vacation time that I’m not spending all of it on Jewish Holidays and can actually take a trip now and then.

    What do I want to do?

    This is likely a tech stack question. I can do WordPress, I’m very good at it, but I also know Hugo and can pick up other stacks pretty quickly. I’ve done full stack work before (server birth to death), worked in automation, and myriad other platforms like MediaWiki, ZenPhoto, and more. I’m game for learning any CMS.

    Would I really still work in WordPress?

    I would.

    Look, I know there’s a lot of volatility in the WordPress world, but honestly, with that in mind, you need someone like me! Why? Because I know WordPress plugins! If something happens and you can’t access .org, I’m your girl. I know how to scan plugins (and themes) for backdoors and bad code, as well as write the good stuff. I know risk assessment and management, which means I can help you when plugin ownership is in doubt.

    I also know a lot of backstory to a lot of development shops and how they treat people. What? You knew I took notes about plugin devs!

    There are millions of WordPress sites out there. They still need devs.

    Would I leave WordPress for anything else?

    Absolutely! Nothing against WordPress (or the current state of affairs), but my love for it is not absolute. It’s tempered in reality. WordPress is not the perfect solution for everyone, after all. I’m game to learn new things, to integrate, to test, and to break things.

    And if you’re transitioning a site to or away from WordPress? Hey! I’m uniquely positioned to be able to tell you exactly what that code was doing and, in most cases, why!

    Would I work for Automattic?

    No. That ship sailed about 15 years ago. I interviewed around this time of year back then, and in talking with Matt directly we both agreed I would be a bad fit. No harm, no foul. I think that was the right choice, all this time later, and I have no regrets.

    Would I go back into Hosting?

    Sure. I liked that work. It’s fun, challenging, and I learned a lot of new platforms and specifics. I got to play with servers, and it gave me a deeper understanding of how to approach asking a host for help. Bonus? I know devs, so I can help debug your code on servers!

    (If DreamHost calls me up right after this post, I would absolutely talk with them about opportunities without a second thought!)

    What about Agency Work?

    Depends on the agency.

    Some agencies are real meat grinders, and some are less so. The hardest part about agency life is how fast everyone and everything has to move. Also it’s incredibly volatile! If the company who hired you isn’t doing well, fffftttt you’re screwed.

    (Again, if XWP called me tomorrow, I would happily talk with them.)

    Would I work for a plugin shop?

    Yes, I would. I know plugins, I know the repo (sure, things have changed, but the basics aren’t going to), and I know the forums. Plugins are a lot of work, of course.

    How about a security company?

    That would be epic fun. Yes. Finding issues, reporting them reasonably and privately, getting them fixed, and helping everyone? I miss that from Plugin Reviews.

    What about just plain ol’ IT?

    I’ve done it before. I’m sure some of my info is out of date (anyone need a Windows NT Server certified dev?), but again, I’m willing and able to learn. Basic IT has some joy, you know, and users do some wild and crazy things you don’t expect.

    Didn’t anyone tell me not to sell ME in a resume?

    Many. But the thing is, you’re not hiring a machine, you’re hiring a person. If you want a grunt to grind? That ain’t me.

    If you want a well reasoned, insightful, and creative individual who thinks for herself and is willing to try things even if they fail, because those lessons help you going forward? Who fights for the users and is honest even when it hurts? Who will stand by her principles even if they cost her work? Who is passionate and puts her all into everything?

    That’s me.