Half-Elf on Tech

Thoughts From a Professional Lesbian

Author: Ipstenu (Mika Epstein)

  • Editor Sidebar Madness and Gutenberg

    Editor Sidebar Madness and Gutenberg

    Way back when WP was simple and adding a sidebar to the editor meant a simple metabox, I had a very straightforward setup: a box that, on page load, would tell you if the data in the post matched the remote API, and if not, what to update.

    My plan was to have it update on refresh, and then auto-correct if you press a button (because sometimes the API would be wrong, or grab the wrong account — loose searching on people’s names is always rough). But my plan was ripped asunder by this new editor thingy, Gutenberg.

    I quickly ported over my simple solution, added a note (“This does not refresh on page save, sorry.”), and moved on.

    Years later brings us to 2024, with November being my ‘funemployment’ month, when I worked on little things to keep myself sharp before starting at AwesomeMotive. Most of the work was fixing security issues, moving the plugin into the theme so there was less to manage, modernizing processes, upgrading libraries, and so on.

    But one of those things was also making a real Gutenbergized sidebar that auto-updates (mostly).

    What Are We Doing?

    On LezWatch.TV, we collect actor information that is public and use it to generate our pages. So if you wanted to add in an actor, you put in their name, a bio, an image, and then all this extra data like websites, social media, birthdates, and so on. WikiData actually uses us to help determine gender and sexuality, so we pride ourselves on being accurate and regularly updated.

    In return, we use WikiData to help ensure we’re showing the data for the right person! We do that via a simple search based on either their WikiData ID (QID), IMDb ID, or their name. The last one is pretty loose since actors can have the same name now (oh for the days when SAG didn’t allow that…). We use the QID to override the search in cases where it grabs the wrong person.

    I built a CLI command that, once a week, checks actors for data validity. It makes sure the IMDb IDs and socials are formatted properly, it makes sure the dates are valid, and it pings WikiData to make sure the birth/death etc data is also correct.

    With that already in place, all I needed was to call it.

    You Need an API

    The first thing you need to know is that Gutenberg uses the JSON REST API to pull in data. You can have it pull in everything via custom post meta, but as I already have a CLI tool run by cron to generate that information, making a custom API endpoint was actually going to be faster.
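    Registering a route like that is only a few lines. Here’s a hedged sketch (the real registration lives in the plugin and supports the other lookups too; lwtv_get_wikidata_by_post_id is a stand-in name for the lookup shown below):

    ```php
    <?php
    // Sketch: expose the WikiData check at /wp-json/lwtv/v1/wikidata/{id}.
    add_action( 'rest_api_init', function () {
    	register_rest_route( 'lwtv/v1', '/wikidata/(?P<id>\d+)', array(
    		'methods'             => 'GET',
    		'callback'            => function ( WP_REST_Request $request ) {
    			return lwtv_get_wikidata_by_post_id( (int) $request['id'] );
    		},
    		// Tighten this up if the data shouldn't be public.
    		'permission_callback' => '__return_true',
    	) );
    } );
    ```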

    I went ahead and made it work in a few different ways (you can call it by IMDb ID, post ID, QID, and the slug) because I planned for the future. But really all any of them are doing is a search like this:

    	/**
    	 * Get Wikidata by Post ID
    	 *
    	 * @param int $post_id
    	 * @return array
    	 */
    	private function get_wikidata_by_post_id( $post_id ): array {
    		if ( get_post_type( $post_id ) !== 'post_type_actors' ) {
    			return array(
    				'error' => 'Invalid post ID',
    			);
    		}
    
    		$wikidata = ( new Debug_Actors() )->check_actors_wikidata( $post_id );
    
    		return array( $wikidata );
    	}
    

    The return array is a list of the data we check for; each entry is either a string (‘match’/true), or an array with WikiData’s value and our value.
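    For a concrete picture, a hypothetical response for an actor whose birthdate disagrees with WikiData might look like this (all field names and values here are illustrative):

    ```json
    [
    	{
    		"actor": {
    			"id": 123,
    			"wikidata": "Q12345",
    			"name": "Jane Doe",
    			"death": "match",
    			"birth": {
    				"wikidata": "1970-01-01",
    				"ours": "1970-01-02"
    			}
    		}
    	}
    ]
    ```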

    Making a Sidebar

    Since we have our API already, we can jump to making a sidebar. Traditionally in Gutenberg, we make a sidebar panel for the block we’re adding in. If you want a custom panel, you can add in one with an icon on the Publish Bar:

    A screenshot of the Gutenberg Publish bar, with the Jetpack and YoastSEO icons

    While that’s great and all, I wanted this to be on the side by default for the actor, like Categories and Tags. Since YoastSEO (among others) can do this, I knew it had to be possible:

    Screenshot of the Gutenberg Sidebar, with YoastSEO's custom example.

    But when I started to search around, all anyone told me was how I had to use a block to make that show.

    I knew it was bullshit.

    Making a Sidebar – The Basics

    The secret sauce I was looking for is decidedly simple.

    	const MetadataPanel = () => (
    		<PluginDocumentSettingPanel
    			name="lwtv-wikidata-panel"
    			title="WikiData Checker"
    			className="lwtv-wikidata-panel"
    		>
    			<PanelRow>
    				<div>
    					[PANEL STUFF HERE]
    				</div>
    			</PanelRow>
    		</PluginDocumentSettingPanel>
    	);
    

    I knew about PanelRow but finding PluginDocumentSettingPanel took me far longer than it should have! The documentation doesn’t actually tell you ‘You can use this to make a panel on the Document settings!’ but it is obvious once you’ve done it.
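    One thing the snippet above doesn’t show: the panel only appears once you register the component as a plugin. A minimal sketch (the slug is illustrative; note that PluginDocumentSettingPanel ships in @wordpress/editor on current WordPress, and lived in @wordpress/edit-post on older versions):

    ```js
    import { registerPlugin } from '@wordpress/plugins';
    import { PluginDocumentSettingPanel } from '@wordpress/editor';

    // Register the panel so Gutenberg renders it in the Document settings.
    registerPlugin( 'lwtv-wikidata', {
    	render: MetadataPanel,
    } );
    ```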

    Making it Refresh

    This is a pared down version of the code, which I will link to at the end.

    The short and simple version: I’m using useEffect to refresh:

    useEffect(() => {
    		if (
    			postId &&
    			postType === 'post_type_actors' &&
    			postStatus !== 'auto-draft'
    		) {
    			const fetchData = async () => {
    				setIsLoading(true);
    				try {
    					const response = await fetch(
    						`${siteURL}/wp-json/lwtv/v1/wikidata/${postId}`
    					);
    					if (!response.ok) {
    						throw new Error(
    							`HTTP error! status: ${response.status}`
    						);
    					}
    					const data = await response.json();
    					setApiData(data);
    					setError(null);
    				} catch (err) {
    					setError(err.message);
    					setApiData(null);
    				} finally {
    					setIsLoading(false);
    				}
    			};
    			fetchData();
    		}
    	}, [postId, postType, postStatus, siteURL, refreshCounter]);
    

    The reason I’m checking post type and status is that I don’t want to try to run this if the post isn’t an actor, or isn’t at least a real draft.
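    For completeness: postId, postType, and postStatus aren’t magic globals. In a Gutenberg sidebar they would typically come out of the editor’s data store, roughly like this (a sketch; how you obtain siteURL varies by setup):

    ```js
    import { useSelect } from '@wordpress/data';

    const { postId, postType, postStatus } = useSelect(
    	( select ) => ( {
    		postId: select( 'core/editor' ).getCurrentPostId(),
    		postType: select( 'core/editor' ).getCurrentPostType(),
    		postStatus: select( 'core/editor' ).getEditedPostAttribute( 'status' ),
    	} ),
    	[]
    );
    ```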

    The state variables (via useState) are as follows:

    	const [apiData, setApiData] = useState(null);
    	const [isLoading, setIsLoading] = useState(true);
    	const [error, setError] = useState(null);
    

    Right below this I have a second check:

    	if (postType !== 'post_type_actors') {
    		return null;
    	}
    

    That simply prevents the rest of the code from trying to run. It has to come after the useEffect because of React’s Rules of Hooks: every hook must run on every render, so you can’t return early before one. If you have a return before it, it fails to pass a lint (and I enforce linting on this project).

    How it works is on page load of an auto-draft, it tells you to save the post before it will check. As soon as you do save the post (with a title), it refreshes and tells you what it found, speeding up initial data entry!

    But then there’s the issue of refreshing on demand.

    HeartBeat Flatline – Use a Button

    I did, at one point, have a functioning heartbeat checker. That can get pretty expensive, and it calls the API too many times if you leave a window open. Instead, I made a button backed by a counter in state:

    const [refreshCounter, setRefreshCounter] = useState(0);
    

    and a handler:

    	const handleRefresh = () => {
    		setRefreshCounter((prevCounter) => prevCounter + 1);
    	};
    

    Then the button itself:

    <Button
    	variant="secondary"
    	onClick={handleRefresh}
    	isBusy={isLoading}
    >
    	{isLoading ? 'Refreshing...' : 'Refresh'}
    </Button>
    

    Works like a champ.

    Output the Data

    The data output is the interesting bit, because I’m still not fully satisfied with how it looks.

    I set up a filter to process the raw data:

    	const filteredPersonData = (personData) => {
    		const filteredEntries = Object.entries(personData).filter(
    			([key, value]) => {
    				const lowerCaseValue = String(value).toLowerCase();
    				return (
    					lowerCaseValue !== 'match' &&
    					lowerCaseValue !== 'n/a' &&
    					!['wikidata', 'id', 'name'].includes(key.toLowerCase())
    				);
    			}
    		);
    		return Object.fromEntries(filteredEntries);
    	};
    

    The API returns the WikiData ID, the post ID, and the name, none of which need to be checked here, so I remove them (along with anything that already matches). The capitalizing of things so they look grown up happens at output.
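    To see what actually survives the filter, here it is applied to a made-up record (the field names are hypothetical, modeled on the description above):

    ```javascript
    // Same filter as above: drop matches, n/a values, and the ID fields.
    const filteredPersonData = (personData) => {
    	const filteredEntries = Object.entries(personData).filter(
    		([key, value]) => {
    			const lowerCaseValue = String(value).toLowerCase();
    			return (
    				lowerCaseValue !== 'match' &&
    				lowerCaseValue !== 'n/a' &&
    				!['wikidata', 'id', 'name'].includes(key.toLowerCase())
    			);
    		}
    	);
    	return Object.fromEntries(filteredEntries);
    };

    // A hypothetical record: one mismatch, one match, one empty social.
    const sample = {
    	id: 123,
    	wikidata: 'Q12345',
    	name: 'Jane Doe',
    	death: 'match',
    	instagram: 'n/a',
    	birth: { wikidata: '1970-01-01', ours: '1970-01-02' },
    };

    console.log(filteredPersonData(sample));
    // → { birth: { wikidata: '1970-01-01', ours: '1970-01-02' } }
    ```

    Only the mismatched birthdate survives; everything else is either an ID field or already fine.
    
    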

    Then there’s a massive amount of code in the panel itself:

    <div>
    	{isLoading && <Spinner />}
    	{error && <p>Error: {error}</p>}
    	{!isLoading && !error && apiData && (
    		<>
    			{apiData.map((item) => {
    				const [key, personData] = Object.entries(item)[0];
    				const filteredData = filteredPersonData(personData);
    				return (
    					<div key={key}>
    						<h3>{personData.name}</h3>
    						{Object.keys(filteredData).length === 0 ? (
    							<p>[All data matches]</p>
    						) : (
    							<div>
    								{Object.entries(filteredData).map(([subKey, value]) => (
    									<div key={subKey}>
    										<h4>{subKey}</h4>
    										{value && (
    											<ul>
    												{Object.entries(value).map(([innerKey, innerValue]) => (
    													<li key={innerKey}>
    														<strong>{innerKey}</strong>:{' '}
    														<code>{innerValue || 'empty'}</code>
    													</li>
    												))}
    											</ul>
    										)}
    										{!value && 'empty'}
    									</div>
    								))}
    							</div>
    						)}
    					</div>
    				);
    			})}
    		</>
    	)}

    	{!isLoading && !error && !apiData && (
    		<p>No data found for this post.</p>
    	)}
    	<Button />
    </div>
    

    <Spinner /> is from '@wordpress/components' and is a default component.

    Now, innerKey is actually not a simple output. I wanted to capitalize the first letter and, unlike PHP, there’s no ucfirst() function, so it looks like this:

    {innerKey.charAt(0).toUpperCase() + innerKey.slice(1)}
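    If that expression starts showing up in more than one place, a tiny helper (hypothetical; not in the original code) keeps it readable:

    ```javascript
    // ucfirst, JavaScript edition: uppercase only the first character.
    const ucfirst = (str) => str.charAt(0).toUpperCase() + str.slice(1);

    console.log(ucfirst('birth')); // → Birth
    ```
    
    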

    Sometimes JavaScript makes me want to drink.

    The Whole Code

    You can find the whole block, with some extra quality-of-life bits I didn’t mention, on our GitHub repo for LezWatch.TV. We use the @wordpress/scripts tooling to generate the blocks.

    The source code is located in folders within /src/ – that’s where most (if not all) of your work will happen. Each new block gets a folder and in each folder there must be a block.json file that stores all the metadata. Read Metadata in block.json if this is your first rodeo.

    The blocks will automagically build anytime anyone runs npm run build from the main folder. You can also run npm run build from the blocks folder.

    All JS and CSS from blocks defined in blocks/*/block.json get pushed to the blocks/build/ folder via the build process. PHP scans this directory and registers blocks in php/class-blocks.php. The overall code is called from the /blocks/src/blocks.php file.
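    The scan-and-register step can be as small as a loop like this (a sketch; the real code is in php/class-blocks.php and the path here is illustrative):

    ```php
    <?php
    // register_block_type() accepts a directory containing block.json
    // (WordPress 5.8+), so registering the build output is one loop.
    add_action( 'init', function () {
    	foreach ( glob( __DIR__ . '/build/*/block.json' ) as $block_json ) {
    		register_block_type( dirname( $block_json ) );
    	}
    } );
    ```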

    The build subfolders are NOT stored in Git, because they don’t need to be. We run the build via actions on deploy.
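    A .gitignore entry covers that (assuming the build output lands in blocks/build/ as described):

    ```gitignore
    # Generated block assets: rebuilt by the deploy action
    blocks/build/
    ```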

    What It Looks Like

    Gif showing how it auto-loads the data on save.

    One of the things I want to do is have a way to say “use WikiData” or “use ours” to fill in each individual data point. Sadly sometimes it gets confused and uses the wrong person (there’s a Katherine with an E Hepburn!) so we do have a QID override, but even so there can be incorrect data.

    WikiData often lists socials and websites that are defunct. Mostly that’s X these days.

    Takeaways

    It’s a little frustrating that I either have to do a complex ‘normal’ custom meta box with a lot of extra JS, or make an API. Since I already had the API, it’s no big, but sometimes I wish Gutenberg was a little more obvious with refreshing.

    Also finding the right component to use for the sidebar panel was absolutely maddening. Every single document was about doing it with a block, and we weren’t adding blocks.

    Finally, errors in JavaScript remain the worst. Because I’m compiling code for Gutenberg, I have to hunt down the likely culprit, which is hard when you’re still newish to the code! Thankfully, JJ from XWP was an angel and taught me tons in my 2 years there. I adore her.

  • Small Hugo, Big Images

    Small Hugo, Big Images

    In working on my Hugo powered gallery, I ran into some interesting issues, one of which was from my theme.

    I use a Bootstrap-powered theme called Hinode. And Hinode is incredibly powerful, but it’s also very complicated and confusing, as its documentation is still in the alpha stage. It’s like the early days of other web apps, which means a lot of what I’m trying to do is trial and error. Don’t ask me when I learned about errorf, okay?

    My primary issues are all about images, sizing and filing them.

    Image Sizes

    When you make a gallery, logically you want to save the large image as the zoom in, right? Click to embiggen. The problem is, in Hinode, you can load an image in a few ways:

    1. Use the standard old img tag
    2. Call the default Hugo shortcode of {{< figure >}}
    3. Call a Hinode shortcode of {{< image >}}
    4. Use a partial

    Now, that last one is a little weird, but basically you can’t use a shortcode inside a theme file. While WordPress has a do_shortcode() method for that, in Hugo you use partial calls. And you have to know not only the exact file, but whether your theme even uses partials! Some don’t, and you’re left reconstructing the whole thing.

    Hinode has the shortcodes in partials and I love them for it! To call an image using the partial, it looks like this:

    	{{- partial "assets/image.html" (dict
    		"url" $imgsrc
    		"ratio" "1x1"
    		"wrapper" "mx-auto"
    		"title" $title)
    	-}}
    

    That call will generate WebP versions of my image, saved to the static image folder (which is a post of its own), and include the srcset so it’s handy and responsive.

    What it isn’t is resized. Meaning if I used that code, I would end up with the actual huge ass image used. Now, imagine I have a gallery with 30 images. That’s 30 big ass images. Not good. Not good for speed, not good for anyone.

    I ended up making my own version of assets/image.html (called lightbox-image.html) and in there I have this code:

     {{ with resources.Get $imagefile }}
         {{ $image    = .Fill "250x250" }}
         {{ $imagesrc = $image.RelPermalink }}
     {{ end }}
    

    If the file is local, which is what that get call is doing, it uses the file ($imagefile is the ‘path’ to the file) to make a 250×250 sized version and then grabs that new permalink to use.

    {{ if $imagefile }}
    	<img src="{{ $imagesrc }}" class="img-fluid img-lightbox" alt="{{ $title }}" data-toggle="lightbox" data-gallery="gallery" data-src="{{ $imagefile }}">
    {{ end }}
    

    Boom!

    This skips over all the responsive resizing, but then again I don’t need that when I’m making a gallery, do I?

    Remote Image Sizes

    Now let’s add in a wrinkle. What if it’s a remote image? What if I passed a URL of a remote image? For this, you need to know that on build, that Hinode code will download the image locally, since local images load faster. I can’t use the same resources.Get; I need resources.GetRemote. But now I have a new issue!

    Where are the images saved? In the img folder. No subfolders, the one folder. And I have hundreds of images to add.

    Mathematically speaking, you can put about four billion files in a folder before it’s an issue for the computers. But if you’ve ever tried to find a specific file to check in a folder that large, you’ve seriously reconsidered your career trajectory. And practically speaking, the more files, the slower the processing.

    Anyone else remember when GoDaddy announced a maximum of 1024 files in a folder on their shared hosting? While I question the long term efficacy of that, I do try to limit my files. I know that the get/remote get calls will tack on a randomized name at the end, but I’d like them to be organized.

    Since I’m calling all my files from my assets server (assets.example.com), I can organize them there and replicate that in my build. And my method to do that is as follows:

         {{ if eq $image "" }}
             {{- $imageurl = . | absURL -}}
             {{- $imagesrc = . | absURL -}}
     
             {{ $dir := (urls.Parse $imageurl).Path }}
    
             {{ with resources.GetRemote $imageurl | resources.Copy $dir }}
                 {{ with .Err }}
                     {{ warnf "%s" . }}
                 {{ else }}
                     {{ $image    = . }}
                     {{ $imageurl = $image.Permalink }}
                     {{ $image    = $image.Fill "250x250" }}
                     {{ $imagesrc = $image.RelPermalink }}
                 {{ end }}
             {{ end }}
         {{ end }}
    

    I know that shit is weird. It pairs with the earlier code: if the $image variable was never created, you know the image wasn’t local. So I start by getting the image URL from ‘this’ (that’s what the period is) as an absolute URL. Then I use the path portion of that URL to generate my local folder path, so (say) https://assets.example.com/shows/poster.jpg lands at /shows/poster.jpg in the build. When I pipe the GetRemote result into the Copy command, it automatically uses that path as the destination.

    Conclusion

    You can totally make images that are resized in Hugo. While I wish that was easier to do, most people aren’t as worried as I am about storing the images on their repository, so it’s less of an issue. Also galleries on Hugo are fairly rare.

    I’m starting some work now with Hinode for better gallery-esque support, and we’ll see where that goes. Maybe I’ll get a patch in there!

  • Open for Employment – No Longer!

    Open for Employment – No Longer!

    As of Dec 2, 2024, I am employed with AwesomeMotive! I am no longer in need of a new gig. I am leaving this post up in case someone does want to pay me to work on LezWatch.TV and make bagels all day.


    As of 31 October 2024, my engagement with XWP will end. I am incredibly thankful for the time I spent with them and the trust they placed in me. Don’t get me wrong, it sucks, but the world just works like this sometimes.

    What am I looking for?

    Honestly as much as I’d love someone to pay me to just work on LezWatch.TV and make bagels all day, it’s pretty unlikely (though if you do…).

    What I’m looking for is a full time job where I get to make cool things, get a fair paycheck that allows me to save to buy a house, and provides enough vacation time that I’m not spending all of it on Jewish Holidays and can actually take a trip now and then.

    What do I want to do?

    This is likely a tech stack question. I can do WordPress, I’m very good at it, but I also know Hugo and can pick up other stacks pretty quickly. I’ve done full stack work before (server birth to death), worked in automation, and myriad other platforms like MediaWiki, ZenPhoto, and more. I’m game for learning any CMS.

    Would I really still work in WordPress?

    I would.

    Look, I know there’s a lot of volatility in the WordPress world but honestly with that in mind, you need someone like me! Why? Because I know WordPress plugins! If something happens and you can’t access .org, I’m your girl. I know how to scan plugins (and themes) for backdoors and bad code, as well as write the good stuff. I know risk assessment and management, which means I can help you when plugin ownership is in doubt.

    I also know a lot of backstory to a lot of development shops and how they treat people. What? You knew I took notes about plugin devs!

    There are millions of WordPress sites out there. They still need devs.

    Would I leave WordPress for anything else?

    Absolutely! Nothing against WordPress (or the current state of affairs), but my love for it is not absolute. It’s tempered in reality. WordPress is not the perfect solution for everyone, after all. I’m game to learn new things, to integrate, to test, and to break things.

    And if you’re transitioning a site to or away from WordPress? Hey! I’m uniquely positioned to be able to tell you exactly what that code was doing and, in most cases, why!

    Would I work for Automattic?

    No. That ship sailed about 15 years ago. I interviewed pretty much around now back then, and in talking with Matt directly we both agreed I would be a bad fit. No harm, no foul. I think that was the right choice, all this time later, and I have no regrets.

    Would I go back into Hosting?

    Sure. I liked that work. It’s fun, challenging, and I learned a lot of new platforms and specifics. I got to play with servers and it gave me a deeper understanding in how to approach asking a host for help. Bonus? I know devs, so I can help debug your code on servers!

    (If DreamHost calls me up right after this post, I would absolutely talk with them about opportunities without a second thought!)

    What about Agency Work?

    Depends on the agency.

    Some agencies are real meat grinders, and some are less so. The hardest part about agency life is how fast everyone and everything has to move. Also it’s incredibly volatile! If the company who hired you isn’t doing well, fffftttt you’re screwed.

    (Again, if XWP called me tomorrow, I would happily talk with them.)

    Would I work for a plugin shop?

    Yes, I would. I know plugins, I know the repo (sure things have changed but the basics aren’t going to), and I know the forums. Plugins are a lot of work of course.

    How about a security company?

    That would be epic fun. Yes. Finding issues, reporting them reasonably and privately, getting them fixed, and helping everyone? I miss that from Plugin Reviews.

    What about just plain ol’ IT?

    I’ve done it before. I’m sure some of my info is out of date (anyone need a Windows NT Server certified dev?), but again, I’m willing and able to learn. Basic IT has some joy you know, and users do some wild and crazy things you don’t expect.

    Didn’t anyone tell me not to sell ME in a resume?

    Many. But the thing is, you’re not hiring a machine, you’re hiring a person. If you want a grunt to grind? That ain’t me.

    If you want a well reasoned, insightful, and creative individual who thinks for herself and is willing to try things even if they fail, because those lessons help you going forward? Who fights for the users and is honest even when it hurts? Who will stand by her principles even if they cost her work? Who is passionate and puts her all into everything?

    That’s me.

  • Hugo and a Lot of Images

    Hugo and a Lot of Images

    One issue with Hugo is that the way I’m deploying it is via GitHub Actions, which means every time I want to update, the site has to be totally rebuilt. The primary flaw with that process is that when Hugo builds a lot of images, it takes a lot of time. About 8 minutes.

    The reason Hugo takes this long is that every time it runs a build, it regenerates all the images and resizes them. This is not a bad thing: Hugo smartly caches everything in the /resources/_gen/ folder, which is not synced to GitHub, so when you run builds locally it doesn’t take half as long.

    Now, this speed is about the same whether the images are local (as in, stored in the repository) or remote (which is where mine are located – assets.example.com), because either way it has to build the resized images. This only runs on a build, since it’s only needed for a build. Once the content is on the server, it’s unnecessary.

    The obvious solution to my speed issues would be to include the folder in GitHub, only I don’t want to store any images on GitHub if I can help it (legal reasons: if there’s a DMCA, it’s easier to nuke them from my own storage). The less obvious solution is how we got here.

    The Basic Solution

    Here’s your overview:

    1. Checkout the repo
    2. Install Hugo
    3. Run the repo installer (all the dependencies etc)
    4. Copy the files from ‘wherever’ to the Git resource
    5. Run the build (which will use what’s in the resource folder to speed it up)
    6. Copy the resources folder content back down to the server

    This would allow me to have a ‘source of truth’ and update it as I push code.

    The Setup

    To start with, I had to decide where to upload the content. The folder is (right now) about 500 megs, and that’s only going to get bigger. Thankfully I have a big VPS and I was previously hosting around 30 gigs there, so I’m pretty sure this will be okay.

    But the ‘where’ specifics needed a little more than that. I went with a subdomain like secretplace.example.com and in there is a folder called /resources/_gen/

    Next, what did I want to upload to start with? I went with only the static CSS files, because my plan involves pushing everything back down after I re-run the build.

    Then comes the downloading. Did you know that there’s nearly no documentation about how to rsync from a remote source to your GitHub Actions runner? It doesn’t help that the words are all pretty generic, and search engines think “Oh you want to know about rsync and a GitHub Action? You must want to sync from your action to your server!” No, thank you, I wanted the opposite.

    While there’s a nifty wrapper for syncing over SSH in a GitHub Action, it only works one way. In order to go the other way, you have to understand the actual issue that action is solving. It isn’t solving rsync at all; that’s baked into the runner image (assuming you’re using ubuntu…). No, what the action solves is the mishegas of adding in your SSH details (the key, the known hosts, etc).

    I could use that action to copy back down to the server, but if you’re going to have to solve the issue once, you may as well use it all the time. Once that’s solved, the easy part begins.

    Your Actions

    Once we’ve understood where we’re going, we can start to get there.

    I’ve set this up in my ci.yml, which runs on everything except production, and it’s a requirement for a PR to pass it before it can be merged into production. I could skip it (as admin) but I try very hard not to, so I can always confirm my code will actually push and not error when I run it.

    name: 'Preflight Checks'
    
    on:
      push:
    branches-ignore:
      - production   # excludes production.
    
    concurrency:
         group: ${{ github.ref }}-ci
         cancel-in-progress: true
    
    jobs:
      preflight-checks:
        runs-on: ubuntu-latest
    
        steps:
          - name: Do a git checkout including submodules
            uses: actions/checkout@v4
            with:
              submodules: true
    
          - name: Install SSH Key
            uses: shimataro/ssh-key-action@v2
            with:
              key: ${{ secrets.SERVER_SSH_KEY }}
              known_hosts: unnecessary
    
          - name: Adding Known Hosts
            run: ssh-keyscan -H ${{ secrets.REMOTE_HOST }} >> ~/.ssh/known_hosts
    
          - name: Setup Hugo
            uses: peaceiris/actions-hugo@v3
            with:
              hugo-version: 'latest'
              extended: true
    
          - name: Setup Node and Install
            uses: actions/setup-node@v4
            with:
              node-version-file: '.nvmrc'
              cache: 'npm'
    
          - name: Install Dependencies
            run: npm install && npm run mod:update
    
          - name: Lint
            run: npm run lint
    
          - name: Make Resources Folder locally
            run: mkdir resources
    
          - name: Download resources from server
            run: rsync -rlgoDzvc -i ${{ secrets.REMOTE_USER }}@${{ secrets.REMOTE_HOST }}:/home/${{ secrets.REMOTE_USER }}/${{ secrets.HUGO_RESOURCES_URL }}/ resources/
    
          - name: Test site
            run: npm run tests
    
          - name: Copy back down all the regenerated resources
            run: rsync -rlgoDzvc -i --delete resources/ ${{ secrets.REMOTE_USER }}@${{ secrets.REMOTE_HOST }}:/home/${{ secrets.REMOTE_USER }}/${{ secrets.HUGO_RESOURCES_URL }}/
    

    Obviously this is geared towards Hugo. My command npm run tests is a home-grown command that runs a build and then some tests on said build. It’s separate from the linting, which comes with my theme. Because it’s running a build, this is where I can make use of my pre-built resources.

    You may notice I set known_hosts to ‘unnecessary’ — this is a lie. They’re totally needed but I had a devil of a time making it work at all, so I followed the advice from Zell, who had a similar headache, and put in the ssh-keyscan command.

    When I run my deploy action, it only runs the build (no tests), but it also copies down the resources folder to speed it up. I only copy it back up on testing for the teeny speed boost.

    Results

    Before all this, my builds took 8 to 9 minutes.

    After, they took 1 to 2 minutes, which is way better. Originally I only had it down to 4 minutes, but I was using wget to test things (and that’s generally not a great idea — it’s slow). Once I switched to rsync, it’s incredibly fast. The Hugo build is still the slowest part, but it’s around 90 seconds.

  • Why NOT WordPress?

    Why NOT WordPress?

    There’s a website I’ve been running since 1996.

    Yes, I know, I’m an Internet Old.

    1996. That’s 7 years before WordPress was a thing. So it’s not surprising this site was (at one point) moved from ‘something else’ to WordPress. Actually a lot of something-elses over the nearly 30 years of its existence. I moved it over to WP around 2005 (WordPress 1.5) and pretty much left it there for years.

    Now it’s different. Now the site is 100% powered by Hugo.

    Why Did I Stop Using WordPress?

    To understand this decision, you have to keep in mind that the site had been in three parts for about 20 years.

    1. The Blog, where announcements were made, etc, powered by WordPress
    2. The Image Gallery, which had … images (about 20 Gigs), powered by netPhotoGraphics
    3. The Wiki/Library, which is the documentation, powered by Hugo

    Well, this year I got a nasty-gram and was forced to shut down the gallery. The simple truth was yes, the gallery included images that legally I didn’t have the right to use. No excuses. But the company involved was kind enough to work out a partial situation. I’m still in the middle of moving what images I can keep into a new home, but while that’s going on, I had a chance to sit down and face reality.

    The gallery, you see, was the biggest feature of the site. Next was the Wiki/Library, and the blog was pretty much just announcements. There was a forum; it was removed ages ago. There was BuddyPress; ditto. People management just isn’t fun.

    There was also the matter of cross linked data. Oh my, did a lot of images appear on the blog and the gallery. I was going to have to purge the old blog posts en masse anyway, so at that point, I asked myself that big question.

    Do I want to move the library to WP, or the blog to Hugo?

    Consider the following:

    1. I was going to have to manually curate nearly 30 years of blog posts (took a few thousand down to about 50)
    2. I already had a running Hugo site and was familiar with it (it has over 1600 files)
    3. If I ported to WP, I would have to rebuild my data setup for how the data is output
    4. Importing blog posts as text only is incredibly easy

    With that in mind, it seemed obvious. Hugo.

    What’s Different with Hugo?

    Obviously I lose the ability to write a blog post and press publish. I have to add a new file, manually link it to my new image, and push to GitHub, where it’s auto-deployed to the site in question. The process for any data is basically this:

    1. Create a branch on my GitHub Repo
    2. Add the new content
    3. Merge the branch into Production

    At that point a GitHub action takes over.
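
    That GitHub Action is essentially just a Hugo build-and-deploy step. As a rough sketch (the workflow name, branch name, and deploy target here are hypothetical, not the site’s actual config), it might look something like:

    ```yaml
    # Hypothetical workflow: build the Hugo site and publish the static output.
    name: Deploy Hugo Site

    on:
      push:
        branches: [production]   # fires when a branch is merged into production

    jobs:
      build-deploy:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
            with:
              submodules: recursive   # pull in any themes/modules tracked as submodules

          - name: Setup Hugo
            uses: peaceiris/actions-hugo@v3
            with:
              hugo-version: 'latest'
              extended: true          # extended build is needed for SCSS processing

          - name: Build
            run: hugo --minify

          - name: Deploy
            uses: peaceiris/actions-gh-pages@v4
            with:
              github_token: ${{ secrets.GITHUB_TOKEN }}
              publish_dir: ./public   # Hugo writes the static site here
    ```

    The actual deploy target could just as easily be rsync to a server or a Netlify/Cloudflare Pages hook; the shape is the same either way: merge, build, ship static files.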

    Beyond that, however, there are some things you take for granted with WP. Like the ease of a mailing list with Jetpack. Now, I did export my Jetpack subscribers and I’m working on a solution there, but yeah, that was a big hit. There’s also the matter of auto-deploying content to socials. But… honestly that’s been pretty much shit-and-miss lately, what with Facebook and Twitter being what they are.

    But all the ‘easy’ stuff? Well, Hugo has RSS feeds, it can process images as it builds (though that will cause your deployments to take longer), it’s open source, and best of all? The output is static HTML.
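
    That image processing happens right in the templates at build time. A minimal sketch, assuming an image stored under `assets/` (the file name here is made up):

    ```go-html-template
    {{/* Fetch an image from assets/ and generate a 600px-wide WebP at build time */}}
    {{ with resources.Get "images/header.jpg" }}
      {{ $small := .Resize "600x webp" }}
      <img src="{{ $small.RelPermalink }}" width="{{ $small.Width }}" height="{{ $small.Height }}" alt="Header image">
    {{ end }}
    ```

    Every variant Hugo generates this way is another file written during the build, which is exactly why heavy image processing stretches out deploy times.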

    Go ahead, try and hack that.

    How Hard Was It?

    Honestly, it took me about 3 days to pick a new theme, apply it, move my basic content over, and start rebuilding the blog. Migrating blog posts took me about 3 weeks. The hardest part was realizing I was going to have to write some complex Hugo Mod code to include my gallery with lightbox code, but I banged that out in an evening.

    There were frustrating moments. The Hugo community is significantly smaller than WordPress’s (I mean, whose isn’t?), and some of the code is a little on the ‘understood’ level (by which I mean things aren’t always spelled out; they assume you know what they’re talking about). In a way, it’s like using WordPress back in 2006 all over again, and look at where that’s taken me!

    I’m very happy with the result. I picked a ‘fancy’ theme, called Hinode, and it came with Dark Mode built in. I ported over my custom code for recaps (I have a whole star rating system) and started building out topical small galleries where I could.

    If I were a newbie to the web world? This would have been impossible. Then again, a lot of the work I’m doing in WP would be impossible for a newbie. About the only tool I’ve used where I think it’d be easier would be … maybe MediaWiki? But only because you can build templates from the editor backend.

    Even with Full Site Editing, WordPress would have been a bear and a half.

    Historical Notes

    The ‘Library’ was once on MediaWiki because I had this idea to be a public repository anyone could edit. Only I kept getting attacked by spammers, so I turned off registration. Then I had to apply all sorts of plugins, but MediaWiki didn’t let you self-update like WordPress does, so I had to write scripts, and it was just a pain.

    I rebuilt it all as Hugo about 6 years ago, and I really enjoyed it. GoLang is not something I’m familiar with, and sometimes the language drives me to drink, but so does PHP.

    The Gallery used to be a home-grown SHTML setup, which then moved to a now-defunct project, Gallery, and then to ZenPhoto, and finally to netPhotoGraphics after ZenPhoto decided to be more than just a photo library. netPhotoGraphics is hella fun to use, and I even built an embed tool for it, so you could paste a link into WP.

    I did that with Hugo as well, and I’ll probably port that back to the new site sooner or later.

    It Is Sad Though

    Basically this site has been a part of my dev growth from day one. I wouldn’t be working in WordPress were it not for this site, and I owe it a lot. Moving to Hugo is the end of an era, and it is a bit sad. But at the same time, I feel like I’m now in even more control over everything, and I’m making a leaner, faster, website every day.

    I have no regrets for the steps I’ve taken on the way, and none about this move. It’s nice to not have to worry about updates all the time. After all, what’s on the site is just HTML.

    I do miss being able to schedule posts though…

  • Plugins: If I Did It…

    Plugins: If I Did It…

    Joe Co. (not their real name) had a bad day. It started when we emailed them this:

    Your plugins have been closed and your account on WordPress.org is suspended indefinitely for egregious guideline violations.

    Normally we send out warning notices to encourage developers to correct behaviour, however in certain cases there are events that cause us to have to take extreme action.

    We received a notice that you had been demanding users edit their reviews and comments in order to receive a refund from your product.

    Your employee fake@example.com said this:

    > Please kindly delete all the comment on Wordpress.org,
    >
    > After that, we will accept your refund request via PayPal

    And then they would provide links to screenshots of what to remove.

    There’s an ugly word for asking people to modify reviews for a reward: bribery.

    However asking users to modify a review and delete comments to get a refund is called EXTORTION […]

    If you’ve been reading for a while, you know how much I hate bribery and extortion.

    Time for Proof

    Joe Co. emailed back:

    It’s not a nice day today for our company to receive your email on our
    account suspension, but we understand your concern to protect Org’s plugins
    directory. We respect that!

    However, in case like this, you did receive the feedback from one side and
    I believe that you should take times to reviceved the feedback from our
    company also. And we want you to know that maybe you should protect us and
    our company from UNFAIR COMPETIOR.

    Yes we internally thinks that this is among our Competitor doing
    something to get our Copies of Products and Try to Down us from the
    Directory, here is the reason and proof:

    Now, I can’t list the rest of that for reasons regarding anonymity, but I can summarize the proof:

    1. A PDF of their internal chat client with the person who they felt probably complained (it wasn’t that person)
    2. A PDF where the same person refused to let Joe Co. log into their site (reasonable!)
    3. An admission they handled it badly

    When I opened the first PDF I started laughing.

    Wanna know what it says?

    Important part: But you sent too much spam on our ticket support, our forum and Wordpress.org so we decided to give you a refund under the condition that you delete all your bad comments to avoid your spam review affect to our company's reputation. If you cannot delete all then it's meaningless.

    The kicker? That arrow and red box is from the PDF Joe Co. sent.

    Right there, they are agreeing that they told the user they would only give him a refund if he deleted the reviews. They were not spam, either; the person left one, single, solitary review.

    Who was wrong?

    Both Joe Co. and their user were in the wrong here.

    The user knew damn well they weren’t getting a refund, and in fact I told him he was being silly for complaining that there was no refund when the terms he agreed to said no refunds.

    Joe Co. should have stuck to their guns and not suggested that they would give a refund at all.

    Of course, what Joe Co. did after that made it so much worse… Sockpuppets. Sockpuppets everywhere. 100% of their reviews on one plugin were faked. My buddy who cleans that up sobbed into his coffee and I think that was when he wrote a ‘close all’ script.

    Then they made two more accounts and resubmitted plugins and did it all again.

    So while I will grant you that the user was an idiot, Joe Co. was worse.

    Do You Refund?

    Regardless of whether you provide refunds, the lesson here is “stick to your guns,” folks. If the policy is “No Refunds,” then suck it up, buttercup. If the policy is to provide a refund within X days, then you do that. If the company has no refund policy, then don’t buy from them, because you will get jerked around.