Half-Elf on Tech

Thoughts From a Professional Lesbian

Author: Ipstenu (Mika Epstein)

  • Open Dungeons


    I play a D&D game. We recently transitioned into playing in person (yay vaccines!) and after much discussion, we have acquired a 3D printer.

    Blame my friend Jan.

    Anyway. One of the things I wanted to build, besides maybe minis, was the modular dungeon! I knew about the brilliant Devon Jones and his OpenForge dungeons, and recently he’s pushed out a version 2. This was exactly what I wanted and needed. The problem … all the tutorials were for version 1. So I had to sit down and figure out what I really needed.

    What’s OpenForge?

    OpenForge is an open source (heh) dungeon set for D&D which lets you build…. oh this:

    Example build by the creator – Devon Jones

    Originally we used images on a screen, or (badly) drawn lines on a mat. And all my D&D life, we used graph paper and ‘theatre of the mind.’ We’ve also been playing online for almost a year, and now that we’re in person, I find myself wanting minis and a set for people to SEE the evil that is in my mind.

    What Connections

    There are five main types of connections:

    • Magnets – Most common, it lets things be compatible with Dwarven Forge
    • OpenLock – this is a kind of lego-esque clip connection from Printable Scenery. It’s open source.
    • Dragonbite – Proprietary (licensed) but compatible with their system
    • Infinitylock – Compatible with the DungeonWorks system
    • Glue – … As it says.

    Magnets are a little weirdly expensive, since you have to buy the magnets to go in. Glue is messy. What I wanted was OpenLock — it’s more stable and my supplies are upstairs.

    Originally I was really anti-glue, but as time went on, I came to appreciate it. Especially since I can throw things in the freezer to separate them! But day to day I wanted to be able to click things together like lego and change up sets when needed.

    Sets

    The next bit is where I got miles and miles confused. Here are the recommended starter sets for the Stone setup. I figured stone is 90% of what I’ll be using for now (there are other places where we may be on mats, like for the beach or wood battles; yes, beware, players! More adventures are coming!):

    Stone Floor

    • 16 dungeon_stone_floor.inch.E
    • 8 dungeon_stone_floor.inch.F
    • 2 dungeon_stone_floor.inch.R
    • 4 dungeon_stone_floor.inch.U

    Stone Walls

    • 20 dungeon_stone_wall.inch.A
    • 4 dungeon_stone_wall.inch.BA
    • 8 dungeon_stone_wall.inch.G
    • 2 dungeon_stone_wall.inch.IA
    • 4 dungeon_stone_wall.inch.Q
    • 4 dungeon_stone_column.inch.I.stl
    • 8 dungeon_stone_column.inch.L.stl
    • 4 dungeon_stone_column.inch.O.stl
    • 4 dungeon_stone_column.inch.X.stl
    • 4 dungeon_stone_column.inch.T.stl

    If you look at that, though, you wonder what the heck E, F, R and U are. Oh and then there’s Openforge vs Triplex!? And some of those files have SIDE in there!?!

    Bases

    First, we’ll cover the bases. The base you want depends on how you want to connect. The one we’re going to use is Triplex, which has many OpenLock ports: one per edge between squares and one per square. So here’s what Triplex looks like:

    The non-Triplex is just a ‘topper’ which you then glue or magnet onto your bases (hence why I came to love the glue). The Triplex comes with holes on the sides, which are for using OpenLock clips! That said, you will need to prep your printed whatnots for the clips. Which is also a pain in the ass.

    This means I looked at the toppers and then I looked at the bases. There were a lot of options, and I finally settled on the one that had the least amount of work for me: OpenForge 2.0 Plain OpenLOCK Base

    Related to that, the ‘side’ versions of walls are ones with click holes on the sides, as well as front/back.

    OpenLock

    I like this. It’s a double-ended clip you stick in the holes mentioned above. They look like this:

    Photo of Springy OpenLock Clip by Marcel Toele

    Stick ’em in the hole, click and done. Now if you don’t want to use the clips, you can put magnets in some of those holes. If you’re using OpenLock, you want about 2 clips per tile you print, so 54 for the starters. For magnets you’ll need 256. Big difference!

    Many people recommend the Springy OpenLock Clip by Marcel Toele, who makes a loose, medium, and firm springy version (he recommends the medium). For whatever reason, I could not get it to print reliably (sometimes it was okay), so I used the ones made by Jones and the official ones until I could figure out why my slice was wrong (I’m very new at this!).

    Letters

    Now! What the bell end does “dungeon_stone_wall.inch.A” mean?

    Well, after some serious digging I found the documentation about filenames! [Texture]#[Shape]+[Shape Options].[Letter].[Connection system]+[Options].stl

    Now… that’s the new naming convention, but as you can see on Thingiverse, the names aren’t that. But! That clued me in to the fact that those letters, A and AS and so on all relate to the OpenLock code!

    This is made worse by the Thingiverse display:

    If you hover over the name, you can see the whole name but you cannot click on the small picture and see the big one. I had to turn off ‘max width’ to get the full names, and everything looks like this:

    • dungeon_stone_floor.inch.AS.openforge.stl
    • dungeon_stone_floor.inch.AS.triplex.stl

    Obviously I don’t need openforge AND triplex. But that was still a pain in the butt to get the list of. And those tiny photos meant I was going mad trying to figure out what the hell was what. Thankfully someone else felt my pain, and with that I made a key!

    Keys

    Filename   Size                        #
    AS         3″ x 1″ rectangle (wide)
    E          2″ x 2″ square              16
    EA         3″ x 3″ square
    F          2″ x 2″ curve/V             8
    I          1″ x 1″ square
    R          2″ x 4″ rectangle           2
    S          1″ x 2″ rectangle (tall)
    SA         1″ x 3″ rectangle
    SB         1″ x 4″ rectangle
    U          4″ x 4″ square              4

    # means the recommended number to print from the starter set

    That makes a lot more sense, right? I tend to use sprawling dungeons and large rooms (they’re fighting in the castle as I started all this), so it would work for me. Except for the 8 F’s. I don’t use curves like that because I’m hand-making maps and I hate curves. I can’t even make straight lines.

    Walls are a little weirder since a number of items have ‘side’ options and ‘pegs’ options.

    Filename   Size                            Side   Pegs   #
    A          0.5″ x 2″                       Yes    Yes    20
    B          0.5″ x 1.5″                     Yes    No     4
    G          0.5″ x 2″ x 2″ curved corner    Yes    Yes    8
    IA         0.5″ x 1″                       Yes    Yes    2
    Q          0.5″ x 4″                       Yes    Yes    4
    I          1″ x 1″                         No     No     4
    L          1″ x 1″                         No     No     8
    O          1″ x 1″                         No     No     4
    X          1″ x 1″                         No     No     4
    T          1″ x 1″                         No     No     4

    # means the recommended number to print from the starter set

    You’ll notice new columns!

    • Pegs is if there’s an alt version with pegs for making a second layer (not needed for starters)
    • Sides is if there’s an alt version with clip slots on the sides

    All columns are 1″x1″, but have pegs in different places.

    What did I Print?

    The weird part about this was figuring out if I really wanted Triplex (supports all three) or just plain bottoms and glue on tops… I will likely use the same floors over and over and over again. And I don’t plan on stacking layers…. Yet. Not until they have fights in a house. Otherwise I’ll pop upstairs, get the next set, and pop back down and have them scream their delight at me.

    So what did I end up building?

    • 54+ OpenLock clips — these are a mix of springy and official and Devon’s
    • 36 2×2 bases (E) — these are a mix of topless (10) and the old plain ones (26)
    • 16 2×2 curved bases (G) – topless
    • 2 2×4 bases (R) — topless
    • 8 4×4 bases (U) — topless
    • 16 cut stone 2×2 toppers (E)
    • 20 cut stone 2×2 walls (G)
    • 20 cut stone 2×2 wall floor (G)
    • 4 cut stone 4×4 toppers (U)
    • 4 cut stone 4×4 wall floor (U)
    • 4 cut stone 4×4 walls (U)
    Example of how to do an interior hallway by Manfred G

    You’ll notice the quantities are more than they recommended. Since I went with toppers, while it does mean a lot of gluing, it also let me use the new Wall Floor system, which obviates the drama of a wall being a half inch. Instead of having a half-inch gap between a wall and the floor behind it (like if you’re doing internal walls), you can make a wall that takes up a half inch, and then a floor for an inch and a half.

    The example image I included is weird, I know, but basically those are TWO towne_wall.floor.inch.2x1, which are both 1 inch wide. Then two walls which are each a half inch for a total of three inches. Plug them onto an AS or SA (3 inches by 1 inch) and you have a hallway!

    Since I’m building a specific campaign, I mathed out what I needed to mimic the boss fight. In the end, I decided to change the size a little from paper map to cardboard to real, to compensate for what I was doing.

    I had to wait to post this until (a) the printer showed up and (b) I had successfully built the set. My wife knew I was doing this. The others did not. And due to the time crunch we did not paint them. Still… it worked! Here are some of the ones I’ve made (or am making) and a cardboard to compare:

    So far, it’s all been well received.

  • Why We Hate Your Security Reports


    I was having a day where a bunch of security reports were dumped in my lap. There have been days where those have been hundreds. Thankfully this wasn’t, but it did make me sit back and think about why I hated so many reports.

    In general, everything can be summed up as “The person reporting doesn’t provide all the relevant information.” Sometimes they don’t know, and that’s okay. People don’t learn by osmosis, they have to be taught. But a lot of the people submitting reports do know and just don’t share. And the real crux of it all? People aren’t explaining why things matter.

    Let me explain…

    The Uninformed

    These are the people who report an app and say “It’s this app because I installed it and I was hacked.” Another variant is “It’s this app because that’s where the hacked file lives.”

    And the email will have no other information. With those, not only do I have to explain that they’re probably wrong, but I have to dig to get at the information.

    • Why do you think it’s this app?
    • Do you have any evidence?

    Those are pretty basic questions at the heart, but it’s on me to explain why correlation isn’t causation, and just because (say) Hello Dolly was infected does not mean that it was vulnerable.

    This has no solution other than education. I dislike those, but since most people are pretty cool about it when I explain how I know it’s not the app they think it is, it’s okay.

    Semi-related are people who email us security logs from their services, and those are always messy, since it’s information without enough context. Which I’ll get to in a minute.

    The Too Terse

    Sometimes reports are explanatory but not enough. “It’s an XSS in this app.”

    Now, the good news is I know what I’m looking for. The bad news is that someone who actually knows what the issue is has decided to not share, which means I now have to figure out how they figured it out. It used to be I’d do that, but then they’d email back all snarky and bitchy that I didn’t find the one they found. Nowadays I push back. “Can you please provide details?”

    That’s weirdly hit-and-miss. Sometimes people are pretty chill and explain. The majority do what I think of as a pre-teen eyeroll. You can practically tell they’re huffing in annoyance that someone dared ask them to unpack what was in their heads.

    To put this differently, have you ever had someone say “Hey, the website’s down.” and just … not give you an error message? You know how maddening that is? Right. That’s what we’re talking about.

    The Non Explainers

    You’d think this is the same as Terse, but it’s not. The non-explainers don’t explain WHY something is a security issue.

    I know, someone’s reading this going “Hang on, but if I tell you it’s a SQL injection vulnerability, and where it is, isn’t that enough?” And the answer is, most of the time, yes! The people who give a great proof of concept, with exactly how to replicate it, in clear English, are my favourites. They break down how things happen so you can see “Oh that’s why.” But… When they don’t, it means someone (read: me) has to go and figure out “Okay, why is this bad, and how bad is it?”

    The Hater Reporters

    I can’t believe I had to add this one in, but here we are in 2021 and there are some ‘security reporter firms’ who think the best way to report an issue is

    1. make it public
    2. attempt to make other people feel bad
    3. dogshame developers

    For them, the only way forward is their road or no way at all, and they cannot be reasoned with. Eventually someone will sue them for releasing a 0-day vulnerability without even trying to privately disclose first, and when that happens, I’ll make the popcorn.

    The Issue Is Education

    If you go back through, you can see the real issues are people not unpacking what they actually know and sharing it in a digestible manner! And this is terribly endemic to security companies more than anything else.

    For example, recently a security company reported a local file inclusion (LFI) issue. Now for those who don’t know, the issue is the code in question could be used to include any file on the server. Including a hacked file. But if someone just told you “Hey that’s an LFI and you’re bad!” then, even if they take the time to tell you where the issue is, if they’re not explaining to you how it’s exploitable, you may not know!

    And then, even when people explain it, they explain as if they’re talking to a developer of their caliber. I certainly am, but the people I’ve got to pass the report to (the actual devs) are not always. Even when they are, sometimes they’re total berks who will snark that it’s not worth the time to escape things at that low a risk.

    Understanding Risk

    Security is massively important but the reality is that it’s not the first thing on most people’s minds when they write code (sorry folks). Usually people concentrate on making the code work first. Then, once it works, they go back to make it safer. I’m not casting aspersions here. There’s nothing wrong with making it work first. The issues begin when people don’t take security with the proper seriousness.

    Just the other day I saw someone who had to be told that yes, you always escape content you’re echoing. Why? Because users are humans, and humans do some really stupid things. Even if you think of yourself as average, that means roughly half of your users are not as smart as you, which means that half is who you’ve got to look out for. You sanitize content you save, you escape content you echo. All. The. Time.
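    That rule, sketched as a template fragment (the variable names here are made up; esc_html(), esc_attr(), and esc_url() are WordPress’s own helpers):

    ```php
    <?php
    // Hypothetical output code: escape at the exact moment you echo.
    // $title and $link stand in for whatever data was saved earlier.
    echo '<h2>' . esc_html( $title ) . '</h2>';
    echo '<a href="' . esc_url( $link ) . '" title="' . esc_attr( $title ) . '">Read more</a>';
    ```

    Note that the helper matches the context: HTML body, attribute, or URL. That’s the whole trick.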

    And yes, I’ve seen people who are experienced developers, people with plugins whose user count is in thousands, reply that it’s not needed to escape because … it’ll make their code slower.

    Sometimes I tell people “This is why I drink.”

    Proper Education

    Now. Part of this is on the community/company. WordPress, where I do a lot of work, has decent documentation about security. In fact, as of late WordPress’ docs have been phenomenal about this!

    Here’s what the plugin dev docs say about nonces:

    If your plugin allows users to submit data; be it on the Admin or the Public side; you have to make sure that the user is who they say they are and that they have the necessary capability to perform the action. Doing both in tandem means that data is only changing when the user expects it to be changing.

    Now. That kind of explains why you want to do this, but does it explain why it’s needed for security? Only from a high level. For the crux you have to scroll down a little:

    The capability check ensures that only users who have permission to delete a post are able to delete a post. But what if someone were to trick you into clicking that link? You have the necessary capability, so you could unwittingly delete a post.

    Now that makes a lot more sense, right? That is a good doc, assuming people read it.
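    A sketch of what that doc is describing, in WordPress terms (check_admin_referer(), current_user_can(), and wp_delete_post() are real WordPress functions; the nonce action string here is a made-up example):

    ```php
    <?php
    // Hypothetical delete handler: verify the nonce AND the capability.
    $post_id = absint( $_GET['post'] ?? 0 );

    // Nonce check: did this request actually come from a link we generated?
    // (check_admin_referer() dies on failure by default.)
    check_admin_referer( 'my-delete-post_' . $post_id );

    // Capability check: is this user actually allowed to delete this post?
    if ( ! current_user_can( 'delete_post', $post_id ) ) {
    	wp_die( 'You are not allowed to delete this post.' );
    }

    wp_delete_post( $post_id );
    ```

    Doing both in tandem is the point: the capability check alone can’t save a privileged user who was tricked into clicking the link.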

    And look at the bolded intro for escaping:

    Escaping means stripping out unwanted data, like malformed HTML or script tags.

    Whenever you’re rendering data, make sure to properly escape it. Escaping output prevents XSS (Cross-site scripting) attacks.

    Whenever. Not sometimes, not when it’s convenient. WHENEVER.

    And yes, this means every single time you echo anything as a variable, you damn well escape it. No questions asked.

    But when I say proper education, I mean in explaining why a specific issue is, in fact, an issue.

    Communication Is Queen

    If you’re a regular person who sent a report you weren’t sure about, you’re fine. This next bit is not about you. This is about security experts and other developers.

    Did you contact developers, privately, about issues with their code? If you’re a security company, do you have documentation on your site to explain how something is an XSS or LFI vulnerability? Did you explain in your contact why something is a risky LFI? Did you take a minute to share a Proof of Concept to illustrate how you knew something was a risk? Did you describe why and how the POC shows the risk?

    That’s what you have to do.

    You want other developers to be better and to write better code? Then you communicate, clearly, and take time to ensure what you’ve said is understandable by the recipient.

    I hate your security reports because they’re not reports at all. They’re dumping a problem on someone else and not giving them the tools to progress. You’re expecting them to do all the work you already did, which by the way is a waste of everyone’s time. It wastes my time because now I have to do everything you already did, and it wastes yours since I’ll probably ask you for the details.

    But. If you start with a private, polite report of “Hey, I found this. Here’s how and here’s why I know it’s an issue.” then you, my friend, are a hero. You’re actually making the entire world better for everyone.

    Thank you.

  • oEmbedding Hugo


    No, not Hugo …

    Hugo Weaving as "Mitzi Del Bra" from "The Adventures of Priscilla, Queen of the Desert"

    No no. Hugo

    HUGO - logo to the app.

    I’ve been using Hugo to power a part of a website for quite a while now. Since 2015 I’ve had a simple library of around 2000 posts that I wanted to be a static, less-possible-to-be-hackable site. Its purpose was to be an encyclopedia, and as a Hugo powered site, it works amazingly.

    But… I do use WordPress, and sometimes I want to link and embed.

    Integrations Are Queen

    It helps to have a picture of how I built things. Back in *cough* 1995, the site was a single HTML page. By 1996 it was a series of SHTML (yeah) with a manually edited gallery. Fast-forward to 2005 and we have a gallery in the thousands and a full blown wiki.

    Now. Never once did I have integrated logins. While I love it for the ease of … me, I hate it from a security standpoint. Today, the blog is powered by WordPress and the gallery by NetPhotoGraphics and the ‘wiki’ by Hugo (I call it a library now). Once in a while I’ll post articles or transcripts or recaps over on the library and I want to cross link to the blog to tell people “Hey! New things!”

    But… from a practical standpoint, what are the options?

    1. A plain ol’ link
    2. A ‘table’ list of articles/transcripts/etc by name with links
    3. oEmbed

    Oh yes. Option 3.

    oEmbed and Hugo is Complex

    Since Hugo is a static HTML generator, you have to create faux ‘endpoints’; you cannot make a dynamic JSON generator per post. Most of the things you’ll find when you google for oEmbed and Hugo are about how to make Hugo read oEmbed (like “adding a generic oEmbed handler for Hugo”). I wanted the other way around, so I broke down what I needed to do:

    1. Make the ‘oembed’ JSON
    2. Make the returning iframe
    3. Add the link/alternate tag to the regular HTML

    Unlike with NetPhotoGraphics, wherein I could make a single PHP file which generated the endpoints and the json and the iframe, I had to approach it from a different angle with Hugo, and ask myself “How do I want the ‘endpoints’ to look?”

    See you actually can make a pseudo endpoint of example.com/json/link/to/page which would generate the iframe from example.com/link/to/page and then example.com/oembed/link/to/page but this comes with a weird cost. You will actually end up having multiple folders on your site, and you’d want to make an .htaccess to block things.

    This has to do with how Hugo (and most static site generators) make pages. See, if I wanted to make a page for ‘about’, then I would go into /posts/ and make a file called about.md with the right headers. But that doesn’t make a file called about.html; it actually makes a folder in my public_html directory, called about, with a file in there named index.html — that’s basic web directory stuff, though.

    But Hugo has an extra trick, which allows you to make custom files. Most people use it to make AMP pages and they explain the system like this:

    A page can be output in as many output formats as you want, and you can have an infinite amount of output formats defined as long as they resolve to a unique path on the file system. In the above table, the best example of this is AMP vs. HTML. AMP has the value amp for Path so it doesn’t overwrite the HTML version; e.g. we can now have both /index.html and /amp/index.html.

    Except… your ‘unique path’ doesn’t have to be a path! And you can customize it to kick out differently named files. So instead of /index.html and /amp/index.html I could do /index-amp.html in the same location.

    So that means my options were:

    1. A custom folder (and subfolders) for every post per ‘type’ of output
    2. Subfiles in the already existing folder

    I picked the second and here’s how:

    Output Formats

    The secret sauce for Hugo is making a new set of output formats.

    outputFormats:
      iframe:
        name: "iframe"
        baseName: "iframe"
        mediaType: "text/html"
        isHTML: true
      oembed:
        name: "oembed"
        baseName: "oembed"
        mediaType: "application/json"
        isPlainText: true

    By omitting the path value and telling it that my baseName is iframe and oembed, I’m telling Hugo not to make a new folder, but to rename the files! Instead of making /oembed/index.html and /oembed/about/index.html I’m making /about/oembed.html!

    Boom.

    The next trick was to tell Hugo what ‘type’ of content should use those new formats:

    outputs:
      home: [ "HTML", "JSON", "IFRAME", "OEMBED" ]
      page: [ "HTML", "IFRAME", "OEMBED" ]
      section: [ "HTML", "IFRAME", "OEMBED" ]

    Home also has a JSON which is something I use for search. No one else needs it.

    New Template Files

    I’ll admit, this took me some trial and error. In order to have Hugo generate the right files, and not just a copy of the main index, you have to add new template files. Remember those basenames?

    • index.oembed.json
    • index.iframe.html

    Looks pretty obvious, right? The iframe file is the HTML for the iframe. The oembed is the JSON for oembed discovery. Those go right into the main layouts folder of your theme. But… I ended up having to duplicate things in order to get everything working and that meant I also made:

    • /_default/baseof.iframe.html
    • /_default/baseof.oembed.json
    • /_default/single.iframe.html
    • /_default/single.json

    Now, if you’re wondering “Why is it named single.json?” I don’t know. What I know is if I named it any other way, I got this error:

    WARN: found no layout file for “oembed” for layout “single” for kind “page”: You should create a template file which matches Hugo Layouts Lookup Rules for this combination.

    So I did that and it works. I also added in these:

    • /section/section.iframe.html
    • /section/section.oembed.json

    Since I make heavy use of special sections, that was needed.

    The Template Files

    They actually all look pretty much the same.

    There’s the oembed JSON:

    {
      "version": "1.0",
      "provider_name": "{{ .Site.Title }}",
      "provider_url": "{{ .Site.BaseURL }}",
      "type": "rich",
      "title": "{{ .Title }} | {{ .Site.Title }}",
      "url": "{{ .Permalink }}",
      "author_name": "{{ if .Params.author }}{{ .Params.author }}{{ else }}Anonymous{{ end }}",
      "html": "<iframe src=\"{{ .Permalink }}iframe.html\" width=\"600\" height=\"200\" title=\"{{ .Title }}\" frameborder=\"0\" marginwidth=\"0\" marginheight=\"0\" scrolling=\"no\" class=\"hugo-embedded-content\"></iframe>"
    }
    
    

    And there’s the iframe HTML:

    <!DOCTYPE html>
    <html lang="en-US" class="no-js">
    <head>
    	<title>{{ .Title }} &middot; {{ .Site.Title }}</title>
    	<base target="_top" />
    	<style>
    		{{ partial "oembed.css" . | safeCSS }}
    	</style>
    	<meta name="robots" content="noindex, follow"/>
    	<link rel="canonical" href="{{ .Permalink }}" />
    </head>
    <body class="hugo hugo-embed-responsive">
    	<div class="hugo-embed">
    		<p class="hugo-embed-heading">
    			<a href="{{ .Permalink }}" target="_top">{{ .Title }}</a>
    		</p>
    		<div class="hugo-embed-excerpt">
    			{{ .Summary }}...
    		</div>
    		<div class="hugo-embed-footer">
    			<div class="hugo-embed-site-title">
    				<a href="{{ .Site.BaseURL }}" target="_top">
    					<img src="/images/oembed-icon.png" width="32" height="32" alt="{{ .Site.Title }}" class="hugo-embed-site-icon"/>
    					<span>{{ .Site.Title }}</span>
    				</a>
    			</div>
    		</div>
    	</div>
    </body>
    </html>
    
    

    Note: I set summaryLength: 10 in my config to limit the summary to something manageable. And no, you’re not mis-reading that, the library generally has no images.

    And then in my header code for the ‘normal’ html pages:

    	{{ if not .Params.notoembed }}
    	{{ "<!-- oEmbed -->" | safeHTML }}
    	<link rel="alternate" type="application/json+oembed" href="{{ .Permalink }}oembed.json"/>
    	{{ end }}
    

    I wanted to leave a way to say certain pages were non embeddable, and while I’m not using it at the moment, the logic remains.

    Does it Work?

    Of course!

    Nice, quick, to the point.

  • oEmbedding Galleries


    I use NetPhotoGraphics to handle a 2.5 gig gallery, spanning back 20 or so years. The gallery used to be a home grown PHP script, then it was Gallery, then Gallery 2, then ZenPhoto, and now NetPhotoGraphics (which ostensibly is a fork of ZenPhoto, but diverged in a way I’m more supportive of).

    Anyway. I use this gallery in conjunction with a WordPress site. I’ll post news on WordPress and link to the gallery. But for years, to do that my choices were:

    1. make a text link
    2. make a photo which is a link
    3. copy all the thumbnails over and link each one

    Those all suck. Especially the third, since you can’t (out of the box) custom link images in a gallery in WordPress and frankly I don’t like any of the plugins.

    Once upon a time, I used a ZenPhoto plugin, but it’s been abandoned for years and stopped working a while ago. I needed something that had an elegant fallback (i.e. if you uninstall the plugin) and seriously thought about forking the WordPress plugin…

    But then I had a better idea.

    Why oEmbed?

    oEmbed is an industry standard. By having your app (Flickr, Twitter, your WordPress blog) offer a custom endpoint, someone can embed it easily into their own site! WordPress has supported many embeds for a long time, but as of 2015, it has included oEmbed Discovery. That’s why you can paste in a link to Twitter, and WordPress will automagically embed it!

    I maybe wrote an oembed plugin for another CMS so I could embed things into WordPress… Because the other option was a MASSIVE complex WP Plugin and FFS why not?

    — ipstenu (Mika E.) (@Ipstenu) September 26, 2021

    (Note: I shut down my twitter account in November ‘22 when it was taken over by a narcissist who brought back abuse.)

    I just pasted the URL https://twitter.com/Ipstenu/status/1441950326777540609 in and WordPress automagically converts it to a pretty embed. About the only social media company you can’t do that with is Facebook, who requires you to make an app (I use Jetpack for it). Anyway, point being, this is also how tools like Slack or Discord know to embed your content when you paste in a link!

    By making an oEmbed endpoint, I allow my site to become more shareable and more engaging, which is a net positive for me. If I do it right, out of the box it’ll allow anyone with a WordPress site (i.e. me) to paste in a URL to my gallery and it looks pretty! Win win!

    The NetPhotoGraphics Plugin

    Now. I’m a terrible designer, so I literally copied the design WordPress itself uses for embeds and spun up a (relatively) fast solution: oEmbed for NetPhotoGraphics.

    The code is one file (oembed.php) which goes in the /plugins/ folder in your NetPhotoGraphics install. Then you activate the plugin and you’re done. There is only one thing to customize: the ‘gallery’ icon. By default it grabs a little NPG logo, but if you put a /images/oembed-icon.png image in your gallery, it’ll use that.

    And does it work? Here’s how the first version looked on a live page:


    I wanted to limit the images since sometimes I have upwards of 200 (look, episodes of CSI are a thing for me). And frankly pasting in a URL to the gallery is a lot easier than drilling down on a list of a hundred albums. This is exactly what I needed.

    Since the creation of that, I worked with the NetPhotoGraphics developer and he helped me make it better.

    One Bug and a Future

    There’s room to grow here. Thanks to S. Billard, it’s become a lot more flexible. You can override the basic design with your own theme, you can replace the icons, and there are even options to adjust the size of the iframes. Part of me thinks it could use a nicer design, maybe a single-photo Instagram style embed instead of what I have, but that’s not my forte. Also I have yet to get around to putting in ‘share’ options. (Pull Requests welcome!)

    And yes, I know the security isn’t ‘enough’ but I wasn’t able to get it to work how I wanted due to a weird bug. You see, I did run into a rare quirk with WordPress due to how I built out the site. IF you have your gallery in a subfolder under/beside a WordPress install AND you try to embed the gallery into that WordPress site, you MAY find out WP thinks your embed is WordPress and not NPG.

    In my case, I have:

    • example.com – WordPress
    • example.com/gallery – NetPhotoGraphics

    I guess WordPress reads a little too deep into who’s WP and who’s not, which resulted in me making this WordPress filter:

    add_filter( 'embed_oembed_html', 'npg_wrap_oembed_html', 99, 4 );
    
    function npg_wrap_oembed_html( $cached_html, $url, $attr, $post_id ) {
    	if ( false !== strpos( $url, '://example.com/gallery' ) ) {
    		$cached_html = '<div class="responsive-check">' . $cached_html . '</div>';
    
    		$cached_html = str_replace( 'wp-embedded-content', 'npg-embedded-content', $cached_html );
    		$cached_html = str_replace( 'sandbox="allow-scripts"', '', $cached_html );
    		$cached_html = str_replace( 'security="restricted"', '', $cached_html );
    	}
    	return $cached_html;
    }
    
    

    Change '://example.com/gallery' to the location of your own gallery install.

    No, I don’t like this either, but it was a ‘get it done’ moment. It’s also why the iframe security is lacking.

  • Hugo Deployment via GitHub Actions

    Hugo Deployment via GitHub Actions

    For a long time I’ve been managing deployment of a Hugo site via a few home-grown scripts and actions. All that has changed.

    The Setup

    So let’s get our setup explained:

    1. Hugo is used to manage a library of data
    2. The posts and the theme are in the same repo, but stored under ‘Content’ (which has data, posts, and static) and themes (which has … the theme)
    3. Most of the changes are in the post content

    Okay, now this is not a conversation about why (or why not) use Hugo. I like it for my pretty much text-only wiki type content, in that it lets me keep things organized and usable from everything including my iPad.

    But this use of Hugo comes with a cost. One of the reasons people love CMS tools like WordPress is that you can edit in your browser, and frankly that’s super easy. Using a static site builder, you have to run (somewhere) the static site build command. For a while I had a ‘deployme’ local command that did this:

    $ hugo -F -s /home/username/Development/site/hugo
    $ rsync -aCt --delete --exclude-from '/home/username/Development/rsync-exclude.txt' --force --omit-dir-times -e ssh /home/username/Development/site/hugo/ user@example.com:/home/username/domain/hugo
    

    Not super complicated, right? But it lives on my laptop, which means I can’t always push code, like when I edit directly on GitHub or on my iPad.

    Normally I’d look into something like Codeship (which I’ve talked about before) but … I thought it was high time I sat down and made things simpler.

    What’s Simpler?

    In this case, simpler means “fewer moving parts that could go wrong.”

    See I love Codeship, it lets me do a lot of cool things, but it’s also a little fragile and (honestly) sucky when it comes to Hugo. Creating a server and running hugo took longer than it did on my laptop. A lot longer. Many minutes longer.

    If I ran it locally it was a few seconds:

    Start building sites …
    hugo v0.87.0+extended darwin/arm64 BuildDate=unknown
    
                       |  EN
    -------------------+-------
      Pages            | 1762
      Paginator pages  |    0
      Non-page files   |    0
      Static files     |   92
      Processed images |    0
      Aliases          |    2
      Sitemaps         |    1
      Cleaned          |    0
    
    Built in 1562 ms
    

    When I did it on Codeship it would take 5 to 14 minutes! That’s crazy, right? Which is why I moved off Codeship and down to local. But that came with the cost of limiting when and where I could run anything. While my code is super simple, it’s also siloed, and that’s bad.

    In order to achieve simplicity, what I really needed was code that runs from GitHub, or on the server where the site is. Back in ‘the day’ I installed Hugo on the server, and Git too! That meant I pushed to my git repo, which was on the same server, and I used post-commit hooks to deploy. I’ve toyed around with a few iterations, but then I moved to a new server where I didn’t install Go because … I didn’t need it.

    And that means here, simple is:

    • runnable from anywhere
    • automated
    • restricted when needed
    • not crossing multiple services

    Which led me to Github actions.

    Github Actions

    This is a service from Github.

    GitHub Actions makes it easy to automate all your software workflows, now with world-class CI/CD. Build, test, and deploy your code right from GitHub. Make code reviews, branch management, and issue triaging work the way you want.

    In other words, Github saw us all using Travis and Codeship and thought “We could do that and keep people here, right?”

    But Actions goes beyond just automation. The Actions interface allows you to run tests, builds, checks, and, yes, deploys. It’s an order of magnitude faster than tools like Codeship because it’s a stripped down, basic interface. It’s also controlled by Github so you don’t need more access than committing code.

    There are some cons, though. One of the headaches with Codeship was that when Hugo updated, Codeship might just … stop working right. So you had to find the magic sauce to make it work. With Github Actions, you’re using ‘actions’ built by other people a lot of the time, and if you’re familiar with the drama that happened in npm a while ago, you may share my fear of “What if someone else deletes their action…?”

    Yeah, I have concerns.

    main.yml

    Here’s my code:

    name: 'Generate and deploy'
    
    on:
      push:
        branches: [ production ]
    
    jobs:
      deploy-website:
        runs-on: ubuntu-latest
        steps:
          - name: Do a git checkout including submodules
            uses: actions/checkout@v2
            with:
              submodules: true
    
          - name: Setup Hugo
            uses: peaceiris/actions-hugo@v2
            with:
              hugo-version: 'latest'
              # extended: true
    
          - name: Build Hugo
            run: hugo --minify
    
          - name: Deploy to Server
            uses: easingthemes/ssh-deploy@main
            env:
              SSH_PRIVATE_KEY: ${{ secrets.SERVER_SSH_KEY }}
              ARGS: "-rlgoDzvc -i"
              SOURCE: "public/"
              REMOTE_HOST: ${{ secrets.REMOTE_HOST }}
              REMOTE_USER: ${{ secrets.REMOTE_USER }}
              TARGET: "/home/username/domain/library/"
              #EXCLUDE: "/dist/, /node_modules/"
    

    There are a number of alternatives, but I picked peaceiris/actions-hugo because that developer is well known and respected. And while there are all-in-one Hugo build-and-deploy actions, I decided to separate the steps because I liked peaceiris’ code. That meant I needed an rsync or SSH deployment. I settled on easingthemes/ssh-deploy because they strongly encourage the use of secrets, and that’s a good sign to me. It’s also heavily recommended by Flywheel, and I cannot imagine them being reckless.

    The only ‘gotcha’ I had was that the directions for how to set up SSH were not great.

    To make it work, you need to create a pem key on the server:

    ssh-keygen -m PEM -t rsa -b 4096
    

    Then you need to put that key in a secret (I named mine SERVER_SSH_KEY). But what they don’t mention quite as clearly is what this means:

    Private key part of an SSH key pair. The public key part should be added to the authorized_keys file on the server that receives the deployment.

    Yes, they’re saying “the public key for your own server has to be in the authorized_keys on that same server.” And yes, that’s a weird thing to say, but there it is. That means you copy your own key from your server at ~/.ssh/id_rsa.pub (the .pub is the part that tells you this is the PUBLIC key) and you paste it onto the end of ~/.ssh/authorized_keys on the same server. Yes, it’s really funky.
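As a concrete sketch of that dance (the throwaway directory and the empty passphrase are my additions, purely so the example is safe to run as-is; on a real server you’d work in ~/.ssh):

```shell
# Sketch only: a scratch directory stands in for ~/.ssh on the server
# that receives the deployment, so nothing real gets touched.
SSH_DIR="$(mktemp -d)"

# Generate the PEM-format key pair. -N '' means an empty passphrase;
# drop that flag to be prompted interactively.
ssh-keygen -m PEM -t rsa -b 4096 -N '' -f "$SSH_DIR/id_rsa" >/dev/null

# The private key ($SSH_DIR/id_rsa) is what goes into the
# SERVER_SSH_KEY secret on GitHub. The matching PUBLIC key then gets
# appended to authorized_keys on that same server:
cat "$SSH_DIR/id_rsa.pub" >> "$SSH_DIR/authorized_keys"
chmod 600 "$SSH_DIR/authorized_keys"
```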

    My ongoing concerns

    I mentioned that there are security issues. I spend a lot of time in WordPress plugin land, where people could commit nefarious code to a plugin any day. Some do. I’ve banned enough people from .org for stupid stuff like that. And some of Github’s advice matches my own: the only way to be safe is to do the damn research yourself.

    But that’s not really something most people can do. And it’s something for a longer post in general. My short list of concerns right now is:

    • the action I’m using is deleted
    • the action is edited and breaks my flow/injects malware
    • the action is used to steal my credentials

    There are ways to mitigate this:

    • I can use actions made and managed and maintained by Github only (those are under the Actions org) — those are unlikely to be deleted and can be trusted as much as Github
    • I can make copies of the actions I want to use (and periodically remember to update them…)
    • I can make use of encrypted secrets to hide sensitive information

    But. It’s a risk. And I know it.
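One more mitigation worth noting: GitHub lets you pin a third-party action to a full commit SHA instead of a movable tag like @v2, so the code that runs can’t be silently swapped out from under you. A sketch against the workflow above (the SHA here is a placeholder, not a real commit):

```yaml
      - name: Setup Hugo
        # Pinned to a full commit SHA (placeholder) instead of a tag,
        # so edits to the tag can't change what actually runs.
        uses: peaceiris/actions-hugo@0123456789abcdef0123456789abcdef01234567
```

Pinning doesn’t help if the whole repo is deleted, though; that’s what the ‘make copies’ option covers.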

  • FreshRSS: A Simpler Self Hosted RSS Manager

    FreshRSS: A Simpler Self Hosted RSS Manager

    I’ve been using Tiny Tiny RSS for … well years. Almost a decade. I like it a lot, the interface is nice and pretty to use. But there have always been some serious lingering issues with it.

    1. The developer is very opinionated, to the point of aggression
    2. The development is Docker-first, to the point that non-Docker support is non-existent
    3. Support for ‘non modern’ browsers means Safari is not supported

    Now I’m opinionated, and I can be curt and blunt at times. And I work with a lot (A LOT) of people who are similar. I do plugin reviews for WordPress.org — trust me, I know from opinionated developers. I have lost track of the time I’ve spent arguing with prima donnas who cannot fathom that their code might not be god’s gift to the universe.

    The majority of people, thankfully, are not like that. They recognize no one is perfect, they understand that sometimes you have to make allowances in your code for the sake of a system, and most of all they aren’t aggro when told “no.” (If you find yourself getting pissed off, BTW, when someone reviews your code, yes, I’m talking about you.)

    Anyway. Andrew Dolgov is an amazing developer, a talented one at that. But he has a very ‘my way or GTFO’ kind of flow, and since it’s a single-man project, I really do get that. And for the time that he happily supported PHP on whatever, I didn’t care. The code worked, he didn’t have any strong opinions that offended me (like being a Nazi sympathizer, and yes, I’ve ditched software I love for that), and so what if he was a bit prickly?

    But… He’s Docker all in. And I like Docker, but I don’t want to run it all the time, and certainly not for a flippin’ RSS reader that is PHP and SQL and that’s it. As time went on, it got harder and harder and harder to manage and maintain a slight fork, to the point that it’s just not worth it.

    The Replacement: FreshRSS

    FreshRSS. It’s a barebones, simple, easy to install RSS reader. How easy? It’s practically a ‘famous five minute install.’

    The selling points are:

    That’s really all I needed.

    The install was: download the latest release, unzip it on my server, go to the URL where I’d installed it (i.e. https://example.com), and enter the DB credentials. Then I made a new account and boom. Done.

    Much like with TTRSS, I have to set up a cron job to run the refresh, which I set to hourly:

    php /home/username/example.com/app/actualize_script.php > /home/username/FreshRSS.log 2>&1
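For completeness, ‘hourly’ in actual crontab terms is just five time fields in front of that same command (minute 0 of every hour); the paths are the same placeholder ones as above:

```
0 * * * * php /home/username/example.com/app/actualize_script.php > /home/username/FreshRSS.log 2>&1
```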

    Now I have to migrate my content to actually have something to check.

    The Migration

    First up, you have to export from TTRSS, which is not as obvious as all that. The best way is via command line:

    $ php ./update.php --opml-export "ipstenu:ipstenu.opml"

    Don’t waste time with the various plugins, they’re not supported and in my experience, don’t work. Also if you’re mystified trying to find out how to export, it’s not just you. I had to trawl through the forums to find an example that didn’t work, but did link me to the code and I was able to figure it out from there.

    Once you have that, save the OPML file and pop over to FreshRSS and import. It will keep your categories and everything.

    Yeah, that was it!

    The Tweaks

    Most of the settings are fine as is. I turned off the option to mark items as read when I scroll past them (I regularly use unread as my list of what to handle next).

    I also added a filter to mark a specific feed as read unless it mentions a keyword, which was as easy as adding a -intitle:keyword filter to that feed.

    I picked a theme that made me happy to boot.

    All in all, it was a super easy move.