Hugo is my favourite static site generator. I use it on a site I originally created in 1996 (yes, FLF is about to be 30!). Over the last 6 months, I’ve been totally overhauling the site from top to bottom, and one of the long-term goals I had was to add in Cookie Consent.
Hugo Has Privacy Mode
One of the nice things about Hugo is that it has a built-in handler for Privacy Mode.
I have everything set to respect Do Not Track and use PrivacyMode whenever possible. It lightens my load a lot.
Built Into Hinode: CookieYes
The site makes use of Hinode, which has built-in support for cookie consent… kind of. It uses the CookieYes service, which I get, but I hate it. I don’t want to offload things to a service. In fact, part of the reason I moved off WordPress and onto Hugo for the site was GDPR.
I care deeply about privacy. People have a right to privacy, and to opt in to tracking. A huge part of that is minimizing the amount of data from your own website that gets sent around to other people, and how much gets saved on your own server/services!
Obviously I need to know some things. I need to know how many mobile users there are so I can make it better. I need to know what pages have high traffic so I can expand them. If everyone is going to a recap page only to try and find a gallery, then I need to make those more prominent.
In other words, I need Analytics.
And the best analytics? Still Google.
Sigh.
Alternatively: CookieConsent
I did my research. I checked a lot of services (free and pay), I looked into solutions people have implemented for Hugo, and then I thought there has to be a simple tool for this.
CookieConsent is a free, open-source (MIT) mini-library which allows you to manage scripts — and consequently cookies — in full GDPR fashion. It’s written in vanilla JS and can be integrated into any web platform/framework.
And yes, you can integrate with Hugo.
How to Add CookieConsent to Hugo
First, download it. I have node set up to handle a lot of things, so I went with the easy route:
npm i vanilla-cookieconsent@3.1.0
Next, I have to add the dist files to my site. I added in a command to my package.json:
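It’s just a copy script, roughly like this (the destination paths here are what my setup uses; adjust them, and which dist files you want, for your own):

{
  "scripts": {
    "copy:cookieconsent": "cp node_modules/vanilla-cookieconsent/dist/cookieconsent.esm.js static/js/ && cp node_modules/vanilla-cookieconsent/dist/cookieconsent.css static/css/"
  }
}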
If you’re familiar with Hinode, you may notice I’m not using the suggested way to integrate JS. If I was doing this in pure Hinode, I’d be copying the files to assets/js/critical/functional/ instead of my static folder.
I tried. It errors out:
Error: error building site: EXECUTE-AS-TEMPLATE: failed to transform "/js/critical.bundle-functional.js" (text/javascript): failed to parse Resource "/js/critical.bundle-functional.js" as Template:: template: /js/critical.bundle-functional.js:210: function "revisionMessage" not defined
I didn’t feel like debugging the whole mess.
Anyway, once you get those files in, you need to make another special js file. This file is your configuration or initialization file. And if you look at the configuration directions, it’s a little lacking.
Instead of that, go look at their Google Example! This gives you everything you need to comply with Google Tag Manager Consent Mode, which matters to me. I copied that into /static/js/cookieconsent-init.js and customized it. Like, I don’t have ads so I left that out.
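For reference, here’s a heavily trimmed sketch of the sort of thing that ends up in that init file. The upstream Google example has the full version; the button text and category names here are placeholders, not what I actually ship:

import * as CookieConsent from './cookieconsent.esm.js';

// Google Consent Mode: default analytics storage to denied until the visitor opts in.
window.dataLayer = window.dataLayer || [];
function gtag() { dataLayer.push(arguments); }
gtag('consent', 'default', { analytics_storage: 'denied' });

CookieConsent.run({
    categories: {
        necessary: { readOnly: true },
        analytics: {},
    },
    onConsent: () => {
        if (CookieConsent.acceptedCategory('analytics')) {
            gtag('consent', 'update', { analytics_storage: 'granted' });
        }
    },
    language: {
        default: 'en',
        translations: {
            en: {
                consentModal: {
                    title: 'We use cookies',
                    description: 'Placeholder text…',
                    acceptAllBtn: 'Accept',
                    acceptNecessaryBtn: 'Reject',
                    showPreferencesBtn: 'Manage preferences',
                },
                preferencesModal: {
                    title: 'Cookie preferences',
                    acceptAllBtn: 'Accept all',
                    acceptNecessaryBtn: 'Reject all',
                    savePreferencesBtn: 'Save preferences',
                    sections: [
                        { title: 'Strictly necessary', linkedCategory: 'necessary' },
                        { title: 'Analytics', linkedCategory: 'analytics' },
                    ],
                },
            },
        },
    },
});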
Add Your JS and CSS
I already have a customized header (/layouts/partials/head/head.html) for unrelated reasons, but if you don’t, copy the one from Hinode core over and add in this above the call for the SEO file:
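Something like this (the paths are wherever you put the files):

<link rel="stylesheet" href="{{ "css/cookieconsent.css" | relURL }}">
<script type="module" src="{{ "js/cookieconsent-init.js" | relURL }}"></script>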
Since your init file contains the call to the main consent code, you’re good to go!
The Output
When you visit the site, you’ll see this:
Screenshot
Now, there’s a typo in this one: it should say “That means if you click “Reject” right now, you won’t get any Google Analytics cookies.” I fixed it before I pushed anything to production. But I made sure to specify that so people know right away.
If you click on manage preferences, you’ll get the expanded version:
Screenshot
The language is dry as the desert because it’s to meet Google’s specifics.
As for ‘strictly necessary cookies’?
At this time we have NO necessary cookies. This option is here as a placeholder in case we have to add any later. We’ll notify you if that happens.
This is a part of ‘white labeling’, which basically means rebranding.
When you have a website whose users don’t really need to mess with WordPress, or learn all about it (because you properly manage your own documentation for your writers), then that W in your admin toolbar is a bit odd, to say the least.
This doesn’t mean I don’t want my editors to know what WordPress is; we have a whole about page, and the powered-by remains everywhere in the admin pages. But that logo…
Well anyway, I decided to nuke it.
Remove What You Don’t Need
First I made a function that removes everything I don’t need:
function cleanup_admin_bar(): void {
global $wp_admin_bar;
// Remove customizer link
$wp_admin_bar->remove_menu( 'customize' );
// Remove WP Menu things we don't need.
$wp_admin_bar->remove_menu( 'contribute' );
$wp_admin_bar->remove_menu( 'wporg' );
$wp_admin_bar->remove_menu( 'learn' );
$wp_admin_bar->remove_menu( 'support-forums' );
$wp_admin_bar->remove_menu( 'feedback' );
// Remove comments
$wp_admin_bar->remove_node( 'comments' );
}
add_action( 'wp_before_admin_bar_render', 'cleanup_admin_bar' );
I also removed the comments node and the customizer because this site doesn’t use comments, and also how many times am I going to that Customizer anyway? Never. But the number of times I misclick on my tablet? A lot.
But you may notice I did not delete everything. That’s on purpose.
Make Your New Nodes
Instead of recreating everything, I reused some things!
Way back when WP was simple and adding a sidebar to the editor was a simple metabox, I had a very straightforward setup with a box that, on page load, would tell you if the data in the post matched the remote API, and if not would tell you what to update.
My plan was to have it update on refresh, and then auto-correct if you press a button (because sometimes the API would be wrong, or grab the wrong account — loose searching on people’s names is always rough). But my plan was ripped asunder by this new editor thingy, Gutenberg.
I quickly ported over my simple solution and added a note “This does not refresh on page save, sorry.” and moved on.
Years later brings us to 2024, with November being my ‘funemployment’ month, where I worked on little things to keep myself sharp before I started at AwesomeMotive. Most of the work was fixing security issues, moving the plugin into the theme so there was less to manage, modernizing processes, upgrading libraries, and so on.
But one of those things was also making a real Gutenbergized sidebar that auto-updates (mostly).
What Are We Doing?
On LezWatch.TV, we collect actor information that is public and use it to generate our pages. So if you wanted to add in an actor, you put in their name, a bio, an image, and then all this extra data like websites, social media, birthdates, and so on. WikiData actually uses us to help determine gender and sexuality, so we pride ourselves on being accurate and regularly updated.
In return, we use WikiData to help ensure we’re showing the data for the right person! We do that via a simple search based on either their WikiData ID (QID), IMDb ID, or their name. The last one is pretty loose since actors can have the same name now (oh for the days when SAG didn’t allow that…). We use the QID to override the search in cases where it grabs the wrong person.
I built a CLI command that, once a week, checks actors for data validity. It makes sure the IMDb IDs and socials are formatted properly, it makes sure the dates are valid, and it pings WikiData to make sure the birth/death etc data is also correct.
With that already in place, all I needed was to call it.
You Need an API
The first thing you need to know about this is that Gutenberg uses the JSON API to pull in data. You can have it pull in everything via custom post meta, but as I already have a CLI tool run by cron to generate that information, making a custom API call was actually going to be faster.
I went ahead and made it work in a few different ways (you can call it by IMDb ID, post ID, QID, and the slug) because I planned for the future. But really all any of them are doing is a search like this:
/**
* Get Wikidata by Post ID
*
* @param int $post_id
* @return array
*/
private function get_wikidata_by_post_id( $post_id ): array {
if ( get_post_type( $post_id ) !== 'post_type_actors' ) {
return array(
'error' => 'Invalid post ID',
);
}
$wikidata = ( new Debug_Actors() )->check_actors_wikidata( $post_id );
return array( $wikidata );
}
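For context, a route like that can be wired up along these lines; this is a hedged sketch rather than our actual registration code (the namespace, URL pattern, and permission check are assumptions):

// Inside the same class that holds get_wikidata_by_post_id():
public function register_routes(): void {
	register_rest_route(
		'lwtv/v1',
		'/wikidata/(?P<id>\d+)',
		array(
			'methods'             => 'GET',
			'permission_callback' => 'is_user_logged_in',
			'callback'            => function ( WP_REST_Request $request ) {
				return rest_ensure_response( $this->get_wikidata_by_post_id( (int) $request['id'] ) );
			},
		)
	);
}
// Hooked up with: add_action( 'rest_api_init', array( $this, 'register_routes' ) );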
The return array is a list of the data we check for, and each entry is either the string ‘matches’ (i.e. true), or an array containing WikiData’s value and our value.
Making a Sidebar
Since we have our API already, we can jump to making a sidebar. Traditionally in Gutenberg, we make a sidebar panel for the block we’re adding. If you want a custom panel, you can add one in with an icon on the Publish Bar.
While that’s great and all, I wanted this to be on the side by default for the actor, like Categories and Tags. Since YoastSEO (among others) can do this, I knew it had to be possible.
But when I started to search around, all anyone told me was how I had to use a block to make that show.
I knew it was bullshit.
Making a Sidebar – The Basics
The secret sauce I was looking for is decidedly simple.
I knew about PanelRow but finding PluginDocumentSettingPanel took me far longer than it should have! The documentation doesn’t actually tell you ‘You can use this to make a panel on the Document settings!’ but it is obvious once you’ve done it.
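At its simplest, it’s this shape (a minimal sketch; the slug, title, and contents are placeholders, not our real ones):

// Imports assumed: registerPlugin from '@wordpress/plugins', PanelRow from
// '@wordpress/components', and PluginDocumentSettingPanel from '@wordpress/editor'
// (on older WordPress it lives in '@wordpress/edit-post').
const WikiDataPanel = () => (
	<PluginDocumentSettingPanel name="wikidata-check" title="WikiData Check">
		<PanelRow>Checking data…</PanelRow>
	</PluginDocumentSettingPanel>
);

registerPlugin('lwtv-wikidata-check', { render: WikiDataPanel });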
Making it Refresh
This is a pared down version of the code, which I will link to at the end.
The short and simple way is that I’m using useEffect to refresh:
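Here’s a rough sketch of the shape of it (the state names and API path are assumptions, not the production code):

// Imports assumed: useState/useEffect from '@wordpress/element',
// useSelect from '@wordpress/data', apiFetch from '@wordpress/api-fetch'.
const postId = useSelect((select) => select('core/editor').getCurrentPostId(), []);
const postType = useSelect((select) => select('core/editor').getCurrentPostType(), []);
const status = useSelect((select) => select('core/editor').getEditedPostAttribute('status'), []);
const [wikiData, setWikiData] = useState(null);

useEffect(() => {
	// An auto-draft has nothing saved yet, so there's nothing to check.
	if (status === 'auto-draft') {
		return;
	}
	apiFetch({ path: `/lwtv/v1/wikidata/${postId}` }).then(setWikiData);
}, [postId, status]);

Then comes the post type check: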
if (postType !== 'post_type_actors') {
return null;
}
That simply prevents the rest of the code from trying to run. It has to come after the useEffect because React hooks have to run unconditionally, in the same order, on every render; if you put a return before the hook, it fails linting (and I enforce linting on this project).
How it works is on page load of an auto-draft, it tells you to save the post before it will check. As soon as you do save the post (with a title), it refreshes and tells you what it found, speeding up initial data entry!
But then there’s the issue of refreshing on demand.
HeartBeat Flatline – Use a Button
I did, at one point, have a functioning heartbeat checker. That can get pretty expensive and it calls the API too many times if you leave a window open. Instead, I made a button that uses a constant:
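Roughly like this (a sketch; the names and wiring are assumptions, not the real code):

// A constant that re-runs the same API check on demand.
const refreshWikiData = () => {
	setWikiData(null); // Clear the old result so the panel shows the spinner again.
	apiFetch({ path: `/lwtv/v1/wikidata/${postId}` }).then(setWikiData);
};

// …wired to a Button (from '@wordpress/components') inside the panel:
<Button variant="secondary" onClick={refreshWikiData}>
	Re-check WikiData
</Button>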
The API returns the WikiData ID, the post ID, and the name, none of which need to be checked here, so I remove them. Otherwise it capitalizes things so they look grown up.
Then there’s a massive amount of code in the panel itself:
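A much-reduced sketch of the idea (the real panel handles a lot more cases):

<PluginDocumentSettingPanel name="wikidata-check" title="WikiData Check">
	{!wikiData && <Spinner />}
	{wikiData &&
		Object.entries(wikiData).map(([innerKey, value]) => (
			<PanelRow key={innerKey}>
				{innerKey}: {value === 'matches' ? 'matches' : 'needs review'}
			</PanelRow>
		))}
</PluginDocumentSettingPanel>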
<Spinner /> is from '@wordpress/components' and is a default component.
Now, innerKey is actually not a simple output. I wanted to capitalize the first letter and unlike PHP, there’s no ucfirst() function, so it looks like this:
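In plain JavaScript that works out to something like:

// ucfirst() equivalent: uppercase the first letter of the key before output.
const label = innerKey.charAt(0).toUpperCase() + innerKey.slice(1);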
You can find the whole block, with some extra bits I didn’t mention but I do for quality of life, on our GitHub repo for LezWatch.TV. We use the @wordpress/scripts tooling to generate the blocks.
The source code is located in folders within /src/ – that’s where most (if not all) of your work will happen. Each new block gets a folder and in each folder there must be a block.json file that stores all the metadata. Read Metadata in block.json if this is your first rodeo.
The blocks will automagically build anytime anyone runs npm run build from the main folder. You can also run npm run build from the blocks folder.
All JS and CSS from blocks defined in blocks/*/block.json get pushed to the blocks/build/ folder via the build process. PHP scans this directory and registers blocks in php/class-blocks.php. The overall code is called from the /blocks/src/blocks.php file.
The build subfolders are NOT stored in Git, because they don’t need to be. We run the build via Actions on deploy.
What It Looks Like
One of the things I want to do is have a way to say “use WikiData” or “use ours” to fill in each individual data point. Sadly sometimes it gets confused and uses the wrong person (there’s a Katherine with an E Hepburn!) so we do have a QID override, but even so there can be incorrect data.
WikiData often lists socials and websites that are defunct. Mostly that’s X these days.
Takeaways
It’s a little frustrating that I either have to do a complex ‘normal’ custom meta box with a lot of extra JS, or make an API. Since I already had the API, it’s no big, but sometimes I wish Gutenberg was a little more obvious with refreshing.
Also finding the right component to use for the sidebar panel was absolutely maddening. Every single document was about doing it with a block, and we weren’t adding blocks.
Finally, errors in JavaScript remain the worst. Because I’m compiling code for Gutenberg, I have to hunt down the likely culprit, which is hard when you’re still newish to the code! Thankfully, JJ from XWP was an angel and taught me tons in my 2 years there. I adore her.
In working on my Hugo powered gallery, I ran into some interesting issues, one of which was from my theme.
I use a Bootstrap-powered theme called Hinode. And Hinode is incredibly powerful, but it’s also very complicated and confusing, as its documentation is still in the alpha stage. It’s like the early days of other web apps, which means a lot of what I’m trying to do is trial and error. Don’t ask me when I learned about errorf, okay?
My primary issues are all about images: sizing them and filing them.
Image Sizes
When you make a gallery, logically you want to save the large image as the zoom in, right? Click to embiggen. The problem is, in Hinode, you can load an image in a few ways:
Use the standard old img tag
Call the default Hugo shortcode of {{< figure >}}
Call a Hinode shortcode of {{< image >}}
Use a partial
Now, that last one is a little weird, but basically you can’t use a shortcode inside a theme file. While WordPress has a do_shortcode() method, in Hugo you use partial calls. And you have to know not only the exact file, but whether your theme even uses partials! Some don’t, and you’re left reconstructing the whole thing.
Hinode has the shortcodes in partials and I love them for it! To call an image using the partial, it looks like this:
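The exact arguments vary with the Hinode version, and the parameter names here are a best guess (check the partial’s own argument handling), but the call is roughly:

{{ partial "assets/image.html" (dict "url" "img/gallery/example.jpg" "title" "Example image" "ratio" "1x1") }}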
That call will generate webp versions of my image, saved to the static image folder (which is a post of its own), with source sets so it’s handy and responsive.
What it isn’t is resized. Meaning if I used that code, I would end up with the actual huge ass image used. Now, imagine I have a gallery with 30 images. That’s 30 big ass images. Not good. Not good for speed, not good for anyone.
I ended up making my own version of assets/image.html (called lightbox-image.html) and in there I have this code:
{{/* $imagefile is set earlier in the partial; declare the variables we fill below. */}}
{{ $image := "" }}
{{ $imagesrc := "" }}
{{ with resources.Get $imagefile }}
  {{ $image = .Fill "250x250" }}
  {{ $imagesrc = $image.RelPermalink }}
{{ end }}
If the file is local, which is what that resources.Get call checks, it uses the file ($imagefile is the ‘path’ to the file) to make a 250×250 sized version and then grabs that new permalink to use.
This skips over all the responsive resizing, but then again I don’t need that when I’m making a gallery, do I?
Remote Image Sizes
Now let’s add in a wrinkle. What if it’s a remote image? What if I passed a URL of a remote image? For this, you need to know that on build, that Hinode code will download the image locally. Local images load faster. I can’t use the same Get; I need the remote Get. But now I have a new issue!
Where are the images saved? In the img folder. No subfolders, the one folder. And I have hundreds of images to add.
Mathematically speaking, you can put about four billion files in a folder before it’s an issue for the computers. But if you’ve ever tried to find a specific file to check in a folder that large, you’ve seriously reconsidered your career trajectory. And practically speaking, the more files, the slower the processing.
Anyone else remember when GoDaddy announced a maximum of 1024 files in a folder on their shared hosting? While I question the long-term efficacy of that, I do try to limit my files. I know that using the Get/GetRemote calls will tack on a randomized name at the end, but I’d like them to be organized.
Since I’m calling all my files from my assets server (assets.example.com), I can organize them there and replicate that in my build. And my method to do that is as follows:
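It ends up looking something like this (a hedged reconstruction; the variable names follow the earlier snippet, and the exact path handling is an assumption):

{{ if not $image }}
  {{ with resources.GetRemote $imagefile }}
    {{/* Reuse the remote URL's path as the local destination, e.g.       */}}
    {{/* https://assets.example.com/shows/foo.jpg becomes /shows/foo.jpg. */}}
    {{ $destination := (urls.Parse $imagefile).Path }}
    {{ $image = . | resources.Copy $destination }}
    {{ $image = $image.Fill "250x250" }}
    {{ $imagesrc = $image.RelPermalink }}
  {{ end }}
{{ end }}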
I know that shit is weird. It pairs with the earlier code: if you don’t create the image variable, then you know the image wasn’t local. So I started by getting the image URL from ‘this’ (that’s what the period is) as an absolute URL. Then I used the path of the URL to generate my local folder path! When I use the Copy command with the pipe, it will automatically use that as the destination.
Conclusion
You can totally make images that are resized in Hugo. While I wish that was easier to do, most people aren’t as worried as I am about storing the images in their repository, so it’s less of an issue. Also, galleries on Hugo are fairly rare.
I’m starting some work now with Hinode for better gallery-esque support, and we’ll see where that goes. Maybe I’ll get a patch in there!
One issue with Hugo is that I’m deploying it via GitHub Actions, which means every time I want to update, the site has to be totally rebuilt. Now, the primary flaw with that process is that when Hugo builds a lot of images, it takes a lot of time. About 8 minutes.
The reason Hugo takes this long is that every time it builds, it regenerates and resizes all the images. Normally that’s not a bad thing, since Hugo smartly caches everything in the /resources/_gen/ folder, so local builds don’t take half as long. But that folder isn’t synced to GitHub, which means every Action build starts from scratch.
Now, this speed is about the same whether the images are local (as in, stored in the repository) or remote (which is where mine are located – assets.example.com), because regardless it has to build the resized images. This only runs on a build, since it’s only needed for a build. Once the content is on the server, it’s unnecessary.
The obvious solution to my speed issues would be to include the folder in GitHub, only I don’t want to store any images on GitHub if I can help it (legal reasons: if there’s a DMCA, it’s easier to nuke them from my own storage). The less obvious solution is how we got here.
The Basic Solution
Here’s your overview:
Checkout the repo
Install Hugo
Run the repo installer (all the dependencies etc.)
Copy the cached files from ‘wherever’ into the checkout’s resources folder
Run the build (which will use what’s in the resources folder to speed it up)
Copy the resources folder content back down to the server
This would allow me to have a ‘source of truth’ and update it as I push code.
The Setup
To start with, I had to decide where to upload the content. The folder is (right now) about 500 megs, and that’s only going to get bigger. Thankfully I have a big VPS and I was previously hosting around 30 gigs there, so I’m pretty sure this will be okay.
But the ‘where’ specifics needed a little more than that. I went with a subdomain like secretplace.example.com and in there is a folder called /resources/_gen/
Next, what do I want to upload to start with? I went with only uploading the static CSS files, because my plan involves pushing things back down after I re-run the build.
Then comes the downloading. Did you know that there’s nearly no documentation about how to rsync from a remote source to your GitHub Actions runner? It doesn’t help that the words are all pretty generic, and search engines think “Oh, you want to know about rsync and a GitHub Action? You must want to sync from your action to your server!” No, thank you, I wanted the opposite.
While there’s a nifty wrapper for syncing over SSH in GitHub Actions, it only works one way. In order to do it the other way, you have to understand the actual issue that action is solving. The SSH sync isn’t solving rsync at all; that’s baked into the runner image (assuming you’re using Ubuntu…). No, what the action solves is the mishegas of adding in your SSH details (the key, the known hosts, etc.).
I could use that action to copy back down to the server, but if you’re going to have to solve the issue once, you may as well use it all the time. Once that’s solved, the easy part begins.
Your Actions
Once we’ve understood where we’re going, we can start to get there.
I’ve set this up in my ci.yml, which runs on everything except production, and it’s a requirement for a PR to pass it before it can be merged into production. I could skip it (as admin) but I try very hard not to, so I can always confirm my code will actually push and not error when I run it.
name: 'Preflight Checks'
on:
  push:
    branches-ignore:
      - 'production' # excludes production.
concurrency:
group: ${{ github.ref }}-ci
cancel-in-progress: true
jobs:
preflight-checks:
runs-on: ubuntu-latest
steps:
- name: Do a git checkout including submodules
uses: actions/checkout@v4
with:
submodules: true
- name: Install SSH Key
uses: shimataro/ssh-key-action@v2
with:
key: ${{ secrets.SERVER_SSH_KEY }}
known_hosts: unnecessary
- name: Adding Known Hosts
run: ssh-keyscan -H ${{ secrets.REMOTE_HOST }} >> ~/.ssh/known_hosts
- name: Setup Hugo
uses: peaceiris/actions-hugo@v3
with:
hugo-version: 'latest'
extended: true
- name: Setup Node and Install
uses: actions/setup-node@v4
with:
node-version-file: '.nvmrc'
cache: 'npm'
- name: Install Dependencies
run: npm install && npm run mod:update
- name: Lint
run: npm run lint
- name: Make Resources Folder locally
run: mkdir resources
- name: Download resources from server
run: rsync -rlgoDzvc -i ${{ secrets.REMOTE_USER }}@${{ secrets.REMOTE_HOST }}:/home/${{ secrets.REMOTE_USER }}/${{ secrets.HUGO_RESOURCES_URL }}/ resources/
- name: Test site
run: npm run tests
- name: Copy back down all the regenerated resources
run: rsync -rlgoDzvc -i --delete resources/ ${{ secrets.REMOTE_USER }}@${{ secrets.REMOTE_HOST }}:/home/${{ secrets.REMOTE_USER }}/${{ secrets.HUGO_RESOURCES_URL }}/
Obviously this is geared towards Hugo. My command npm run tests is a home-grown command that runs a build and then some tests on said build. It’s separate from the linting, which comes with my theme. Because it’s running a build, this is where I can make use of my pre-built resources.
You may notice I set known_hosts to ‘unnecessary’ — this is a lie. They’re totally needed but I had a devil of a time making it work at all, so I followed the advice from Zell, who had a similar headache, and put in the ssh-keyscan command.
When I run my deploy action, it only runs the build (no tests), but it also copies down the resources folder to speed it up. I only copy it back up on testing for the teeny speed boost.
Results
Before all this, my builds took 8 to 9 minutes.
After, they took 1 to 2 minutes, which is way better. Originally I only had it down to 4 minutes, but I was using wget to test things (and that’s generally not a great idea; it’s slow). Once I switched to rsync, it’s incredibly fast. The Hugo build is still the slowest part, but it’s around 90 seconds.
You need to search all the links in a post and, if a link points to a specific site (wikipedia.com), add it to an array that you output at the bottom of the post as citations. To do this, you will:
Search the content for every single link tag (<a href=.... )
If the link contains our term (Wikipedia), put it in the array
If the link also has a title, we’ll use that
If we do all that, our output looks something like this:
Source: https://en.wikipedia.com/wiki/foobar
Source: Foobar2
While you can do this with regex, you can also use the (new) HTML Tag Processor class to do it for you.
RegEx
As I mentioned, you can do this with regex (I’ll spare you the drama of coming up with this in the first place):
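Something in this vein (an illustrative sketch, not the exact snippet from the site):

// Loop over every link in the content and print a citation for the Wikipedia ones.
$content = get_the_content();
preg_match_all( '/<a\s[^>]*href="([^"]+)"[^>]*>/i', $content, $links );

foreach ( $links[0] as $i => $tag ) {
	$url = $links[1][ $i ];
	if ( ! str_contains( $url, 'wikipedia.com' ) ) {
		continue;
	}
	// Use the link title if the editor added one; otherwise fall back to the URL.
	$title = preg_match( '/title="([^"]*)"/i', $tag, $t ) ? $t[1] : $url;
	echo '<p>Source: <a href="' . esc_url( $url ) . '">' . esc_html( $title ) . '</a></p>';
}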
This is a very oversimplified version, but the basis is sound. It will loop through the whole post, find everything with a URL, check if the URL includes wikipedia.com, and output a link to it. If the editor added in a link title, it will use that, and if not, it falls back to the URL itself.
But … a lot of people will tell you regex is super powerful and a pain in the ass (it is). And WordPress now has a better way to do this that’s both more readable and more extensible.
HTML Tag Processor
Let’s try this again.
What even is this processor? Well, it’s basically building out something similar to DOM nodes for all the HTML in a WordPress post and letting us edit them. They’re not really DOM nodes, though; they’re a weird little subset. But if you think of each HTML tag as a ‘node’, it may help.
To start using it, we’re going to ditch regex entirely, but we still want to process the tags from the whole content, so we’ll ask WordPress to use the new class to build out our tags:
$content_tags = new WP_HTML_Tag_Processor( get_the_content() );
This makes the object, which also lets us use all of its helper methods. In this case, we know we want URLs, so we can use next_tag() to get things:
$content_tags->next_tag( 'a' );
This finds the next tag matching our query of a which is for links. If we were only getting the first item, that would be enough. But we know we have multiple links in posts, so we’re going to need to loop. The good news here is that next_tag() in and of itself can keep running!
while( $content_tags->next_tag( 'a' ) ) {
// Do checks here
}
That code will actually run through every single link in the post content. Inside the loop, we can check if the URL matches using get_attribute():
if ( str_contains( $content_tags->get_attribute( 'href' ), 'wikipedia.com' ) ) {
// Do stuff here
}
Since the default of get_attribute() is null if it doesn’t exist, this is a safe check, and it means we can reuse it to get the title:
if ( ! is_null( $content_tags->get_attribute( 'title' ) ) ) {
// Change title here
}
And if we apply all this to our original code, it now looks very different:
Example:
// Array of citations:
$citations = array();
// Process the content:
$content_tags = new WP_HTML_Tag_Processor( get_the_content() );
// Search all tags for links (a)
while( $content_tags->next_tag( 'a' ) ) {
// If the href contains wikipedia, build our array:
if ( str_contains( $content_tags->get_attribute( 'href' ), 'wikipedia.com' ) ) {
$current_citation = [
'url' => $content_tags->get_attribute( 'href' ),
'title' => $content_tags->get_attribute( 'href' ),
];
// If title is defined, replace that in our array:
if ( ! is_null( $content_tags->get_attribute( 'title' ) ) ) {
$current_citation['title'] = $content_tags->get_attribute( 'title' );
}
// Add this citation to the main array:
$citations[] = $current_citation;
}
}
// If there are citations, output:
if ( ! empty( $citations ) ) :
// Output goes here.
endif;
Caveats
Since we’re only searching for links, this is pretty easy. There’s a decent example of looking for multiple items (say, by class and span), but if you read it, you realize pretty quickly that you’d be doing basically the same thing.
If you wanted to do multiple loops, though, looking for all the links but also all the spans with the class ‘wikipedia’, you’d probably start like this:
while ( $content_tags->next_tag( 'a' ) ) {
// Process here
}
while ( $content_tags->next_tag( 'span' ) ) {
// Process here
}
The problem is that you would only end up looking for any spans that happened after the last link! You could write a more complex search with if checks, but those are all risky, as you might miss something. To work around this, you’ll use set_bookmark() to set a bookmark to loop back to:
$content_tags = new WP_HTML_Tag_Processor( get_the_content() );
$content_tags->next_tag();
// Set a bookmark:
$content_tags->set_bookmark( 'start' );
while ( $content_tags->next_tag( 'a' ) ) {
// Process links here.
}
// Go back to the beginning:
$content_tags->seek( 'start' );
while ( $content_tags->next_tag( 'span' ) ) {
// Process span here.
}
I admit, I’m not a super fan of that solution, but by gum, it sure works!