I wanted to plunder the hive mind for views on simple/basic strategies for testing differences between Hugo outputs.
The scenario I’m imagining is one in which a small/medium site is updated (content, resources, Hugo version…) and you want to check how the output compares. Something that is too big to check manually, but that doesn’t merit the time, cost, and effort of a large testing solution.
I thought it might be broadly helpful to get a sense of what people currently do when they want to satisfy themselves that changes are non-breaking, without disappearing down a testing rabbit-hole. Hopefully that makes sense.
Yeah, I think there are tools like grouse that make that kind of approach easier. But as you suggest, it may not be the most efficient thing for anything above a certain size or if a user is running Hugo through some external platform.
It was perhaps a silly question as this kind of thing is very context dependent. I just thought it might be useful for people to read about some of the approaches that others are using to keep ahead of potential breaking changes when performing updates.
I’ve been thinking about doing this too, with testing frameworks like Jest or Playwright.
I’d be looking at writing targeted tests, not just comparing entire pages (although you can do that with these frameworks). Checking titles, meta descriptions, Open Graph tags, links, etc.
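For anyone who doesn’t want to set up a JS test runner, even a rough shell script can do this kind of targeted check against the built output. This is only a sketch of the idea, not a Jest or Playwright example; the page path and expected values are invented:

```bash
#!/usr/bin/env bash
# Quick-and-dirty stand-in for the targeted checks described above: assert that
# specific elements in a built page still contain the values we expect.
set -euo pipefail

page="public/about/index.html"   # hypothetical page in the built site

check() {
  # check <label> <pattern>: report whether the pattern appears in the page
  if grep -q "$2" "$page"; then
    echo "PASS: $1"
  else
    echo "FAIL: $1"
  fi
}

check "title"            "<title>About Us</title>"
check "meta description" '<meta name="description" content="Who we are"'
check "og:title"         '<meta property="og:title" content="About Us"'
```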
@ju52 mentioned Git, where you can quickly see a set of diffs. Good idea.
@lkhrs mentioned targeted tests to reduce noise. Also a good idea.
A very simple approach, using targeted tests with diffs to compare, might look like this (a rough command-line sketch follows the list):

1. Create a section of your site named “tests”. Each page in this section is designed to “stay the same” from one build to the next, regardless of content changes. That means these pages should not include last-modified dates, or references to any pages outside of the “tests” section.
2. Use configuration directories instead of a single configuration file.
3. In your default configuration, use build options to exclude the “tests” directory.
4. Create a “testing” configuration directory, using build options to include the “tests” directory. You could also add configuration options to remove anything that you won’t be testing.
5. Clear your public directory, build your site with “hugo -e testing”, and save the public directory somewhere. This will be your “gold” reference for future comparisons.
6. To run a test, build your site using the testing environment (hugo -e testing), then use the diff command to compare before and after, limited to the files generated in public/tests.
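To make the above a bit more concrete, here is an untested sketch of what it might look like from the command line. The file names, paths, and cascade-based build options are assumptions on my part, so check them against the Hugo docs for your version:

```bash
#!/usr/bin/env bash
# Untested sketch of the steps above. Adjust file names and paths to your site.
set -euo pipefail

# Step 1: a "tests" section whose pages should render identically from build
# to build (no lastmod, no links outside /tests/).
mkdir -p content/tests
cat > content/tests/sample.md <<'EOF'
---
title: "Sample test page"
date: 2023-01-01
---
Stable content only.
EOF

# Steps 2-3: configuration directories. This assumes your existing site
# configuration already lives in config/_default/config.toml; the cascade keeps
# the "tests" section out of normal builds.
mkdir -p config/_default config/testing
cat >> config/_default/config.toml <<'EOF'

[[cascade]]
  [cascade._build]
    render = 'never'
    list = 'never'
    publishResources = false
  [cascade._target]
    path = '/tests/**'
EOF

# Step 4: a "testing" environment that publishes the "tests" section again.
# (Assumes this cascade replaces, rather than merges with, the default one.)
cat > config/testing/config.toml <<'EOF'
[[cascade]]
  [cascade._build]
    render = 'always'
    list = 'always'
    publishResources = true
  [cascade._target]
    path = '/tests/**'
EOF

# Step 5: build once and save the "gold" reference.
rm -rf public
hugo -e testing
cp -a public/tests gold-tests

# Step 6: after updating content, templates, modules, or Hugo itself, rebuild
# and compare only the files generated under public/tests.
rm -rf public
hugo -e testing
if diff -r gold-tests public/tests; then
  echo "No differences under /tests/"
else
  echo "Differences found; review the diff output above"
fi
```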
Here’s a complete working example:
```bash
git clone --single-branch -b hugo-forum-topic-41498 https://github.com/jmooring/hugo-testing hugo-forum-topic-41498
cd hugo-forum-topic-41498
```
See the bash script (test.sh) in the root of the project directory. Make sure you declare the project_dir in the main() function before testing.
Personally, I would use Netlify branch deploys and enable their split testing feature. Pretty much zero config, and it splits traffic at the network edge (see “Split Testing” in the Netlify Docs).