I would like to test how various strategies impact Hugo's build times. Is there a nice way to "hook" into Hugo and get a list of how long certain things take to run? I can read and I am eager to learn, so link drops are OK.
Sample test:

I wonder which of the following strategies saves or costs more time. A configuration value is set in `data/path/to/file.toml`:

- method a) load it every time via `{{ site.Data.path.to.file.variable | default "value" }}`
- method b) load it once and return it via `partialCached`
- method c) load it once, save it in a Scratch variable, and reuse that

I run Hugo with each method and 100 calls to get that value, and I get some form of output with a time per method.
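As a rough sketch of what the three methods could look like in a layout (the data file path, key, and partial name here are made up for illustration):

```go-html-template
{{/* method a: read the data file on every call */}}
{{ $a := site.Data.path.to.file.variable | default "value" }}

{{/* method b: a partial rendered once and cached;
     "lib/get-variable.html" is a hypothetical partial that returns the value */}}
{{ $b := partialCached "lib/get-variable.html" . }}

{{/* method c: store the value once in a Scratch and reuse it */}}
{{ if not (.Scratch.Get "variable") }}
  {{ .Scratch.Set "variable" (site.Data.path.to.file.variable | default "value") }}
{{ end }}
{{ $c := .Scratch.Get "variable" }}
```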
Does Hugo have some form of hooks that a separate Go script could use to play around with this? The worst case is of course to let a shell script run a test site multiple times and collect an average.
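A minimal sketch of that worst-case shell approach: a POSIX function that runs an arbitrary build command N times and prints the average wall-clock time. The `hugo` invocation in the usage comment is a placeholder; adjust it to your site.

```shell
# avg_time: run the given command N times and print the average wall-clock seconds.
# Usage: avg_time 5 hugo --quiet --destination /tmp/bench-out
avg_time() {
  n="$1"; shift
  total=0
  i=0
  while [ "$i" -lt "$n" ]; do
    start=$(date +%s)
    "$@" > /dev/null          # the command under test, e.g. a hugo build
    end=$(date +%s)
    total=$((total + end - start))
    i=$((i + 1))
  done
  echo "average over $n runs: $((total / n))s"
}
```

Whole seconds are coarse for fast builds; on GNU systems, `date +%s.%N` together with `bc` gives sub-second resolution.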
That tag is weirdly on my "I don't like you" list. It tells me the following layout file is not cacheable (0), but I am not sure if it's worth pestering the team with a bug report:
Of course, the timing output on this param might help or even be exactly what I want… I will play around with it. Putting a "method" into a layout and checking its average timing value might be what I need.
Yeah, I doubled my site build time from about 1 minute to 2 minutes by fragmenting everything into modules (up to 3 levels deep), each with its own namespaced data configuration. It feeds my OCD to have everything in the right place, but it costs dearly in speed.
That's all there is. The code above does not contain a single variable, yet Hugo says it's not something to cache. I would expect 100, but `hugo --templateMetrics` says 0.
I sometimes use a Go benchmark to build the docs site. I suppose you could do the same thing to test changes to a site. Here’s how:
Create a Go test file in the root of the Hugo code base as seen below. Set the flags as needed, but mainly set the `--source` flag to the path of your site:
```go
package main

import (
	"testing"

	"github.com/gohugoio/hugo/commands"
)

func BenchmarkDocsSite(b *testing.B) {
	flags := []string{
		"--quiet",
		"--source=docs",
		"--renderToMemory",
	}
	for i := 0; i < b.N; i++ {
		r := commands.Execute(flags)
		if r.Err != nil {
			b.Fatal(r.Err)
		}
	}
}
```
Run the benchmark and save the output to a file. I use tee for this:
```shell
go test -bench=. -benchmem -count=5 . | tee baseline.txt
```
Make changes to your project and rerun the benchmark, saving to another file:
```shell
go test -bench=. -benchmem -count=5 . | tee variant-1.txt
```
Install benchstat if you don't have it already (don't do this from within the Hugo project directory, or it may update your go.mod file):