I’m working on tools to build a ‘photo-a-day’ website, to display a collection of 365 images taken over the course of a year. Each image has a page of its own; there are also 12 ‘month pages’, which show thumbnails for each month’s photos, plus a homepage.
My current approach is to use a `gulp` build script that reads the EXIF/IPTC data from the photographs and uses it to generate a ‘day page’ for each picture. `hugo` then builds the site from these pages. The `gulp` script also generates copies of each photograph at several different resolutions, ready for display.
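For concreteness, the day-page step boils down to something like the sketch below: take the parsed EXIF/IPTC fields for one photo and emit a Markdown file with front matter. The field names (`DateTimeOriginal`, `ObjectName`, `Caption-Abstract`) follow exiftool-style JSON output; the exact tags vary by camera and by whichever parser you use, so treat this as illustrative rather than my actual script.

```javascript
// Sketch: turn EXIF/IPTC fields (as parsed from one photo) into the
// text of a Markdown 'day page' with YAML front matter.
function dayPage(exif) {
  // EXIF dates look like "2023:06:15 08:12:00"; convert to ISO 8601
  // so hugo can parse the `date` field.
  const [d, t] = exif.DateTimeOriginal.split(' ');
  const date = d.replace(/:/g, '-') + 'T' + t;
  return [
    '---',
    `title: "${exif.ObjectName || 'Untitled'}"`,
    `date: ${date}`,
    `image: "${exif.FileName}"`,
    '---',
    '',
    exif['Caption-Abstract'] || '',
  ].join('\n');
}

// Example: generate one page (in the real script this runs once per photo
// and the result is written to content/day/<date>.md).
console.log(dayPage({
  FileName: 'IMG_0123.jpg',
  DateTimeOriginal: '2023:06:15 08:12:00',
  ObjectName: 'Morning light',
  'Caption-Abstract': 'First light over the harbour.',
}));
```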
I’m aware that `hugo` has image-processing tools, so I could probably replace the image-resizing part of the `gulp` script with native `hugo` features. However, as far as I can see, `hugo` doesn’t support any kind of pre-processing step that would let me iterate over all the available photographs and write a Markdown file for each one (with front matter based on the EXIF/IPTC data in the photo).
My impression from reading the documentation and the forums is that this kind of pre-processing is not The Hugo Way, and that doing it with a separate build tool such as `gulp` is in fact the preferred alternative. But I wanted to check that this impression is correct, and that there isn’t some clever way to use `hugo` for this task and eliminate `gulp` from my build process.
Comments? Suggestions?