Hey Hugo community,
I’m Carlos. I recently built SetupScore (setupscore.com) entirely with Hugo and wanted to share the experience.
SetupScore aggregates 15–30 expert reviews per product to find where reviewers actually agree on home office gear (for now: monitors, keyboards, and headphones). Think Rotten Tomatoes, but for desk gear.
The site currently has 50+ review pages, guide pages, and structured data, all statically generated. Hugo made it surprisingly manageable. A few things that worked well:
- Structured content with front matter - each review has scoring data, pros/cons, specs, and source metadata, all driven by TOML/YAML
- Build speed - full site builds in under a second, even with 60+ detailed pages
- Taxonomy and list templates - category pages and filtering basically came for free
- SEO out of the box - Hugo’s built-in templates for Open Graph, structured data, sitemaps, and canonical URLs gave us a solid SEO foundation with much less custom work

Still iterating on things like search and dynamic filtering, but Hugo’s been a rock-solid foundation for a content-heavy, data-driven site.
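For anyone curious what "structured content with front matter" can look like, here's a rough TOML sketch of a review page. All field names here are illustrative guesses, not SetupScore's actual schema:

```toml
title = "Example Monitor Review Consensus"
date = 2024-05-01
categories = ["monitors"]
pros = ["Excellent color accuracy", "Built-in KVM switch"]
cons = ["Mediocre contrast"]

[scoring]
consensus = 8.6     # aggregated score across expert reviews
source_count = 22   # number of reviews analyzed

# one entry per expert source feeding the consensus
[[sources]]
name = "RTINGS"
rating = 8.9
```

Templates can then read these params with `.Params.scoring.consensus` and range over `.Params.sources`, keeping all the review data out of the page body.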
Happy to answer any questions about the architecture or share specifics on how I structured the content. And always open to feedback — if anyone spots something off on the site, I’d appreciate hearing it. 
Cheers,
Carlos
Hi, nice work!
I found a little problem with the styles:
Oops! Well spotted, thanks. That comment is rendering infinitely… I must be parsing some specific HTML embedded in the comment itself.
Thank you, now it’s fixed!
Hi! Great-looking website! Can you tell us more about the backend part, please?
How do you fetch reviews from big tech sites like TechRadar? Do you scrape them automatically, or copy-paste the content manually?
Also, how do you scrape Reddit threads?
Hi,
Well, it depends on the source. For expert reviews (TechRadar, RTINGS, PCMag, etc.), I use Brave Search to discover relevant sources for each product, then extract the content from those pages programmatically. Same for YouTube, where I pull transcripts with groq-whisper. For Reddit, I use the Reddit API to discover and extract relevant threads and comments automatically too.
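The discovery step boils down to "search, then keep only trusted sources." Here's a minimal sketch of that filtering idea — the domain whitelist and result shape are my assumptions, not the actual pipeline:

```python
from urllib.parse import urlparse

# Hypothetical whitelist of trusted expert-review domains (illustrative only)
TRUSTED_DOMAINS = {"techradar.com", "rtings.com", "pcmag.com"}

def filter_expert_sources(search_results):
    """Keep only results whose host is on the trusted-domain whitelist.

    `search_results` is a list of dicts with a "url" key, roughly the
    shape a web-search API might return.
    """
    kept = []
    for result in search_results:
        host = urlparse(result["url"]).netloc.lower()
        # strip a leading "www." so "www.rtings.com" matches "rtings.com"
        host = host.removeprefix("www.")
        if host in TRUSTED_DOMAINS:
            kept.append(result)
    return kept

results = [
    {"url": "https://www.rtings.com/monitor/reviews/some-monitor"},
    {"url": "https://randomblog.example/best-monitors"},
    {"url": "https://www.techradar.com/reviews/some-monitor"},
]
print([r["url"] for r in filter_expert_sources(results)])
```

In practice you'd feed this the raw hits from the search API, then hand the survivors to the content extractor.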
Once all the sources are collected, the pipeline analyzes each one (sentiment, ratings, bias, etc.) using Moonshot and Claude, then cross-references everything to find where reviewers agree and disagree. That consensus becomes the scores and summaries you see on the site.
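The cross-referencing idea can be sketched numerically: treat "agreement" as low spread across per-source ratings. This is a toy aggregation under my own assumptions (0–10 scale, a made-up spread threshold), not the actual LLM-driven analysis:

```python
from statistics import mean, stdev

def consensus(scores, agreement_threshold=1.0):
    """Summarize per-source ratings (0-10 scale) into a consensus.

    Reviewers "agree" when the spread (standard deviation) of their
    ratings is below the threshold; the threshold is illustrative.
    """
    avg = round(mean(scores), 1)
    spread = stdev(scores) if len(scores) > 1 else 0.0
    return {"score": avg, "agree": spread < agreement_threshold}

# e.g. ratings pulled from several expert reviews of the same monitor
print(consensus([8.5, 8.9, 9.0, 8.4]))  # tight cluster -> agreement
print(consensus([5.0, 9.0, 6.5]))       # wide spread -> disagreement
```

The real pipeline presumably compares claims and sentiment, not just numbers, but the shape is the same: collapse many per-source signals into one score plus an agree/disagree flag.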
So it’s fully automated, from discovery to the final page. I just review the output before publishing to make sure it all makes sense.
Incredible! Different tools, APIs, and the overall approach looks complex.
Can I ask you a few more questions? Sorry if they sound stupid. :)
- Are you a full-stack web developer? Which language do you mostly use, e.g. JS or Python?
- How long did it take to build your website, approximately? Your approach of automating everything looks like a complex solution to me.