Need help with HTML/CSS optimization

Hello all,

First of all this is a non-Hugo question.

The question is related to HTML/CSS optimization but about a site I generate using Hugo. Hopefully someone knowledgeable about these optimizations can help me out.

Recently I was bitten by the bug to make my site as fast as possible! After countless hours of figuring out how to do HTML/CSS/JS (probably) the right way, I was able to get scores of 91/100 (Mobile) and 95/100 (Desktop) on Google PageSpeed Insights.

There are now 2 things that I still believe are in my hands for improvement… but I don’t know how:

  1. How to Prioritize Visible Content? It says: None of the final above-the-fold content could be rendered even with the full HTML response. [link]
  2. I have a script called responsive-nav-orig.js that I have to load without async or defer tag, else the nav bar shows up stuck on mobile devices. I got that script from

Here is my Hugo theme refined [source] that I use for my site [source].

Any help will be appreciated!

To prioritise above-the-fold content, you’d analyse all your CSS rules and inline only those rules that affect the styling of the page above the fold. All the other CSS would be put in a separate .css file, which is then loaded after page load.
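A minimal sketch of that pattern (the selectors and the `rest.css` file name are placeholders for your own):

```html
<head>
  <!-- Critical, above-the-fold rules inlined directly in the head -->
  <style>
    body { margin: 0; font-family: serif; }
    .site-header { background: #333; color: #fff; }
  </style>
  <!-- The rest of the CSS, loaded without blocking first render.
       media="print" + onload is a common trick; noscript is the fallback. -->
  <link rel="stylesheet" href="/css/rest.css" media="print" onload="this.media='all'">
  <noscript><link rel="stylesheet" href="/css/rest.css"></noscript>
</head>
```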

If the script cannot be deferred, then you shouldn’t defer it. :slight_smile: Or you could, but then you might end up with an ugly flash of content or something broken on the page. A somewhat quicker website isn’t worth that hassle.

If you want to optimise your page load time, I’d not look at the two points you mentioned above. You can make much bigger gains by dropping the Google Fonts. A whopping 75% of your page size consists of fonts, and they account for 31% of the HTTP requests.

In comparison, your CSS is 12kb while the fonts are 476kb (see the speedtest link below for where I got those values). So CSS optimisation is just a tiny part of the page.

You may also want to consider reducing the JavaScript. 28% of requests are for JavaScript, and they make up 107kb. That’s a lot for a straightforward blog without visual effects or animations, I think.

I’m not sure what all that code does (I block JavaScript by default), but I see that you also request jQuery. That’s a relatively big library for perhaps a handful of tasks. That JS file is also requested from a third-party domain, I see, which creates an additional DNS lookup and SSL connection for your site.

I see you also load a bunch of files for remarkbox. Perhaps you can hide that behind a button that says “Show comments and join the conversation” or something. Since normally only a fraction of people actually comment (say 5%), it’s a shame that 95% of other visitors have to download (and process) all the JavaScript, CSS, and images that remarkbox uses.
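A sketch of that “load on demand” idea (the widget URL is a placeholder for whatever script the comment service requires):

```html
<button id="show-comments">Show comments and join the conversation</button>
<div id="comments"></div>
<script>
  // Load the comment widget only when a reader actually asks for it,
  // so the 95% who never comment download nothing extra.
  document.getElementById('show-comments').addEventListener('click', function () {
    var s = document.createElement('script');
    s.src = 'https://example.com/comment-widget.js'; // placeholder URL
    document.body.appendChild(s);
    this.remove();
  }, { once: true });
</script>
```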

By the way, here’s a website speed test I performed on your site: speedtest.

By the way, this speedtest tests your website without JavaScript. It saves around 2 seconds (total load time), and here you can see the filmstrip comparison between both.

(I’m not saying that JavaScript is necessarily bad. Just use it as little as possible. :slight_smile: )


Thanks for taking time to write that detailed reply!

That was back in Oct 2017, and I had been gradually tinkering and optimizing my site.

Here’s the latest result: webpagetest - 2018/04/26 :sunglasses:

Older results for comparison:


I still have a lot to learn. So for now, I just call the CSS as usual, not paying any attention to what Google PageSpeed Insights (let’s call it GPSI for short from now on) says, because in my experience its suggestions look like crap compared to what other speed-test tools give.

Thanks. That’s what I learned too… Blindly deferring the scripts to get “good” GPSI scores resulted in bad UX.

Done! I now just serve the fonts locally instead… what’s more, I even subset my fonts, so a woff2 that would originally be 150KB now comes in under 20KB :smile:
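For anyone curious, subsetting can be done with `pyftsubset` from fonttools; the font file name and unicode ranges below are just examples — adjust them to the characters your site actually uses:

```shell
# pip install fonttools brotli   (brotli is needed for woff2 output)
# Keep only basic Latin plus curly quotes and the ellipsis.
pyftsubset Iosevka.ttf \
  --unicodes="U+0020-007E,U+2018-201D,U+2026" \
  --flavor=woff2 \
  --output-file=Iosevka-subset.woff2
```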

This talk gives a very good understanding of web fonts, limitations, optimizations, etc:

This is one place where I need to improve.

I am using responsive-nav-orig.js wholesale, without any trimming, because I don’t know JavaScript. I am using it only to get the collapsible menu on small-screen displays… that’s it. Once I have it trimmed, I’ll save quite a bit there.

I use that for the dynamic TOC generation in sidebar on pages like Optimize your FontAwesome ❚ A Scripter's Notes. That TOC is seen when the screen is wide enough (do unblock JS to see what I mean :)).

I used Disqus for a while, then Remarkbox, then went back to Disqus. Now I use none of those… I just use Webmentions, with a small script that gives a sort-of live update of Webmentions on my site.

Thanks for that tip. The results definitely look better now (the one I posted in the very beginning).

Non-Hugo help request… but would be cool if someone can help trim that responsive nav script.


Let me take a look.

I would personally give up on your scroll function (coloring of current element). This is a “nice to have” not worth loading the full jQuery library.

I can see you are also using jQuery to hide some toc items, which you could do with css.
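Something along these lines would do it in pure CSS (the `.toc` selector and the 768px breakpoint are placeholders for your own):

```css
/* Hide the sidebar TOC on narrow screens instead of toggling it in jQuery. */
@media (max-width: 768px) {
  .toc { display: none; }
}
```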

For your responsive menu:
There are a lot of things you can do by only adding/removing classes with JS and letting CSS handle the animation (you currently don’t even have any animation on your responsive menu).
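A minimal sketch of that division of labour — the class names are made up, and the only JS is a one-line class toggle:

```html
<button id="menu-toggle">Menu</button>
<nav id="menu" class="menu"><!-- menu items --></nav>
<script>
  // All the JS does is flip a class; CSS owns the animation.
  document.getElementById('menu-toggle').addEventListener('click', function () {
    document.getElementById('menu').classList.toggle('menu--open');
  });
</script>
<style>
  /* Collapse/expand via max-height so the change can transition. */
  .menu { max-height: 0; overflow: hidden; transition: max-height 0.3s ease; }
  .menu--open { max-height: 20em; }
</style>
```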

Keep investigating :slight_smile:


Your site is fairly lightweight, which is good. Chrome Dev Tools estimates 200 KB as the total download for your homepage.

You’re loading nine font files for some reason, spanning six different typefaces (you have separate bold or italic files for some). Some are woff files, while others are woff2. You should only be sending woff2 to a visitor using a recent version of Chrome – just use the standard @font-face src syntax and the browser will request the woff2.
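That standard syntax looks like this — font name and paths are placeholders; with woff2 listed first, browsers that understand it request only that file, and older ones fall back to the woff:

```css
@font-face {
  font-family: "Iosevka";
  src: url("/fonts/iosevka.woff2") format("woff2"),
       url("/fonts/iosevka.woff") format("woff");
  font-display: swap;
}
```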

Strangely, you’ve gzipped your woff files (there are three of them, all for Linux Biolinum). Was that intentional? woffs are already compressed, using DEFLATE, the same codec as gzip. gzipping woffs doesn’t usually pay off, and it adds more work for the browser, which has to decode them twice before getting to the actual font file decode (the glyf tables). You can optimize your woffs with Zopfli-like compression tools, and they remain woffs. There’s no need to wrap them in gzip or anything else.

Your CSS is small at 7 KB. Still, much of it might be unused. I’d analyze it with uncss or purify-css (both in Node/npm), remove the unused code, and put the remaining CSS in the head of your HTML files. Having separate CSS and JS files is a terrible antipattern that took hold early in this era of web development and has resulted in a massive waste of bandwidth, electricity, and users’ time. It is almost never the case that the benefits of the possible caching of those separate files outweigh the benefits of treeshaking the CSS and JS (especially the CSS), and inserting it into the head of the HTML file. This is why the Google AMP project requires that all CSS be in the head and doesn’t allow separate CSS files. (And still you’ll see tons of unused CSS on AMP pages – early 21st-century web developers are terrible at web development.)
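For reference, both tools have simple CLIs (file names here are placeholders; check each tool’s docs for the options your setup needs):

```shell
# uncss loads a page, runs its JS, and prints only the CSS actually used
npx uncss index.html > used.css

# purify-css matches a stylesheet against a set of content files
npx purifycss css/refined.css public/**/*.html --min --out css/refined.min.css
```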

I like the coloring of the current element. But you are right, getting rid of loading even the “slim” jQuery would help a lot there. I will try out the tocbot script I found a while back, which doesn’t depend on jQuery.

Thanks. I’ll keep looking… I need to look into responsive menu design online.

Thanks :slight_smile:. I reached here after a lot of iterations and learnings… at one point, my site was 1MB+! But I needed to have those fonts too, and that’s when I learned about subsetting them.

Yeah, I like specific fonts for specific faces. Sans serif for headings, serif for body, a different serif for body italics (because the default serif italics doesn’t appeal to me), and then the awesome Iosevka for monospace.

Only Linux Biolinum is an oddity (woff) there… it somehow doesn’t convert from ttf to woff2… pyftsubset, the tool I use to subset woff/woff2 from ttf, fails when doing so.

That’s what I do.

Netlify should be doing that… all three Linux Biolinum files are Zopfli-compressed woffs. But that’s some good investigation… how do you know those are gzipped? Firefox’s Inspector doesn’t tell me anything like that:


That’s pretty cool too! How did you find that out?

I’ll look into those tools. Thanks!

On a side note, it looks like site performance optimizations are orthogonal to security good practices? I started specifying a Content-Security-Policy (CSP) for my site, and based on the recommendations I read, inline CSS/JS is not recommended. That’s interesting :slight_smile:

Thanks for your in-depth analysis!

How I found out the size of the CSS was to just use Chrome Developer Tools and look at the files downloaded. There was only one CSS file and it was 7 KB. If you mean the comment about unused CSS, I didn’t know whether or not you had a lot of unused CSS – I was just suggesting that you check. I just now checked using Chrome’s Coverage tool, and 70.5% of your CSS is unused on the homepage. Some of this unused code might be used on other pages, like individual post pages, but your site seems fairly straightforward so I suspect that a lot of the CSS is never used by any page.

The gzipped woffs is turning out to be an interesting issue. Here’s what Chrome shows me:

It doesn’t seem to happen in Firefox. Your screenshot doesn’t tell us much because it’s going to be the Content-Encoding header field, and the screenshot excludes everything before Date (it’s alphabetical). I checked in Firefox 59, and the woffs don’t appear to be gzipped. Here’s what’s different about Firefox and Chrome with those woff files:

  1. In Chrome, the Response header includes a content-encoding field, and the value is gzip. In Firefox, the response does not include a content-encoding field at all.
  2. So I decided to look at the Request header fields. Chrome sends an accept-encoding field with a value of gzip, deflate, br, even though it’s requesting a woff. Firefox sends an accept-encoding field with a value of identity. (I don’t know what “identity” means or does here.)
  3. Chrome also sends an accept field with a value of */*, which means it will take anything. I’ve always wondered why Chrome does this (I’ve seen it for years). Firefox declares a much different accept value: application/font-woff2;q=1.0,application/font-woff;q=0.9,*/*;q=0.8

It gets to */* eventually, but first it asks for actual font files, unlike Chrome.

So the server/Netlify is responding differently to Chrome’s header vs. Firefox. Maybe they gzip anything and everything if the accept-encoding request header includes gzip. Since Chrome has much more market share than Firefox, they might be wasting a lot of CPU gzipping things that don’t need to be gzipped.
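You can reproduce the difference without either browser by replaying each one’s Accept-Encoding header with curl (the woff URL is a placeholder):

```shell
# Chrome-style request: advertises gzip/deflate/br
curl -sI -H 'Accept-Encoding: gzip, deflate, br' \
  https://example.com/fonts/biolinum.woff | grep -i content-encoding

# Firefox-style request for fonts: identity (i.e. no encoding)
curl -sI -H 'Accept-Encoding: identity' \
  https://example.com/fonts/biolinum.woff | grep -i content-encoding
```

If the first prints `content-encoding: gzip` and the second prints nothing, the server is indeed keying only off the request header.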

On the security topic of CSPs and inlined CSS/JS, I’ve seen that recommendation, and I disagree with it. It’s more secure to have one file with a concise amount of CSS and JS in the head than to have CSS and JS coming from other servers, which is often the case. If the extra files are coming from your server, the same server as the HTML, then it’s a wash. Note also that people mean two different things when they say “inlined CSS”. CSS can be placed in the body, with the HTML elements they apply to, right there on the spot. That’s “inlining” to some people. The other use is to place CSS in the head, which some people also call “inlining”. I’ve never been sure which usage the CSP specs use. And like I said, Google requires all CSS to be in the head for AMP pages (and less than 50 KB of CSS total) – it doesn’t seem like they think it’s a security problem.


Thanks! After your first comment, I had actually used one of those tools to figure out the unused CSS. I trimmed about 0.5 to 1KB off that 7.x KB refined.css using it.

Now whatever 6.x KB remains is, believe it or not :slight_smile:, used almost in its entirety across my whole site. I might be able to trim off another 0.5KB if I invest more time. The remaining redundancies are mainly rules for tags like h6, which I don’t (yet?) have on my site. I would rather keep rules like those… just in case I have a deeply nested subsection in one of my posts in the future.

Actually, it seems to happen in Firefox too. Looking closer at the image I posted earlier, I see a difference between the Transmitted and Size columns… the transmitted bytes are always less than the Size… which implies that they are transmitted in some compressed format?

Thanks for the further detailed analysis of the response headers.

I have no background in web development or web security. Just reading the CSP recommendations, the implication seems to be that it’s “easier”? to inject inline scripts than to inject whole JS files under a domain. At the moment, I have a balance of inline scripts and external script files. For the inline scripts, I calculate the sha256 hash and specify it under the CSP script-src directive. And my site security is looking great!
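For reference, that hash is computed over the inline script’s exact text (everything between the script tags, whitespace included); the script body here is just an example:

```shell
# sha256 of the inline script's body, base64-encoded for the CSP header.
HASH=$(printf '%s' "console.log('hi');" | openssl dgst -sha256 -binary | openssl base64 -A)
echo "Content-Security-Policy: script-src 'self' 'sha256-${HASH}'"
```

If the inline script changes by even one byte, the hash must be recomputed, or the browser will refuse to run it.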