Jacques Mattheij

Technology, Coding and Business

The Fastest Blog In The World

Update: James Hague, aka Dadgum, was inspired by this to do some work on his blog, and I’m happy to report that his blog is now the fastest blog in the world: 4K transferred and 1 request per page, load time < 75 ms. If you have a blog that’s even faster than that, let me know.

– Read on for the original post –

I positively hate bloat in all its forms. Take this BBC News article: the text is about 2,300 bytes, but the page loads 1.2 million bytes of data. That’s more than a megabyte for what probably should not be more than several tens of kilobytes. (Edit: this post used the Google homepage as an example before, which was a poor choice because the Google homepage does a lot under the hood that is not visible to the user; personally, I actually liked the really simple old page.)

Bloat, to me, exemplifies the wastefulness of our nature: consuming more of the resources available to us than we should. A typical blog post on most blogging platforms will easily load a megabyte, even if the post itself is just a few kilobytes of text; the words themselves are usually less than a few kilobytes, even for the largest posts. Imagine a 1 gram letter arriving in an envelope that weighed a couple of pounds!

So, when the time finally came to attack the issue of slow re-generation of these pages under ‘Octopress’, I decided not only to upgrade the blogging engine (to ‘Hugo’, a lightning-fast static site generator that is very easy to install), but also to strip the blog of anything and everything that did not matter, without impacting functionality. The blog had to look exactly like it did before, work exactly like it worked before, and it had to work on both regular browsers and mobile platforms.

Mobile matters a lot these days, and I think that when a large chunk of your readers is on metered bandwidth you can do them an easy favor by making sure they don’t download more than they have to; it saves them both money and time.

This took a bit of doing, but I’m pretty happy with the end result: the ratio of data pushed to the user for a single page is 20:1 for old versus new, and the wrapper-to-content ratio is now 5:1, where before it was a whopping 100:1! This particular article is about 5,000 bytes in its original un-rendered form; the server has transferred about 13,000 bytes to your computer, fully ‘wrapped’ in HTML, CSS and so on. That’s about 3:1, which isn’t all that bad. (You can verify this yourself in Firefox by pressing Shift-Ctrl-Q and then reloading the page; the network monitor is a pretty useful tool for determining what gets sent to load a page.)
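
If you’d rather check a page’s transfer size from a script than from the developer tools, a few lines of Python will do it. This is just a minimal sketch, not anything this blog actually runs: it fetches a single URL (a placeholder here) with gzip enabled and reports the bytes that actually crossed the wire. Note that it only measures the page itself, not any sub-resources the page pulls in.

```python
import urllib.request

# Ask for the page the way a browser would, with gzip compression,
# and report how many bytes actually cross the wire.
url = "https://example.com/"  # placeholder: substitute the page to measure
req = urllib.request.Request(url, headers={"Accept-Encoding": "gzip"})
with urllib.request.urlopen(req) as resp:
    body = resp.read()  # urllib does not decompress, so this is the wire size
    print(resp.headers.get("Content-Encoding"), len(body), "bytes transferred")
```

For a one-request page like the ones described here, that single number is the whole story.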

The steps I took to get rid of the bloat are:

  • inlined the few images that are still left (see the first sketch after this list)

  • inlined the stylesheet (there is a cache penalty here, so you have to trim it down as much as possible, but the page starts rendering immediately, which is a huge gain at the cost of a little extra data transferred; all the measurement tools I’ve used seem to agree on this; see the second sketch after this list)

  • got rid of most of the CSS rules that weren’t used

  • got rid of almost all JavaScript (jQuery, various plug-ins, analytics)

  • got rid of external fonts (the slightly nicer look is not worth the extra download and delay)

  • replaced the Twitter plug-ins for ‘latest tweets’ and the ‘twitter button’ with static content

  • reduced the number of resources loaded from the server to render the page to 1 (the page itself)
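
For the image inlining, the trick is to turn each image file into a data: URI so the browser never has to make a separate request for it. Here is a rough sketch of how that can be done; the file name in the example is made up, and a real build would hook this into the site generator’s templates.

```python
import base64
import mimetypes

def inline_image(path):
    """Turn an image file into an <img> tag with a data: URI source,
    so the browser needs no extra request to display it."""
    mime, _ = mimetypes.guess_type(path)
    mime = mime or "application/octet-stream"  # fallback for unknown types
    with open(path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode("ascii")
    return f'<img src="data:{mime};base64,{encoded}">'

# Hypothetical usage:
# print(inline_image("avatar.png"))
```

Keep in mind that base64 grows the data by roughly a third, so this only pays off for small images.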
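
The stylesheet inlining is the same idea applied to CSS: instead of a <link> tag pointing at an external file, the trimmed-down rules get embedded directly in every page at build time. A minimal sketch, assuming a template that references the stylesheet with a known link tag (the tag and path here are made up):

```python
def inline_stylesheet(html, css_path):
    """Replace the external stylesheet link with an embedded <style>
    block, so the page renders without waiting for a second request."""
    with open(css_path) as f:
        css = f.read()
    # Assumes the template contains exactly this link tag; adjust as needed.
    link_tag = '<link rel="stylesheet" href="/css/site.css">'
    return html.replace(link_tag, "<style>" + css + "</style>")
```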

The end result is pretty lean-and-mean, and all of that change barely affected the look or functionality of the site; the difference to a user is really minimal. So on all pages that do not contain images (and that’s most of them) the page is one single request. That’s it. No CSS, no JavaScript, no fonts, no images loaded from the server. The pages load < 20 kilobytes from the server on average (compressed), they load in under 150 milliseconds from start to finish, and they render in less than 200 milliseconds. Clicking around on this blog should be instantaneous and should never result in having to wait for the next page to load; it should look to your eye as if you already had the page in your local cache. Embedding the stylesheet in particular was a good move: it dramatically reduced the time required to render the page, because the rendering engine no longer has to wait for an extra resource before it can fire up. The overhead of sending the CSS data again with every page that is loaded is definitely not fantastic, but pruning down the CSS reduced that overhead by a factor of 4 or so.

I’m sure I can do better still; for instance, the CSS block is still quite large (too many rules, not minified yet), but it can be quite hard to figure out which rules can be dropped and which are essential (nice idea for a browser plug-in: take the CSS loaded by a page and remove all unused rules; a rough offline approximation is sketched below). Even so, the difference between what was there before (600+K, 18 requests) and what is there now (< 20K, 1 request) is so large that any further improvements are unlikely to move the needle. Optimizing something like this is probably a bad investment of time, but it is hard to stop once you’re enjoying it, and I really liked the feeling of seeing the numbers improve and the wait time go down. This is a nice example of ‘premature optimization’, but I do hope that the users of the blog like the end result.
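
That browser plug-in doesn’t exist as far as I know, but for a static site you can get a crude approximation offline. The sketch below only understands plain .class selectors and does a naive substring match against the generated pages, so treat its output as candidates to review, not rules to delete blindly; the paths in the example are made up.

```python
import pathlib
import re

def unused_classes(css_file, html_dir):
    """Report CSS class names that never appear in any generated page.
    Crude: only handles simple .class selectors and does a substring
    match; descendant selectors, IDs and pseudo-classes would need a
    real CSS parser."""
    css = pathlib.Path(css_file).read_text()
    classes = set(re.findall(r"\.([A-Za-z][\w-]*)", css))
    html = "".join(p.read_text() for p in pathlib.Path(html_dir).rglob("*.html"))
    return {c for c in classes if c not in html}

# Hypothetical usage:
# print(unused_classes("site.css", "public/"))
```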

If you know of a blog or have one that loads faster than this one or uses tricks I’m not aware of I’d like to hear about it!