The Fastest Blog in the World (2015) (jacquesmattheij.com)
229 points by nodivbyzero on Feb 13, 2017 | hide | past | favorite | 124 comments


See also http://motherfuckingwebsite.com/ and http://bettermotherfuckingwebsite.com/

I really like the minimalism of these. I guess the interesting part of the linked article was that the author wanted to make his blog look exactly the same as it did before he made the optimisations.


https://bestmotherfucking.website is another one along those lines.


Inspired by the link, I created:

https://sknsri.github.io/advice-by-kurosawa/


lol, hating on bad style and then showing up with a bright background.

Is this a joke?!



I optimized my blog in a similar way: http://beza1e1.tuxen.de/blog_en.html

I also recently made a minimalistic news aggregator for german news: http://textnews.neocities.org



I'm actually not a fan of how it looks. It was hard to read on my phone.


Same. Grandparent's linked site looks like the work of a bandwagoner who doesn't know what he's doing.


Compared with the others it was hard to read on my laptop. I can only imagine how grim it'll be on a phone.


It's hard to tell when the irony and sarcasm gets that deep, but I think it's intentionally bad. Their changes are 1. a hipster domain, 2. monospace fonts and 3. lower contrast, which are recent design trends that I believe they are mocking and part of what the original was reacting against.



This looks fantastic to me, for some reason. How I wish most of the web looked more like this rather than being full of hero images.


Looks too old school to me :\

I think the idea here is to get a modern look with minimal code. As the first page shows (the one without styling), it's easy to create a lightweight website; it's hard to make it look like it's from 2017.


> This is a nice example of ‘premature optimization’ but I do hope that the users of the blog like the end result.

Is it really premature? You already had a blog that worked, saw a need (or challenged yourself?), and acted. Sure, this might have been a bit overboard, but I think that it was the right time for this optimization (not premature).

> Optimizing a thing like this is likely a bad investment in time but it is hard to stop doing a thing like this if you’re enjoying it and I really liked the feeling of seeing the numbers improve and the wait time go down.

You did a good job. And I like seeing those numbers go down, too!


I guess no one here reads https://blog.fefe.de/

It's the only page you can read on a GPRS connection (http) and you can literally see every packet as it's transmitted, because the page is rendered bit by bit.


  and you can literally see every packet as it's 
  transmitted, because the page is rendered bit by bit.
Beg pardon?


Presumably he's talking about incremental rendering[1], though I couldn't get it to work on Fefes Blog by manually throttling using the dev tools.

It's pure, unstyled HTML -- you could read the site by piping curl into less -- and it should easily render incrementally; I'm just not sure how well it works when you use gzip and TLS (and I couldn't avoid TLS when I just tried).

[1] https://en.wikipedia.org/wiki/Incremental_rendering
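A toy sketch of what incremental rendering looks like from the server side (hypothetical code, not how Fefes Blog is actually implemented): the page is emitted and flushed chunk by chunk, so a browser can start painting before the response is complete.

```python
def render_incrementally(posts):
    """Yield HTML piece by piece; a server that flushes each chunk
    (e.g. via chunked transfer encoding) lets the browser start
    painting before the last post has even been generated."""
    yield "<!doctype html><title>blog</title><body>\n"
    for title, body in posts:
        yield f"<h2>{title}</h2>\n<p>{body}</p>\n"
    yield "</body>\n"

chunks = list(render_incrementally([("Hello", "First post"), ("Again", "Second")]))
page = "".join(chunks)
```

With plain HTTP and no compression, each yielded chunk can hit the wire as its own packet; gzip and TLS both buffer, which is presumably why throttled dev tools didn't show it.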


Is there a way to slow it down on fast connections, or a video of this? I have an academic interest and am curious how they did this, but I don't have an easy way to demonstrate it to myself, as I don't currently have a connection slow enough that it doesn't render instantaneously.


Both Firefox[1] and Chrome[2] (and, presumably, Safari and Edge?) have a network throttle in their dev tools. Apart from that, you can use OS level throttling, e.g. Network Link Conditioner[3] for OS X. Couldn't see any incremental rendering on Fefes Blog though.

[1] https://blog.nightly.mozilla.org/2016/11/07/simulate-slow-co...

[2] https://developers.google.com/web/tools/chrome-devtools/netw...

[3] http://nshipster.com/network-link-conditioner/ (We use this to throttle Websocket transmissions, which, last I checked, the browser devmode throttles don't apply to.)


You can throttle your connection in Chrome developer tools.


Even offers CSS theming: https://blog.fefe.de/?css=fefe-gut.css (compare to without!)


Nice to see you already posted Fefes Blog. Seriously, every time someone claims to have the fastest xyz, Fefe is quicker.


I think you underestimate the number of Germans on here.


This is a worthy goal for any site you build, not just a blog... that said, it's not even that hard to get crazy bloated CMSes like Drupal, Wordpress, etc. as fast (or faster), as long as you set up basic caching (e.g. Nginx, Varnish, CloudFlare, etc.).

I have a basic-ish theme, and I use Drupal's AdvAgg module to aggregate and minify JS and CSS, as well as a few other tricks to get page loads smaller. Finally, I use Nginx's dead-simple proxy cache to make most page loads take < 600 ms (and faster, if you're near NYC, where my DO Droplet is located).
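The "dead-simple proxy cache" idea can be sketched roughly like this (a minimal illustration, not the commenter's actual config; paths, the zone name, and the backend port are made up):

```nginx
# Hypothetical minimal Nginx proxy cache in front of a CMS backend.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=blog_cache:10m
                 max_size=1g inactive=60m use_temp_path=off;

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://127.0.0.1:8080;              # the CMS backend
        proxy_cache blog_cache;
        proxy_cache_valid 200 301 10m;                 # cache successful pages briefly
        proxy_cache_use_stale error timeout updating;  # serve stale if backend is slow
        add_header X-Cache-Status $upstream_cache_status;
    }
}
```

Cached pages never touch PHP or the database, which is why even heavy CMSes can respond in tens of milliseconds behind a setup like this.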

See, for example: https://www.jeffgeerling.com/blog/2017/tips-managing-drupal-... (~600 ms from STL, MO, USA).

Obviously, the more images and other elements (e.g. social embeds, analytics and other junk), the more time spent downloading the page.

But, IMO, there's no excuse for your personal blog to take more than 1s to render and fully deliver a page, with an exception if you embed videos/audio (e.g. podcasters).


It's not just downloading. Websites come with CSS (not that big of a problem) and JS (a big problem), and all of it has to be processed by the browser. I've found that websites sometimes pull in whole CSS and JS bundles (even ones over 20 kB), sometimes from other servers. Just the other day I tried a website on my aging smartphone and it sometimes took over 20 seconds just to type a single character in the search box.

The average website on teh internets, in my opinion/experience, is bloated to hell. And I'm not talking about functionality here.


> Finally, I use Nginx's dead-simple proxy cache to make most page loads take < 600 ms

600 ms is appalling. Most sites should be able to get under 50 ms easily (ignoring internet latency). A blog should be less than 25-30 ms, even if it has to hit the database.


I think you're confusing total rendering time, as being discussed here, with server response time. I'm getting 86ms response time from the mentioned blog (from London), and 900ms total rendering time.


I did mention latency, which you can't do much about. For me it's around 100ms from response received to rendering, most of which seems to be google analytics. I don't know where the server is, but I'm in Australia so it's probably crossing the pacific. See my reply to the other person for why I'm blaming analytics.


Even http://bettermotherfuckingwebsite.com/ is taking 300 ms on my 2012 MBP, and 190 ms with every extension disabled on wifi. I am behind a corporate firewall, so that might be adding some time.


It's taking 350ms for me, but most of that is high ping (250ms) and it looks like most of the rest is analytics. It takes 30ms to render when downloaded (which skips the analytics). Take out the script entirely and it's down to 10ms.


Apposite: I'm working on a personal WP theme currently. I've got HTML, CSS and JS down to under 20k so far for a reasonably presentable responsive front page, less on the wire gzipped. CSS and JS are minified. I'm experimenting with inlining "critical" CSS, i.e. "above the fold" content. The JS is for parallel loading of CSS to prevent delays in rendering, so content displays before styles are applied (I may reverse this decision). This means the critical CSS has to focus on layout, font sizes, etc., to minimise that awful reshuffling that happens when styling is applied.

It's turning out to be quite a fun challenge, reminiscent of early web days when I'd muck about trying to shave 2k off a JPEG.

Hopefully notions such as "critical" CSS and inlining to reduce requests might encourage the re-emergence of lean yet visually pleasant websites.


I don't understand why 20k is such a challenge for web developers. I don't think I've ever gone even near 20K for HTML, CSS and JS combined. My current blog, which is the most bloated one I've run yet, is still only around 17K.

I do roll all my own code (never touched jQuery et al), mostly because I can and I find it weirdly enjoyable. But threads like these do make me wonder if all these nice tools we have these days are really just a crutch for bad programming practices. I mean I know tech jumps exponentially with each iteration but I swear the performance gained by newer technology is just eaten up straight away by developers writing heavier code to accomplish the same things as before.

I know this all sounds hyper-cynical and I do like my compositing display manager with wobbly windows et al so I'm hardly in a position to moan about bloat. But I guess my rant is about how poor web markup is for expressing modern documents (let alone full blown web applications). Well that, and I just needed to rant because I'm stuck in a hospital waiting room at 1 in the morning...


Does anyone remember "Volkov Commander"?

An old DOS file manager [1], written in assembly. Super fast and responsive, tiny size -- a fraction of Norton Commander &co's, never mind any modern tools.

Whenever I have to deal with the insanely inconvenient cluster fuck that is modern file managers (OSX Finder!), I probably feel the way you do about "modern web development".

Hope your hospital visit goes well.

[1] https://en.wikipedia.org/wiki/Volkov_Commander


I'm with you. I have the same feeling about the web, desktop and smartphones: each iteration of software does the same or less than the previous one, while eating up any resource surplus created by advancements in hardware and optimizations of the software stack.

I hope your visit went well.


Agreed. I'm building this theme because a search for "lightweight" WP themes didn't turn up anything usable, aesthetically tolerable, and non-bloated.

Hope the hospital visit turned out okay.


CMSes are designed to be behind a caching layer like Varnish. A CMS takes in user input and renders it to HTML, which should only change in-page on AJAX requests. I really wish there were more public configs for Varnish; for some things like MediaWiki it's impossible to find well-written and documented configs, and creating one yourself can have a lot of pitfalls. There are four versions of Varnish, which also makes it a PITA.
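As a rough illustration of the caching-layer idea (a hypothetical minimal VCL 4.0 sketch, not one of the missing well-documented MediaWiki configs; the backend address and paths are made up):

```vcl
vcl 4.0;

backend default {
    .host = "127.0.0.1";
    .port = "8080";
}

sub vcl_recv {
    # Strip cookies for anonymous page views so Varnish can cache them;
    # logged-in paths (illustrative) bypass this.
    if (req.url !~ "^/(admin|login)") {
        unset req.http.Cookie;
    }
}

sub vcl_backend_response {
    # Cache rendered pages briefly; the CMS regenerates them on expiry.
    set beresp.ttl = 10m;
}
```

The pitfalls the comment mentions mostly live in exactly these two hooks: deciding which cookies and URLs are safe to cache is where hand-rolled configs go wrong.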


> CMS's are designed [...]

The popular CMSes mentioned here (WP, Drupal) are not designed. They were mostly hacked together and became immensely popular before any thorough software design took place. They come from a time when software design was not much applied to "web scripts".

> [...] to be behind a caching layer like Varnish.

Varnish 1.0 is from 2006. WordPress is from 2003 and Drupal from 2000. These popular CMSes were certainly not made with proxy-caching in mind.

You can properly architect and write "your own" CMS in a compiled language (like Go, Rust, C++, OCaml or Haskell) and have <10ms response times (including the db queries). It will just lack the features, plugins/modules, and market pull that WP/Drupal have.


>The popular CMSes mentioned here (WP, Drupal) are not designed. They were mostly hacked together and became immensely popular before any thorough software design took place. They come from a time when software design was not much applied to "web scripts".

Hacked together is still a design. It may not be a good design, but regardless they are made to be behind a caching layer. Also, Drupal 8 is pretty well designed and thought out.

>Varnish 1.0 is from 2006. WordPress is from 2003 and Drupal from 2000. These popular CMSes were certainly not made with proxy-caching in mind.

Software changes over time, and regardless of timelines, the nature of CMSes is to generate output that stays the same, which is good for caching layers.

>You can properly architect and write "your own" CMS in a compiled language (like Go, Rust, C++, OCaml or Haskell) and have <10ms response times (including the db queries). It will just lack the features, plugins/modules, and market pull that WP/Drupal have.

This highly, highly depends on what you're building. Plugins and modules are not the reason for slowdown in CMSes; the reason is that the data is abstracted into understandable constructs that can take in all forms of data. This requires lots of queries and code to standardize things. In many cases, you're not going to build something better that is as functional or as friendly to future developers.


> Hacked together is still a design.

That's degrading the definition of design. WP was not even hacked-together/designed to be a full-fledged CMS in the first place!

> Drupal 8 is pretty well designed and thought out.

It is certainly more designed than the early versions of popular CMSes. But one thing for sure, Drupal 8 is still not designed to be fast.

>the reason is that the data is abstracted into understandable constructs that can take in all forms of data.

Not all languages suffer from slowness by abstractions. Rust, OCaml and Haskell come to mind. These languages are "designed". Again, I'd argue that PHP is not designed but hacked together without much thought for design -- find some Rasmus quotes and you'll know what I mean.


>That's degrading the definition of design.

It's not. Bad designs are still designs; design is not a word about quality, it is a word about intentions and the implementation of them.

>WP was not even hacked-together/designed to be a full-fledged CMS in the first place!

Original intentions have no bearing on how the software is now. WP definitely does have issues, but it is designed to be a full-fledged CMS now.

>Not all languages suffer from slowness by abstractions.

In this case they do, because you have to get your data from a database; we're not even talking about programming languages, but about constructs of data and how they're sorted. Drupal 8 has a very interesting methodology for categorizing data, well done enough that the Views module can take it and build most data-centric sites. You can build a pure catalog like Digi-Key with normal Drupal 8, no plugins or modifications, just setting up the control panel and a theme.

>But one thing for sure, Drupal 8 is still not designed to be fast.

It is, actually. But being 'fast' comes second to its ability to work with data and make it actually configurable: you define your own editing workflow and how that gets represented on your editors' dashboard, connected directly to how it's viewed on your website. Your CMS does not need to be that fast, since it should be behind caching layers.

>Not all languages suffer from slowness by abstractions. Rust, OCaml and Haskell

All such languages suffer from the abstraction that higher-level languages like PHP, Python or JS provide, namely that they're not typed or compiled and can work with very dynamic data.


Drupal 8 was re-architected from scratch and builds on the excellent Symfony components.


I look forward to the day when the whole text-based internet produces output like this: http://bettermotherfuckingwebsite.com/


I like the original better. The contrast on this one is too low and makes it harder for me to read.


With Wordpress you can even produce static content from your blog using the many plugins that are available.


That is something the vast majority of WP blog owners should do. In my experience it works faster and more reliably than caching does.

Because users like the friendliness of WP editing, but I don't like the bugs, security misery, etc., I tend to export WP sites to HTML and build the parts that must be dynamic with some web services and React.


Read Maciej's http://idlewords.com/talks/website_obesity.htm

BTW: What causes this 11 points + 4 comments thing to hit the front page of HN? Just wondering, as I believed it was the number of comments + popularity that pushed the link to the front.



Oh, that's nice. I didn't know the whole of HN is open source.


Everything except the mechanisms to detect voting rings and such.


Actually, I think the code that's on GitHub is quite old. The repo hasn't been updated in 2 years, and I know they've made changes to functionality beyond just the admin/detection features you describe above. I don't know how different the code is now, but I'm pretty sure what's publicly available is not up to date.


Also the rate at which it accumulates points. For example, if a submission was submitted an hour ago, and in the last 5 minutes it got 4 more points, it will probably hit the front page.


In the past comments actually counted against submissions, in order to avoid flame wars. It seemed actually impossible for a submission with more comments than points to hit the front page, no matter how many points it had. Conspiracy theorists claimed this was a common way to bury a submission. This has been relaxed somewhat, but I still wouldn't expect comments to help a submission.


I think the reputation of whoever upvoted it also matters, even though it isn't publicly stated. If a few people with a lot of upvotes got to it, it could hit the front page fast.


Age is also a factor, favoring younger submissions.
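For reference, the simplified ranking formula from the old published Arc source combines points with an age penalty; the production algorithm reportedly adds penalties and moderation factors that are not public. A sketch under that assumption:

```python
def rank_score(points: int, age_hours: float, gravity: float = 1.8) -> float:
    """Simplified HN ranking from the old published Arc source:
    score decays with age, so newer submissions rank higher."""
    return (points - 1) / (age_hours + 2) ** gravity

# A young submission with few points can outrank an older, higher-scoring one:
young = rank_score(points=12, age_hours=1)   # 11 / 3^1.8
old = rank_score(points=120, age_hours=24)   # 119 / 26^1.8
```

The exponent (gravity) being above 1 is what makes the rate of point accumulation matter more than the absolute total, consistent with the observations above.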


I completely agree with the basic sentiment of this article. Far too many sites lead with massive images and huge javascript frameworks just to serve what could be a few kilobytes of text.

That said, I did not go as far as this author when designing my personal blog[1]; I considered the following tradeoffs worth the slight cost:

* I didn't inline images or CSS. I can see the appeal but I don't believe it is really worth it unless you have relatively small amounts of CSS. In theory HTTP2 is supposed to help here as well, and on really slow connections inlining can slow things down as the browser is forced to download the inlined stuff instead of progressively displaying the page as it can.

* I ended up deciding that the custom font I wanted to use was worth the cost. I thought hard about it though and would perhaps decide against it if I was designing the site again.

* You can drive yourself insane trying to minimize traffic for images. Should you try to serve 2x images for retina displays? Small images for mobile devices? In the end I just serve the same images for everybody and minimize the use of images overall. It works for me because I don't have a lot of need for splashy pictures.

* I avoided any type of social media button or plugin, they tend to make additional requests back to the mothership. Very few people actually liked or +1ed anything on my old blog anyway, but people with better blogs might find the trade-off worth it.

[1] https://sheep.horse


I don't inline CSS or images either, but I did give them a long expiry time (a year). The first hit to my blog [1] might not be that fast, but subsequent hits should be (it's mostly text anyway). The CSS file has a unique name, and when I change it (which doesn't happen that often; the last time was May 2015) it gets a new filename. Also, I serve no Javascript.

I do have one external bit---a block pointing to my Amazon affiliate account (which might have Javascript, I don't know, probably does). It's disabled for mobile devices (via CSS---I use CSS to change the layout to make it more mobile friendly).

[1] http://boston.conman.org/
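The rename-on-change scheme described above is commonly automated with a content-hash suffix; a hypothetical sketch (`hashed_filename` is an invented helper, not this blog's actual tooling):

```python
import hashlib

def hashed_filename(name: str, content: bytes, length: int = 8) -> str:
    """Derive a cache-busting filename from the file's content, so a
    year-long expiry header is safe: any change yields a new URL."""
    digest = hashlib.sha256(content).hexdigest()[:length]
    stem, _, ext = name.rpartition(".")
    return f"{stem}.{digest}.{ext}"

css = b"body { max-width: 40em; margin: auto; }"
print(hashed_filename("style.css", css))  # e.g. style.<8 hex chars>.css
```

Returning visitors then never re-download an unchanged stylesheet, while an edited one is fetched immediately under its new name.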


I recently did an optimization pass on my own blog[0], and came to the same conclusions.

I thought a lot about images, since I have a lot of game screenshots. I decided that since my redesigned homepage will only show summaries of posts with the first image, my original rule of images no bigger than 80k was good enough. It makes the homepage about 500k on an uncached first hit, but I figure that's more useful than 500k of CSS and JS.

Not only did I decide that my custom fonts were worth it, I started serving them in the smaller WOFF2 format, instead of (original) WOFF. Once I looked at the browser support, it was a no-brainer.[1]

I'm also kind of stingy about browser requests. I've gotten it to no more than 10 (homepage; individual articles are less). If a resource is fairly small, it might not be worth the wait for the browser to open a connection and download it. All my fonts are base64 inlined into my CSS. Sure, it makes the CSS bigger, but the extra request is gone, and after gzip compression, it's almost the same size. I used Font Squirrel to eliminate unused glyphs from the fonts I use.[2]
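The base64 inlining described above can be sketched like this (a hypothetical illustration; `inline_font_css` is an invented helper, and the byte string stands in for real WOFF2 data):

```python
import base64

def inline_font_css(font_bytes: bytes, family: str = "MyFont") -> str:
    """Embed a font directly in CSS as a data URI, trading a larger
    stylesheet for one fewer browser request."""
    b64 = base64.b64encode(font_bytes).decode("ascii")
    return (
        f"@font-face {{ font-family: '{family}'; "
        f"src: url(data:font/woff2;base64,{b64}) format('woff2'); }}"
    )

font = bytes(range(256)) * 40  # stand-in for real font data
css = inline_font_css(font)
```

Base64 inflates the payload by roughly a third, which is why the comment's point about gzip on the CSS matters: without it you pay that overhead on the wire.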

One thing I noticed about your CSS is that it has enormous textual redundancy, but it's not gzip compressed. You can get an easy speedup by enabling gzip there.

[0] https://theandrewbailey.com/

[1] http://caniuse.com/#search=WOFF2

[2] https://www.fontsquirrel.com/tools/webfont-generator


The joy of making a site like this isn't just the speed, it's that you can continually remove and simplify things without making the experience worse. I suppose that's the definition of minimalism?

It means that when you _do_ add an image, or some JavaScript, you're doing it because it demonstrably adds something of considered value, not just because it's easy.


As wonderful as it is that we have complex tools available for complex use cases: let's keep simple things simple. For example, for something as simple as a mobile dictionary web app, you don't even need a framework. Just look at this web app; it loads faster than HN, practically instantly, and it still looks sleek: http://m.dict.cc/

Take a look at the JavaScript, it is so beautifully anti-best-practices! No framework, global namespace pollution, whatever: it just works!


The one major problem that contributes to web page bloat: Efficient Developer Systems.

These are solutions that do not belong in the front end of web development. Something else must work instead.


Great one! Simple, good-looking and fast.


Speed and readability are among the main reasons why I'm still sticking to RSS when it comes to reading blogs. It avoids most bloat issues, and I don't really care about anyone's favorite colors and web fonts.


The "fastest blog in the world" mentioned: http://prog21.dadgum.com/

Both sites score 100 / 100 on desktop & mobile on Google PageSpeed Insights.


Basically, a blog is really fast if you don't put anything but text in it. Lesson learned!


> A blog is really fast if you don't put anything but text in it basically

Here's a dummy test page I made a while ago to see if I could create a fairly lengthy, fast-loading text page for slow mobile connections. It's hosted on a cheap shared hosting plan, so it may well fall over (or not!)

Version A (no font loading): http://interfacesketch.com/test/energy-book-synopsis-a.html

Version B (loads custom fonts - an extra 40kb approx): http://interfacesketch.com/test/energy-book-synopsis-b.html

The image at the top of the page hasn't been optimized (about 40kb), however I do think aesthetics are important in page design and I'm against reverting to a plain HTML look with no CSS styling. The test pages above are plain looking but, I hope, reasonably pleasant to look at. (The custom font version looks nicer in my view than the no font loading version, but of course it adds a bit of extra page weight).


Text, and some basic CSS to create a nice, readable style. What also helps is developing the CSS/HTML so that the reader mode in browsers can be triggered and used to read your content - now users can essentially make the experience fit their own requirements using a built in browser feature.


No, the actual lesson is you hardly or rarely need to put anything but text in it.

And it seems it wasn't learned...


He should revise his Apache configuration, because there's definitely something wrong there. The first request is taking twice as long as the second one:

    bayesian-goat:CreditScoreIcons heyoo$ httping http://jacquesmattheij.com/the-fastest-blog-in-the-world
    PING jacquesmattheij.com:80 (/the-fastest-blog-in-the-world):
    connected to 62.129.133.242:80 (329 bytes), seq=0 time=1023.28 ms 
    connected to 62.129.133.242:80 (329 bytes), seq=1 time=554.06 ms 
    connected to 62.129.133.242:80 (329 bytes), seq=2 time=555.50 ms 
    ^CGot signal 2
    --- http://jacquesmattheij.com/the-fastest-blog-in-the-world ping statistics ---
    3 connects, 3 ok, 0.00% failed, time 5052ms
    round-trip min/avg/max = 554.1/710.9/1023.3 ms
    bayesian-goat:CreditScoreIcons heyoo$ httpstat http://jacquesmattheij.com/the-fastest-blog-in-the-world
    Connected to 62.129.133.242:80 from 192.168.1.1:53437
    
      DNS Lookup   TCP Connection   Server Processing   Content Transfer
    [    521ms   |      344ms     |       278ms       |       559ms      ]
                 |                |                   |                  |
        namelookup:521ms          |                   |                  |
                            connect:865ms             |                  |
                                          starttransfer:1143ms           |
                                                                     total:1702ms

Edit: Noticed interesting things about bettermotherfuckingwebsite.com (Amazon S3, Content-Length: 1943) and motherfuckingwebsite.com (nginx/1.10.3, Content-Length: 5108): the Content Transfer part on those two only takes 1ms! Meanwhile dadgum.com has Content-Length: 9344 and its transfer takes 162ms. Anyone got ideas why the massive difference?


One thing worth considering here is caching. For elements common to a whole site (like web fonts and stylesheets), the initial download might be big, but subsequent downloads won't need to happen, since the browser already has a copy. It still ain't an excuse to load dozens of WOFFs, but it's enough to make the hit a lot less severe for those who've already visited your site.

GZIP helps considerably here, too.


>Bloat to me exemplifies the wastefulness of our nature, consuming more than we should of the resources that are available to us.

That's the money quote.


Hacker: How dare you make me enable JS to view your website

Hacker: It's simple: clone the repo, install gcc, then dependencies, open command prompt, compile, now you can do the same thing as Microsoft Word, well kind of.

[This comment is 99% joke; still might be useful to look at the community aesthetic from an outside perspective]


There's a trick to make it even faster - serve from a CDN closer to the testing server :-)

https://tools.pingdom.com/#!/sNNVG/https://josharcher.uk/cod...

vs

https://tools.pingdom.com/#!/dSNHcL/http://jacquesmattheij.c...

I enjoyed this post the last time it came up[0] and learnt a few tips from it. Particularly interesting was the difference making CSS inline made, even for reasonably large amounts of CSS.

[0] https://news.ycombinator.com/item?id=9995529


Well you include the "Follow" Twitter button twice. Once in the side bar and once at the end of the blog. You can also PNGCrush your favicon.png to save 58 bytes. I'm sure I could spot a few more minor savings if I looked, not including minification and cleaning up the CSS, since those were already mentioned.

I highly recommend the advice of Heydon Pickering [0]. The best optimizations can be made by not writing code.

If I disabled the custom font I'm using (87.7KB) the home page of my "blog" [1] comes in at ~1463 bytes. 802 bytes of which is the CSS, leaving under 1kb of HTML per post once the CSS hits cache. It would have an average load time of ~45ms.

[0] https://vimeo.com/190834530

[1] nadyanay.me


My personal favorite is Jekyll Amplify - https://github.com/ageitgey/amplify

A Jekyll HTML theme that looks like the style of Medium.com and uses Google AMP.


> Imagine an envelope for a letter that weighed a couple of pounds for a 1 gram letter!

Sounds like the licenses we receive from Cisco. They are literally an A5 sheet of (thin) paper packaged, 3 boxes deep, in something easily the size of a shoe box.


Removing images and live tweet feeds makes it a different site without improving speed as they are async. Removing embedded fonts, analytics and non-rendering js for extra functionality also does not improve load time if done asynchronously as it should. The same fastest'ness could have been achieved without any of the sacrifices. Bloat is a different issue being conflated with speed.

Personally I prefer loading the core stuff instantly while still allowing for a rich site that progressively loads micro-libs and media.
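The asynchronous loading described above is commonly done with markup along these lines (a hypothetical sketch; the file names and the print-media trick for non-blocking stylesheets are illustrative, not this commenter's actual setup):

```html
<!-- Script downloads in parallel and runs without blocking parsing. -->
<script async src="/js/analytics.js"></script>

<!-- Warm up the font host's connection early (hypothetical host). -->
<link rel="preconnect" href="https://fonts.example.com">

<!-- Common trick: load non-critical CSS without blocking first paint. -->
<link rel="stylesheet" href="/css/fonts.css" media="print" onload="this.media='all'">
```

Done this way, analytics, fonts and extra styling arrive after the core content has rendered, which is the point being made: the sacrifices and the speed are largely independent.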


Yes, agreed. Unfortunately, there was push back against the Flash of Unstyled Content, so now many websites delay loading any content at all(!) until the correct fonts are loaded. smh


You're right. I think the tide may be turning though (as evidenced by the recent flurry of posts like this one). I've personally opted for fast load plus a little jitter for myself; I think it's a better overall experience: http://www.thinkloop.com/article/state-driven-routing-react-...


You don't even need CSS; this blog loads fine even over GPRS: http://danluu.com/


...and it gets millions of hits a month, and is a regular on HN.

Proving that all that bloat and styling (on other sites) is a complete waste of time if there's no value in your content.

Dan focuses on just the content, and it's worked, very very well.


20K HTML+CSS, 130 KB page size including 1 photo. No special hard-core optimisation, apart from using my natUIve WordPress theme on top of HTML5 Blank, with lots of features. http://rado.bg/2017/01/my-kore-eda-list/ It's possible and we don't need the bloat. Happy to see optimisation around.


While inlining images and CSS might speed up the first page of your blog people hit, won't later pages be able to use these things from the cache?


Not necessarily. We visit tons of sites and a small blog has slim chances of getting its assets into the cache in the first place (if browsers put every asset we browse in the cache it could even reach 1GB per day or so eaten by the cache...).

And most blog visitors are traffic by some random success post linked from some popular site or one-off search traffic.

So, they won't stick around for cache to matter anyway -- better give them a nicer first experience in the off chance that they do stick because of that.


Fast then fast is better than slow then fast.


Once gzipped, the CSS is tiny anyway.


HTTP/2 and SPDY make inlining CSS irrelevant.


Also, it looks from this https://www.webpagetest.org/result/170213_3A_1BNS/1/details/... like you are actually sending the favicon file as well (which I like, but it makes it 2 requests and not 1).


Interesting post. I tend to agree with Jacques. The value-add of many sites these days is disproportionately low compared to their size.

My site is pretty lite :), though not in the ball park of the OP. Used to be even lighter, need to trim it down again some.

https://vasudevram.github.io/



It will be faster if you serve the request from somewhere closer to the user. You could push the site closer using something like Google's edge network or other providers; https://peering.google.com shows how Google does it.


https://upload.jeaye.com/tmp/blog-performance.png

He loads 4KB in 75ms, which is about 53KB/s. I load 47.82KB in 210ms, which is 227KB/s. Technically, I'm loading 4 times faster.


Apples and oranges. If you reduce your file size to 4KB you'll be comparing apples with apples. The setup time of the connection counts dis-proportionally high for small transfers and the goal wasn't a high transfer rate but a short time to load. For a high transfer rate you should make your pages as large as you can, they will take a long time to transfer but the rate will be close to the rate of the slowest link in the chain.

This is also why you can't test available bandwidth reliably with a short transfer.
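The point about setup time dominating small transfers can be made concrete with a toy model (the 70 ms setup and 1000 KB/s link below are illustrative numbers, not measurements from the thread):

```python
def total_time_ms(size_kb: float, setup_ms: float, bandwidth_kb_s: float) -> float:
    """Naive model: fixed connection setup plus payload transfer time."""
    return setup_ms + size_kb / bandwidth_kb_s * 1000

def effective_rate(size_kb: float, setup_ms: float, bandwidth_kb_s: float) -> float:
    """Apparent throughput when the fixed setup cost is (wrongly) amortized
    over the payload, as in the apples-and-oranges comparison above."""
    return size_kb / total_time_ms(size_kb, setup_ms, bandwidth_kb_s) * 1000

# On the same hypothetical link, a 4 KB page "measures" far below the link
# rate, while a 1 MB page measures close to it.
small = effective_rate(4, 70, 1000)      # ≈ 54 KB/s
large = effective_rate(1000, 70, 1000)   # ≈ 935 KB/s
```

The fixed term dwarfs the transfer term for small pages, which is exactly why a short transfer can't reliably measure available bandwidth.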


I don't understand this holy crusade against bloat. Yes, many if not most of the websites I visit are bloated, but I think reasonable steps are the right answer (like an automatic, idiot-proof system that serves 4K pictures to 4K displays and HD pictures to smartphones). Shaving nanoseconds by manually inlining everything is not scalable. Sometimes I think about starting a blog, but it should be a small hobby and as painless as possible. Ideally I would like an automated system that handles:

  - a custom font (I like fonts)
  - being fast on wifi
  - being fast on LTE
  - being reasonably fast on slower networks (maybe don't load the font, etc.)
Edit: trying not to be too provocative :)


>I don't understand this holy crusade against bloat. Yes, many if not most of the websites I visit are bloated, but I think reasonable steps are the right answer (like an automatic, idiot-proof system that serves 4k pictures to 4k displays and HD pictures to smartphones).

What's reasonable about serving MBs of BS to people who don't care for them just to follow the latest design trends, or the latest frameworks, or whatever?


The thing is, if I'm reading your blog, I don't care about your custom font. I don't even know if I care about what your blog says yet when I click on it, so I certainly don't care about how it looks.

If I could take only the text from your web page, I would probably be all for it. Maybe after reading it I might care to see the rest of your site.

A text blog honestly has no business being drastically larger than, you know, the text I came to read. Which is why I like RSS readers.


I think it came across as more provocative than it was meant to be ;) But I don't agree :)

I like things that look nice, not fancy or over-the-top animated. A good picture paired with a nice font, maybe combined with a tasteful color scheme, and I have much more fun reading. It doesn't take that much. I don't know, maybe we're just wired differently, but for me, reading RSS on every break would be too visually repetitive ;)


That's a bit presumptuous. Not everyone is on fast internet. Even supposedly "fast" internet benefits; I've been very surprised by connection latency issues in the US.

Also it's more secure to have a static site.

Also it's simpler to manage in the long term.

win win win


Depends on your goals.

If you're a company like Facebook or Google, you're looking to eke growth out of every corner of the Earth, including those with really slow Internet.

So after you've conquered the 1st and 2nd world countries, you start optimizing for additional tiers. (And optionally launch balloons that shower Internet upon untapped markets)

However, if you're not one of those two behemoths, then your target audience is probably located within ~3,500 miles of you/your servers (which translates roughly into an RTT latency of ~100 milliseconds; add 40ms for folks on low-grade ADSL or cellular connections).
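A back-of-the-envelope check of that ~3,500 mile / ~100ms figure (the fiber speed below is an assumed round number):

```javascript
// Light in fiber travels at roughly 2/3 the speed of light in vacuum,
// i.e. about 200,000 km/s (assumption; the real figure varies by cable).
const miles = 3500;
const km = miles * 1.609;                         // ≈ 5,632 km one way
const fiberKmPerMs = 200;                         // 200,000 km/s = 200 km/ms
const propagationRttMs = (2 * km) / fiberKmPerMs; // ≈ 56 ms round trip
// Real routes are not straight lines, and routers add queueing delay on
// top of pure propagation, so ~100 ms end-to-end RTT is plausible.
```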

Your barebones site is competing with other fully-featured sites that are taking advantage of the high bandwidth delay product that's available to them.

Unless competing for eyeballs is not one of your goals.

But regardless, optimizing for reach beyond that mileage range by slicing bits here and there, rather than bolting on a CDN, is probably a premature optimization.

To throw out a new presumption, I'd say that if you measured users' rage, they're more angry about packet loss than about consistent latency.

One is predictable and can be planned for ("open 5 tabs, go do some chores, come back in 15 minutes when they're loaded").

The other is absolutely infuriating ("open 1 tab, get teased by some amount of partially loaded objects, spend the next 5 minutes refreshing due to socket timeouts, cross fingers that not too many refreshes evicts items out of the local cache, give up")


It wasn't meant to be presumptuous. I have an older smartphone that's limited to 3G (and far too often EDGE). What I wanted to say: instead of manually selecting and limiting yourself, we should invest in simple, transparent tools that help you get reasonably fast.

I would choose a static site, preferably GitHub Pages because of its simplicity and easy update process. We use elaborate compilers every day to optimize our code, so why is there no "global" optimiser for static pages that excludes a few common libraries one can assume are already cached (if you really need JS)? Maybe it's because I don't know that much about "real" web development besides some dashboard frontends for a much more complicated backend; that was more composing Bootstrap than developing, and I'm usually more interested in the backend. But these seem to be client problems. Some quick inlined JS for selecting the correct image and deciding whether to load the font should not waste much time. Or is this possible via CSS? In my mind this should not be an impossible task.
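That "quick inlined JS for selecting the correct image" could be as small as this; the breakpoints and file-name suffixes are invented for illustration, and note that `<img srcset>` and `<picture>` can do the same with no JS at all:

```javascript
// Pick an image variant from the viewport width and device pixel ratio.
// Thresholds and suffixes are made-up values, not any standard.
function pickImage(base, viewportWidth, dpr) {
  const effectivePx = viewportWidth * dpr; // physical pixels available
  if (effectivePx >= 3840) return base + '-4k.jpg';
  if (effectivePx >= 1280) return base + '-hd.jpg';
  return base + '-sd.jpg';
}

// In a browser you would call it roughly like:
//   img.src = pickImage('hero', window.innerWidth, window.devicePixelRatio);
```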

I have always wondered why some websites display nothing until the font has finished loading.


read this:

The web sucks if you have a slow connection (danluu.com) https://news.ycombinator.com/item?id=13601451


I edited my original comment because it didn't come out right. I had already read the article, but for me the solution would be a dynamic system that only serves what's possible. For simple static sites this should not be too complicated.

Having to use webpages like it's the 90s again doesn't seem like a good solution, or at least I don't like it (of course the opposite is also not exactly successful; AMP is being forced upon everybody for a reason).


> Having to use webpages like we are in the 90s again doesn't seem like a good solution

You underestimate the tools you have now. Mobiles are mostly on Android 4.4+, which gives us fairly advanced CSS support, for example. There is a plethora of old-school JS that can easily be replaced by simple CSS rules; a trivial example: resizing images to the viewport - just use vw and vh as units, done.

I remember the word DHTML too well to know how easy it is to misuse JS. Always have a working, HTML-only (and maybe CSS) solution first, especially if your content is text. Only after that add the JS and do whatever you want with it: make it fancy, part-reloading, whatever. But first make sure your content loads, because that is the main reason for the site. (Again, not talking about webapps. Loading Gmail is an idiotic idea on a GSM connection; just use IMAP and an offline client. If that worked during the 80s and 90s, it'll be fine.)

Apply the GPRS cap in Chrome and see what it's like in rural ENGLAND. Not India, not China; the very middle (maybe not the actual, geographical middle, but you get my point) of the UK, with terrible signal reception. (F12 -> Network -> 'No throttling' dropdown.)

https://mbasic.facebook.com is a thing for good reasons. (And, by the way, a good way to browse FB without giving them the ability to track everything via JS.)

So no, not like the 90s: build for modern browsers, but in an energy-, bandwidth- and CPU-efficient way, and don't block text with JS.

No one can afford not to be accessible from the billions of devices in the hands of people outside of "The West".


The presentation "World Wide Web, not Wealthy Western Web" describes the effects of the bloated web around the world: https://vimeo.com/194968584


I have created a skeleton project I use for new sites, built with Pure CSS and a Grunt workflow.

I run jinja2 templates through Grunt, and it:

* optimizes all images - this saves megabytes upon megabytes

* minifies CSS, HTML and JavaScript

The static files are then served via nginx.

Pure is responsive and comes in at just 3.8kb minified and gzipped.

The result is pretty close to the fastest blog in the world, and it's super simple to get started with. You can put a CDN in front of it for even better performance if you like.


>Shovelling nanoseconds by manually inlining everything is not scalable.

Internet speed isn't (even nearly) infinitely scalable either (even forgetting cost), despite the dreams or hype of some.

Not to mention people on slower lines, as others have said.

Multiply those multi-MB (for pico-content) sites by gazillions of people accessing (some of) them, and the slowdown of the Net becomes serious real fast - no matter what the gung-ho types might claim or say.


I decided to optimize mine for maintenance and mobile, and gave up on HTTP and format optimization after moving to CloudFlare, except for adding just enough JS to do Medium-like lazy image loading to save bandwidth for visitors.

But this is impressively fast nonetheless.


> except for adding just enough JS to do Medium-like lazy image loading to save bandwidth for visitors.

Please, please don't do that: it means that visitors without JavaScript enabled simply cannot view your page.

If a visitor wishes to configure his browser not to download images until he scrolls near them, that's certainly within his power. But if you break your page and only unbreak it for those with JavaScript, then your visitors have no choice.


Well, they do get a placeholder image (a blurry one, around 1-4KB in size) by default.

The percentage of folks with JS disabled seems to be around 1% of visitors to generic web sites (no hard figures here, only search hits on Quora and a few analytics sites), and is likely to be pretty much zero for mobile users, so... I'm OK with the trade off, since I'd much rather improve the experience for those who pay through their nose for mobile bandwidth.


> I'm OK with the trade off, since I'd much rather improve the experience for those who pay through their nose for mobile bandwidth.

I know that your heart is in the right place, but I believe that you're making the wrong decision. I suppose it's one thing if each image is also a link to the high-res source (although I wonder why suitably-scaled images can't be served to everyone), and if the images are irrelevant to your text. But particularly if there is no way for clients without JavaScript to see images, then I think you're breaking the web.

I think that JavaScript disabling will become more and more common as advertising and tracking becomes more and more intrusive. A client who enables JavaScript disables security, disables privacy and disables performance.


Really liking Middleman (similar to Hugo as OP mentions) for this kind of stuff recently.

Powerful enough to use asset caching / on-build minification / direct deploy to s3 / dynamic page generation from JSON, light enough to load in the blink of an eye.


Minimising the CSS/HTML would be one extra improvement, as would removing HTML comments.


No need for that, gzip will do that for you.


Does gzip remove comments?


No. But if there aren't many, they aren't long, and are all human readable (like in this article), the penalty after gzip is negligible.


I thought the goal was fastest blog in the world ;)


It does seem silly to inline all the external resources but not remove the comments.


I was musing about how to optimize it a bit further for multiple page requests.

Would it not be possible to do something like this? (pseudo)

  on page request:
    if (!user_has_visited_before) {
      set_visited_cookie();
      insert_css_as_inline_into_html();    // fast first paint, no extra request
      lazy_load_css_file_with_js();        // warms the cache for next time
    } else {
      insert_link_to_css_file_into_html(); // served from the browser cache
    }


Really, the only optimizations applied are cutting bytes, and this is the fastest...?

In the meantime, in Romania: http://imgur.com/a/UfhoD



Compare this to the crap LinkedIn pushed out about a month ago. Placeholder images linger long enough to ponder their existence and allow for noting their janky departure.


Could you please link it? I have no idea what you're talking about.


I should probably install a web server on my ROS box so the autonomous gokart can serve web pages.


This is so fast. Amazing.



