Our recent rebuild of Leighton.com prompted a challenge: achieve the best Google PageSpeed Insights score possible.
We achieved 98/100 for Mobile and a perfect 100/100 for User Experience.
Well… primarily for user experience. Recent studies show mobile users to be an impatient bunch when browsing websites: close to half abandon their visit early if pages take longer than 3 seconds to load (KISSmetrics), and lead generation and eCommerce conversion rates suffer as well. This is magnified greatly for users on slower mobile network connections. Imagine this happening on a large-scale eCommerce site: conversions could be increased dramatically just by improving the speed of your site and retaining visitors for longer.
Google also uses page speed as a signal in its algorithm to rank pages in search results. A low score could hurt your ranking relative to your competitors, and also result in fewer pages on your site being indexed.
There are a number of specific best practices Google requires your pages to implement in order to achieve a high score. These are the areas we focused on to reach our 98/100 Google PageSpeed Insights score.
Server response time
Google suggests a server response time of less than 200ms for the initial HTML to reach the browser so it can begin rendering the page. Many factors affect response time; for us the best result came from balancing server resources (CPU and memory), hosting provider, and frameworks. Leighton.com is hosted on a scalable, cloud-based platform that allows us to automatically scale resources depending on demand. We also took the decision to move away from one of the big CMS platforms to a custom-built CMS, as our previous CMS was simply too resource-hungry and a huge drain on site performance. Our server response time is now between 75ms and 150ms, where previously it was 300-400ms.
Inline critical CSS in the <head>
CSS files loaded in the <head> are render-blocking assets; that is, the browser cannot continue parsing the HTML to render the page until the CSS files have been downloaded. This slows the perceived page load speed for users. It may only be a few extra milliseconds, but they all count when added up.
Our solution was to separate our SASS/CSS into two branches, one for the critical CSS such as the reset, grid, navigation, typography and so on: essentially everything needed to render the content first shown in the browser viewport when the page loads. The minified critical CSS was then added inline to the <head>.
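As a rough sketch (the rules and class names here are purely illustrative, not our actual stylesheet), the top of the page ends up looking something like this:

```html
<head>
  <meta charset="utf-8">
  <title>Example page</title>
  <!-- Minified critical CSS inlined so above-the-fold content can
       render without waiting for a stylesheet download -->
  <style>body{margin:0;font-family:sans-serif}.nav{display:flex}.grid{width:100%}</style>
</head>
```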
Asynchronously load non-critical CSS
The second SASS branch contains everything else needed to render content below the viewport. Now, we couldn't just stick this in the <head> using a <link rel="stylesheet">, as that would be a render-blocking CSS asset. Instead, we load the CSS file into the page asynchronously using Scott Jehl's loadCSS script once the page has loaded.
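Assuming the loadCSS function itself (from Scott Jehl's filamentgroup/loadCSS project) has been inlined earlier in the page, the call looks roughly like this; the file path is illustrative:

```html
<script>
  // loadCSS injects a <link> element in a way that does not block
  // rendering; the stylesheet is applied once it has downloaded.
  loadCSS("/css/main.css");
</script>
<noscript>
  <!-- Fallback so the full stylesheet still loads without JavaScript -->
  <link rel="stylesheet" href="/css/main.css">
</noscript>
```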
Use Asynchronous loading on non-critical scripts
With the exception of jQuery, all third-party scripts on our site are loaded asynchronously to avoid blocking the rendering of the page. We have even opted for the async TypeKit script and the Flash Of Unstyled Text (FOUT) that comes with it. Honestly, who really cares about the FOUT? We can certainly live with it, and I'm sure visitors to our site can too.
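The pattern itself is just the async attribute; the URL below is a placeholder for any third-party script:

```html
<!-- async lets the browser download the script in parallel with HTML
     parsing and execute it as soon as it has arrived -->
<script async src="https://example.com/third-party.js"></script>
```

Where execution order matters, the defer attribute is an alternative: it also downloads in parallel, but runs scripts in document order once parsing has finished.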
Inline and place scripts at the bottom of the page
Any other scripts on our pages are placed at the bottom of the page, directly before the closing </body> tag, to avoid render blocking. These are a combination of inlined and network-loaded scripts.
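Sketched out (the file name and snippet are illustrative), the end of each page looks something like this:

```html
  <!-- Inlined script: saves a network request for a few lines of code -->
  <script>document.documentElement.className += " js";</script>
  <!-- Network-loaded script: by this point the markup above it has
       already been parsed and rendered, so nothing is blocked -->
  <script src="/js/site.min.js"></script>
</body>
```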
Minifying HTML output
We minify and concatenate CSS and JS files religiously, yet it is surprising how many sites don't also minify their HTML. Our homepage's raw HTML was 12.3kb when GZipped, but once minified it dropped to 10.8kb. Yes, 1.5kb is a marginal gain, but it all adds up.
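To illustrate, minification simply strips the whitespace and comments the browser never needed:

```html
<!-- Before minification -->
<ul class="nav">
    <li><a href="/">Home</a></li>
    <li><a href="/about">About</a></li>
</ul>

<!-- After minification -->
<ul class="nav"><li><a href="/">Home</a></li><li><a href="/about">About</a></li></ul>
```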
Optimise images
It doesn't matter what graphics application you use to export assets; they all seem to add superfluous metadata to exported images, especially PNG and SVG files. Many low-colour PNG files can also benefit from being converted from 24-bit+alpha to 8-bit+alpha, often saving up to 60% in file size. There are many tools and automated scripts out there to help process and optimise these files, and we used a number of them over the course of our development.
Ensure server GZip compression is enabled
This should be a no-brainer and turned on by default. To give an example, the main.css file asynchronously loaded into Leighton.com is 30kb uncompressed, but when GZipped it drops down to 6kb.
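The article doesn't say which web server Leighton.com runs on, so purely as an illustrative example, on Apache with mod_deflate enabled compression can be switched on per content type:

```apacheconf
# Compress text-based responses before sending them to the browser
AddOutputFilterByType DEFLATE text/html text/css application/javascript image/svg+xml
```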
Blockers to a perfect 100/100
Big pat on the back! We tackled most of the issues flagged by Google PageSpeed Insights and achieved a 98/100 Mobile score and 96/100 for Desktop, along with a perfect 100/100 for User Experience.
But Google PageSpeed Insights is still flagging a couple of issues, both of which appear to be unachievable.
For mobile, a browser caching issue is flagged on two third-party scripts, including https://www.google-analytics.com/analytics.js, which is ironic as that one is Google's own Analytics JS (left hand / right hand…). As these are third-party scripts we have no control over their server caching settings, so we will have to live with missing out on that last 2%.
On Desktop, a number of images are noted as requiring further optimisation, and I have my doubts about how achievable the reductions suggested by Google are. Google PageSpeed Insights claims one JPEG file in particular could be reduced in file size by 87%. Bear in mind this is a 250x250px image already compressed at 60% quality down to 18kb; achieving the suggested 87% reduction would mean compressing the file to under 3kb. I don't think so…
To round things off, the exercise of tackling the Google PageSpeed Insights challenge has actually been incredibly useful. It has even changed some of our methodologies and approaches to day-to-day development tasks, as we now aim for 98/100 or higher on each and every site we build.