Improving LCP in WordPress: Part 2

This is part 2 of a two-part series (part 1 is here)

In part 1 I looked at where to start when it comes to improving LCP for WordPress sites, and at some steps you can take to improve back-end performance. Part 2 looks at the front-end.

If you’ve covered the server-side and are happy with the performance there (or at least happy that the biggest issues are not there!) then it’s time to look at the front-end.

Remember LCP (Largest Contentful Paint) is defined as the time taken to render the largest block in the viewport. We can use Chrome Dev Tools to determine what that is in different scenarios. I suggest starting with the area above the fold on your homepage (or another significant landing page, as determined by your analytics), as most people will land at the top of the page the first time they load it. But bear in mind that the largest element in the viewport could be part way down the page if the user lands there (eg by clicking an anchor link, or if they have visited the page before and left at that scroll depth).

Go to the Performance tab in Dev Tools and make a recording. Then hover over the LCP block to highlight the element which Chrome is considering in the LCP calculation.

What to do

There are a lot of options for what to do next and I won’t be able to cover everything. But here are the most common issues I have come across:

Images

If your LCP element contains an image then there are several things you can do to optimise the image’s load time (which I also recommend in general, not just for LCP!). The most important are:

  • Render images with the srcset attribute (wp_get_attachment_image will generate this for you) so the browser never has to download a larger image than it needs. It’s important to register the relevant image sizes in WordPress so the alternative sizes are generated.
  • Never load the full-size original image; always serve an appropriately sized thumbnail.
  • Compress images as much as possible, and serve them in next-gen formats (such as WebP) where supported (most browsers support them these days). We use the ShortPixel plugin to handle both of these.
  • Use vector formats for icons and logos where possible. These tend to be smaller files and scale to any size with no distortion.
  • If you load images as CSS background images (eg style="background-image: url(hero.jpg);"), the image download will not start until all the stylesheets have been fully processed, which delays the paint of the element. Take a look at this thread for an explanation and some tips on fixing the problem, such as pre-loading the image in the <head> or repeating the image in an <img> tag; there is a sketch of the srcset and preload approaches after this list. This is a particularly common problem for LCP, as we often use background images on hero elements.
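
To illustrate the srcset and preload points, here is a minimal PHP sketch. The size name ('hero'), its dimensions and the use of the featured image are assumptions for the example; adjust them to your theme.

add_action( 'after_setup_theme', function () {
    // Register an extra size so WordPress generates it on upload and includes it in srcset.
    add_image_size( 'hero', 1600, 900, true );
} );

// In the template: wp_get_attachment_image() outputs the srcset/sizes attributes for you.
// Here the attachment ID comes from the featured image, but it could come from an ACF field etc.
echo wp_get_attachment_image( get_post_thumbnail_id(), 'hero', false, array( 'loading' => 'eager' ) );

// If the hero must stay a CSS background image, preload it so the browser
// does not have to wait for the stylesheet before starting the download.
add_action( 'wp_head', function () {
    $url = wp_get_attachment_image_url( get_post_thumbnail_id(), 'hero' );
    if ( $url ) {
        echo '<link rel="preload" as="image" href="' . esc_url( $url ) . '">' . "\n";
    }
}, 1 );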

Fonts

If your LCP element contains text then it may be delayed by the time taken to load any custom fonts used by your website. Here are some things to look out for:

  • Make sure you are using font-display: swap;, which displays text in a fallback system font and then swaps in the custom font once it becomes available.
  • Also make sure your fonts are not render blocking! This will impact the load time of everything on the page. Here are some techniques you can use to load fonts asynchronously; a sketch of one approach follows this list.
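
As a rough sketch (the font file path and the 'theme-style' handle are assumptions), you can preload the font file and declare it with font-display: swap from the theme:

// Preload the font so the download starts as early as possible.
add_action( 'wp_head', function () {
    $font = get_theme_file_uri( 'fonts/brand.woff2' );
    echo '<link rel="preload" href="' . esc_url( $font ) . '" as="font" type="font/woff2" crossorigin>' . "\n";
}, 1 );

// Declare the font with font-display: swap, attached to an already-enqueued stylesheet.
add_action( 'wp_enqueue_scripts', function () {
    $css = "@font-face {
        font-family: 'Brand';
        src: url('" . get_theme_file_uri( 'fonts/brand.woff2' ) . "') format('woff2');
        font-display: swap;
    }";
    wp_add_inline_style( 'theme-style', $css );
} );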

Render Blocking Scripts

How many render-blocking scripts are loaded by the page? The optimal state is none, which would likely involve inlining Critical CSS and loading additional stylesheets in the footer or in another non-blocking way. In addition, all JavaScript files should be deferred or loaded in the footer too.
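
Here is a minimal sketch of the CSS side, assuming the critical CSS has been extracted to a critical.css file in the theme and the main stylesheet is enqueued under a hypothetical 'theme-style' handle:

// Inline the critical CSS in the <head> so the first paint does not wait on a stylesheet request.
add_action( 'wp_head', function () {
    $critical = get_theme_file_path( 'critical.css' );
    if ( file_exists( $critical ) ) {
        echo '<style id="critical-css">' . file_get_contents( $critical ) . '</style>' . "\n";
    }
}, 1 );

// Swap the main stylesheet to a non-render-blocking load; it applies once downloaded.
add_filter( 'style_loader_tag', function ( $html, $handle ) {
    if ( 'theme-style' === $handle ) {
        $html = str_replace( "media='all'", "media='print' onload=\"this.media='all'\"", $html );
    }
    return $html;
}, 10, 2 );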

With WordPress it can be tricky to completely eliminate render-blocking scripts, especially on an existing site whose plugins/themes rely on them. In these cases I like to look at which plugins are enqueuing scripts and ask whether they are really necessary; if they are, try re-enqueuing the scripts in the footer, checking that nothing breaks. You can also look for alternative plugins which do the same job in a more performant way.
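
Below is a sketch of the re-enqueuing approach, with a hypothetical 'some-plugin-script' handle standing in for whatever the Network tab or Query Monitor shows you:

add_action( 'wp_enqueue_scripts', function () {
    $handle = 'some-plugin-script'; // hypothetical handle identified via Query Monitor
    $script = wp_scripts()->query( $handle, 'registered' );
    if ( $script ) {
        wp_dequeue_script( $handle );
        wp_deregister_script( $handle );
        // Re-register the same file, this time printed in the footer (last argument = true).
        wp_enqueue_script( $handle, $script->src, $script->deps, $script->ver, true );
    }
}, 20 );

// Add a defer attribute to the chosen handle. WordPress 6.3+ can do this natively via the
// 'strategy' loading argument; this filter is a fallback for older versions.
add_filter( 'script_loader_tag', function ( $tag, $handle ) {
    if ( 'some-plugin-script' === $handle ) {
        $tag = str_replace( ' src=', ' defer src=', $tag );
    }
    return $tag;
}, 10, 2 );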

If you are building the site from scratch I strongly suggest running regular Lighthouse scans and checking which scripts are loaded, especially after installing new plugins (we run Lighthouse in our Bitbucket Pipeline and fail the build if performance drops below standard). I also recommend dequeuing jQuery as this is not necessary in many cases and ends up slowing the site down.
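
Dequeuing jQuery is a one-liner hook; the sketch below assumes nothing in your theme or active plugins actually depends on it, so test carefully.

// Remove jQuery from the front-end. wp_enqueue_scripts does not run in the admin,
// so the dashboard (which relies on jQuery) is unaffected.
add_action( 'wp_enqueue_scripts', function () {
    wp_dequeue_script( 'jquery' );
    wp_deregister_script( 'jquery' );
}, 100 );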

Design

I wanted to briefly mention design too. I encourage engineers to loop back with the site designer when they see issues that could be fixed with a UI change. For example, it may be more efficient to redesign a high-definition image grid than to optimise it for performance. The cost/benefit trade-off is always important to keep in mind.

Summary

Just keep pruning and questioning everything. What plugins are really necessary? What JavaScript dependencies do you really need to add to your bundle? What scripts are only needed on a handful of pages but not all?

Let’s all keep performance front of mind; it’s so important. What tips do you have for optimising LCP on WordPress?

Improving LCP in WordPress: Part 1

This is part 1 of a two-part series

In part 1 I look at what LCP is and why it’s important, then explain how to measure it and what steps you can take on the back-end to improve it.

We all know that performance is a key success factor for most online businesses. Bounce rates increase exponentially with every extra second of perceived load time, and this can be tied back to lost revenue. Check out this article by Jeffrey Nolte if you need more convincing.

Google’s Core Web Vitals put most weight on a metric called LCP – Largest Contentful Paint. This is the time taken to render the largest block in the viewport. Google says a good score is 2.5s or less, and that at least 75% of your website’s visitors should experience an LCP within this range. So let’s talk about LCP…

Where to Start

Google PageSpeed Insights is a great place to start. It shows field data (gathered from real Chrome users) and lab data of the Core Web Vitals – LCP, FID and CLS.

Start by looking at the field data, as this gives you aggregated data showing what’s happening with your actual users (as opposed to lab data which is calculated from a one-off test and subject to fluctuations and bias). It’s a good idea to check the Origin Summary too, which gives you an average over all pages on the domain.

Take a look at LCP. If the green area to the left is less than 75% then we are below par and have some work to do!

59%: some work to do!

Note that Google may not show field data if the site is new or just hasn’t had enough traffic to produce reliable numbers. In this case you will need to jump straight down to the lab results. Here we are looking for an LCP of 2.5s or lower on both mobile and desktop; if it’s higher, it’s likely there is a problem. I advise running PageSpeed 2 or 3 times to make sure, as the results can fluctuate a bit from run to run.

In addition, you can use Google Analytics to identify particular pages which are slow and have significant traffic, and make decisions about which pages are important to your business (/shop probably is!). This is a nice way to decide where to start, as the issues may be different on each page. In Analytics, go to Behaviour > Site Speed > Speed Suggestions:


So your LCP needs some work? Let’s take a look at what we can do:

Back-end

Before the browser can render anything it needs to receive the main page HTML document from the server. TTFB (Time To First Byte) is often used to measure how long the wait is from when the browser requests the page to when the first byte of the response is returned. A TTFB below 500ms is considered reasonable, and below 100ms excellent.

The Network tab in Chrome Dev Tools will show you the TTFB if you hover over the main page’s bar in the Waterfall. Again, I advise running this 2-3 times to make sure you didn’t get an outlying result. In this example you can see we have some work to do:

Caching

We want to serve cached pages as much as possible, which removes the back-end processing time from the equation, getting the page to the user’s browser more quickly (and reducing load on the server, saving resources). Using the above example, we can click on the main page response (www.ekahau.com) to view the response headers. In this case the page is not cached, and reloading it (to rule out the possibility that the cache simply wasn’t primed the first time) didn’t make a difference:

The x-cache header is added by the CDN (we are hosting on Pantheon, which includes the Fastly CDN). We want to see at least one HIT to show that the page was served from the cache. I then tried using curl to see if something in the browser’s request headers was causing this, and sure enough:

$ curl -I https://www.ekahau.com/ | grep x-cache:
x-cache: HIT, MISS

There’s probably a cookie being sent by the browser which is causing the server to serve an uncached page. Sure enough, if I run the test in incognito mode I see that the page is served from the cache. In these cases it is important to determine who is affected by the problem: if it is only admin users logged into the WordPress dashboard then it’s probably not a big deal, but if it’s a large chunk of your audience you need to figure out what’s going on and fix it.

If none of your pages are ever served from the cache, even in incognito mode or via curl, then there’s likely an issue on your server. If you’re using a managed host like Pantheon you should contact them to see what’s going on; if you manage the server yourself then it’s on you.

Uncached Page Views

Whilst we want to serve cached pages as much as possible, some users will inevitably be served uncached pages. Caches expire and are primed on the next visit. Logged-in users and those POSTing data will also skip the cache in most cases. So the speed at which the server can build the page always matters. Remember, we want a TTFB below 500ms for it to be considered acceptable. If you aren’t getting this:

The architecture provided by your host is the most important thing. However well optimised your site is, it won’t run well if your host isn’t up to the job. The WP Benchmark plugin can show you where you stand here.

One thing we’ve found with Pantheon is that their high-availability architecture has the trade-off of slow disk I/O speeds. This means that any site which has a large number of files in the codebase or includes plugins that write to the filesystem can run slowly; WooCommerce sites, for example, often run into this problem. Whilst you may be able to optimise the code, the easiest option is to simply switch to a host that better meets your needs (is performance more important to you than availability?). A Digital Ocean server managed through Cloudways is one good option.

The Query Monitor plugin also provides some great insights into overall performance. Bear in mind that the total page build time (and therefore the TTFB) will be greater than the total query time. The plugin includes some tools to dig into queries and is a good place to go if you’ve ruled out your hosting as an issue.
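
If you want to dig in alongside Query Monitor, one rough sketch is to enable WordPress’s built-in query logging and dump the slowest queries. This is for development only, as SAVEQUERIES adds overhead:

// In wp-config.php (development only):
// define( 'SAVEQUERIES', true );

// Log the five slowest queries at the end of each request.
add_action( 'shutdown', function () {
    global $wpdb;
    if ( empty( $wpdb->queries ) ) {
        return;
    }
    $queries = $wpdb->queries; // each entry: [ SQL, time in seconds, calling function ]
    usort( $queries, function ( $a, $b ) {
        return $b[1] <=> $a[1];
    } );
    foreach ( array_slice( $queries, 0, 5 ) as $q ) {
        error_log( sprintf( '%.4fs | %s | %s', $q[1], $q[0], $q[2] ) );
    }
} );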

Stress Testing

If the page needs to handle potential spikes in traffic then it’s a good idea to stress test it by simulating a large number of requests concentrated over a short period. You can use Loader for this.

This step is optional in my opinion; it really depends on what you expect to happen during the lifetime of the site. If you will be running campaigns that may result in large spikes then I strongly suggest you run a stress test.

Read Part 2 here

In part 2 I dive into the front-end.

“I survived Gutenberg” series – Chapter 1: Rejection

It’s been almost a year and a half since the new WordPress editor “Gutenberg” was released officially as the default WordPress editor. However, before this release, we had the option to install Gutenberg as a plugin so we could test it and start getting familiar with it.

As with any major release, it introduced some challenges for WordPress editors and developers (mainly developers), along with some bugs and usability issues.

Unfortunately, it was not well received by some developers (myself included) who preferred to continue working with the Classic Editor, so a new plugin started catching on: the Classic Editor plugin, which at the time of writing has more than 5 million active installations. Fair enough for all those websites that were already using the good old Classic Editor; the plugin became a lifesaver for them.

By the way, this is an official WordPress plugin, released by the WordPress team precisely to avoid breaking those websites’ content with the new Gutenberg functionality. They also announced:

This plugin will be fully supported and maintained until at least 2022, or as long as is necessary.

Once again, fair enough for all those sites using the Classic Editor, but what will happen after that date? What if the plugin developers decide to stop supporting it? Well, then it will be time to move away from the “good old times” and welcome the present.

Going back to “the rejection stage”, and speaking for myself, I didn’t want to switch to Gutenberg as soon as it was released. It wasn’t just fear of change (well, actually I was afraid of that huge change)… anyway, I was aware there were many latent bugs and compatibility issues at the time it was released. In my opinion, the release of Gutenberg was premature; I would have waited a bit for it to become more mature, more robust, and less buggy.

Some of the most notorious issues when it was released were its lack of accessibility support, performance issues, editor UX issues, lack of support from other plugins (like WooCommerce), etc.

In fact, Matt Mullenweg (who was leading the release process), in his presentation at WordCamp US right after the launch of Gutenberg, did acknowledge that mistakes were made during the launch, and that he had learned lessons he hopes not to repeat.

I heard (and read) many people saying that Gutenberg was a bad move and that it should never have been integrated into WordPress core as its default editor. But I think some of these feelings were also driven by personal reasons: fear of change, having to change the way we build custom themes, what would happen to custom fields, the need to learn ReactJS to be able to build custom blocks. Some people (like me) already had a well-defined workflow for building entire sites with Advanced Custom Fields and offering a great editing experience with reusable fields, repeater fields, flexible content fields, etc. “Now, with Gutenberg, we’ll have to change everything we’ve built over these last years”, we said.

I know many people were really excited about Gutenberg (I actually was), but at the same time I was worried about starting to use it on my new projects. More than excitement, the Gutenberg release caused anxiety and uncertainty for some (or most) of us. Instead of an improvement, I felt it would introduce risk to my new projects; it really felt like an alpha release rather than a production release.

There are plenty of additional reasons why many people felt this way, and that is why I called this first chapter “Rejection”.

But what’s next? Rejection is sometimes followed by a stage of acceptance, but do you think that was the case with my experience of Gutenberg? Judging by the title of this series, we could say yes; however, accepting something doesn’t always mean embracing it or adapting to it. If you want to know what happened after this rejection stage, and whether I embraced Gutenberg or just kept doing my job in my good old structured way of building websites, stay tuned for my next article: Chapter 2!

Why we use Pantheon to host and manage WordPress

We’ve been building, supporting, and maintaining WordPress sites since we were founded in 2006. Over those years we’ve explored nearly a dozen hosting providers, such as GoDaddy, MediaTemple, Linode, WP Engine, AWS, SiteGround, and Flywheel, to name a few. Given that our clients rely on us to make the best technical decisions possible for them, it’s imperative we make the right ones. Pantheon has overdelivered on our expectations as a provider and has raised the bar on what a provider can and should deliver. While Pantheon offers many benefits, I’ve listed what I feel are the most important points below.

Reliability

From support to uptime to performance, we’ve been able to rely consistently on Pantheon to keep both their platform and the sites we host with them up and running, while also being thoughtful about how our account is managed.

  • 99.9% Historical Uptime
  • Support has always been responsive and knowledgeable, and has helped resolve issues on repeated occasions.
  • Account management has always helped our team work through advanced technical issues and escalated when needed.
  • We rarely experience issues with their web portal, which enables our team to work efficiently on client products.

A Superior Product

The Pantheon dashboard and site management product is far superior to anything else we’ve seen in the market. The tools in place help Nolte work more efficiently, resulting in less time spent managing DevOps and more time focusing on how we deliver value to our clients and the users of their products.

  • Structured and flexible development, testing, ad-hoc, and production environments
  • Custom Upstreams to allow for enforced standards, one click updates, and customization
  • Ability to automate and build on top of the platform through automation and APIs
  • Application Monitoring via New Relic
  • Git and version control on all environments

Site & WordPress Performance

Site performance is of the utmost importance to us, as the majority of the products we manage have some level of business criticality inherent to them. Given that performance has a direct business impact and our mission is to help our clients grow, this needs to be top of mind.

  • Managed HTTPS
  • A Global CDN (Content Delivery Network)
  • Advanced Page Caching
  • Redis & Varnish Caching
  • SSD Underlying Infrastructure
  • Highly Performant PHP & Tuned MySQL Implementations

This benchmark has some great insights, ranking Pantheon #1 against many top hosts.

Security

Our clients rely on us to keep their products, users, and businesses secure and Pantheon helps ensure that we meet their expectations.

  • Container-based secure infrastructure
  • Best data safeguard practices
  • Automated updates, security monitoring, backups and retention
  • Denial of service and network intrusion protection
  • Authentication best practices (SAML/SSO/2FA)
  • Secure code and database access
  • Secure data centers

Learn more about Pantheon’s security practices here

Expert Support

We’ve tested the reliability of Pantheon’s support team on several occasions and they’ve continued to meet or exceed our expectations. The support for agencies extends to training on how to use the tools, group virtual trainings, and a host of other resources. The standard site support team has been awesome and provides expert-level support through agents that really get WordPress and Pantheon’s product. We’ve saved thousands in resource costs from this point alone.

Some areas where Pantheon support has overdelivered:

  • Live training for Nolte’s engineering team
  • Co-creation of open-source tools that help us build better products faster
  • Occasional engineering support on products
  • Detailed technical instruction related to product optimization

Versatility

Balancing agility with standardization is often a difficult task. In our experience, many of our previous partners were good at one focus area of hosting, such as content sites that scale or WooCommerce. These were difficult lessons, but they helped us identify the right type of partner for the types of work we do. We’ve put Pantheon to the test with all types of WordPress sites, such as:

  • Headless sites with multi-tenant front-ends
  • Highly trafficked WooCommerce / WordPress sites
  • Mobile applications powered by a WordPress JSON API

Have questions about how we’ve gotten the most out of the Pantheon platform, or looking for more info on how to build a solution on top of it? We’d love to hear from you. Leave a comment or contact us.

How to measure and monitor site performance with Google’s Core Web Vitals

On May 5th, 2020, Google announced Web Vitals, coined as “essential metrics for a healthy site”. It’s evident that performance has a direct impact on businesses, and this initiative from Google reinforces that. Since our mission is to co-create high-quality software with our clients, we definitely correlate performance with high quality.

Why does it matter?

Performance is about human experience. It’s about how people experience the web. For the vast majority of people, the web is a key touchpoint in their lives. Performance can positively or negatively impact how people feel. To illustrate why performance matters, I’ve pulled some data from the web showing both the positive and negative impact of performance.

  • 15% higher conversion on sites that loaded in 2 seconds or less
  • The BBC lost 10% of users for every additional second of load time
  • 43% of consumers expect a web page to load in 2 seconds or less
  • Pinterest saw a 15% increase in traffic and sign-ups when wait times were reduced by 40%

User Centric Performance Metrics

Given that performance is about people, we need a method to test performance based on people’s perception. The only way to truly know how your site performs for your users is to actually measure its performance as those users are loading and interacting with it. This type of measurement is known as RUM (Real User Monitoring).

Since no single metric can measure a site’s performance based on a user’s perception, there are several types of metrics:

  • Perceived load speed – how quickly a page can load and render all of the visual elements to the screen.
  • Load responsiveness – how quickly a page can load and execute any required code in order for it to respond quickly to a user.
  • Runtime responsiveness – once a page is loaded, how quickly can the page respond to user interaction.
  • Visual stability – do elements on the page shift in ways that users don’t expect and potentially interfere with their interactions?
  • Smoothness – do transitions and animations render at a consistent frame rate and flow fluidly from one state to the next?

Metrics To Measure

Based on Google’s announcement of Web Vitals, the following metrics are the set that they deem most necessary when measuring user-centric performance. Google has mentioned that these metrics will evolve over time.

LCP: Largest Contentful Paint (loading)
FID: First Input Delay (interactivity)
CLS: Cumulative Layout Shift (visual stability)

Definition & Target Speed

  • Largest Contentful Paint (LCP): measures loading performance. To provide a good user experience, LCP should occur within 2.5 seconds of when the page first starts loading.
  • First Input Delay (FID): measures interactivity. To provide a good user experience, pages should have a FID of less than 100 milliseconds.
  • Cumulative Layout Shift (CLS): measures visual stability. To provide a good user experience, pages should maintain a CLS of less than 0.1.

Field & Lab Measurement Methods

There are many instances where measuring performance from an actual user is not possible such as when a site is in testing or before moving into production. Given this fact, two methods for measurement have been established:

  • Lab – Before production, pre-release, local testing, etc
  • Field – Real User Monitoring, based on how the actual user experiences the performance

Tools for Field Measurement:

  • Page Speed Insights
  • Chrome UX Report
  • Search Console
  • Firebase Performance Monitoring

Tools for Lab Measurement:

  • Lighthouse
  • Chrome Dev Tools

Simply Measure Your Site’s Performance

The simplest way to get started with measuring your site’s performance is to use the Chrome User Experience Report in Google Data Studio. This report indexes websites across the web into a BigQuery database which is publicly available online.

Setting up your dashboard:

  1. Go to Chrome UX Dash – the connector for the Google Data Studio dashboard.
  2. Input your site’s domain in the origin URL field.
  3. Click next.
  4. Choose create report.

Chrome UX Dashboard for WeAreNolte.com

A talk I did during Nolte’s weekly Lightning talks on this topic.

Source: Think With Google; Google Developers: Why Performance Matters