This is part 1 of a two-part series
In part 1, I look at what LCP is and why it's important, then explain how to measure LCP and the steps you can take on the back-end to improve it.
We all know that performance is a key success factor for most online businesses. Bounce rates climb sharply with every extra second of perceived load time, and that can be tied back to lost revenue. Check out this article by Jeffrey Nolte if you need more convincing.
Google's Core Web Vitals put most weight on a metric called LCP – Largest Contentful Paint. This is the time taken to render the largest content element (typically a hero image or a large block of text) visible in the viewport. Google say a good score is 2.5s or less, and that at least 75% of your website's visitors should experience an LCP within this range. So let's talk about LCP…
Where to Start
Google PageSpeed Insights is a great place to start. It shows both field data (gathered from real Chrome users) and lab data for the Core Web Vitals – LCP, FID and CLS.
Start by looking at the field data, as this gives you aggregated data showing what’s happening with your actual users (as opposed to lab data which is calculated from a one-off test and subject to fluctuations and bias). It’s a good idea to check the Origin Summary too, which gives you an average over all pages on the domain.
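If you prefer the command line, the PageSpeed Insights v5 API exposes the same field data. Here's a rough sketch using curl and jq – the endpoint is real, but the JSON paths are based on the current response shape and may change, and for anything beyond the odd request you'll want to add an API key:

# Ask the PSI v5 API for the field LCP of a page (mobile strategy)
$ curl -s -G 'https://www.googleapis.com/pagespeedonline/v5/runPagespeed' \
    --data-urlencode 'url=https://www.example.com/' \
    --data-urlencode 'strategy=mobile' \
  | jq '.loadingExperience.metrics.LARGEST_CONTENTFUL_PAINT_MS | {percentile, category}'

The percentile should be the 75th-percentile LCP in milliseconds, and the category tells you whether Chrome's field data rates it FAST, AVERAGE or SLOW.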
Take a look at LCP. If the green area to the left is less than 75% then we are below par and have some work to do!

Note that Google may not show field data if the site is new or just hasn't had enough traffic to produce reliable numbers. In this case you will need to jump straight down to the lab results. Here we are looking for an LCP of 2.5s or lower on both mobile and desktop; if it's higher then there's likely a problem. I advise running PageSpeed 2 or 3 times to make sure, as the results can fluctuate a bit from run to run.
In addition, you can use Google Analytics to identify particular pages which are slow and have significant traffic, and make decisions about which pages are important to your business (/shop probably is!). This is a nice way to decide where to start, as the issues may be different on each page. In Analytics, go to Behaviour > Site Speed > Speed Suggestions:
So your LCP needs some work? Let’s take a look at what we can do:
Back-end
Before the browser can render anything it needs to receive the main page HTML document from the server. TTFB (Time To First Byte) measures how long the browser waits from requesting the page to receiving the first byte of the response. A TTFB below 500ms is reasonable; below 100ms is excellent.
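If you want a quick reading from the command line first, curl's write-out variables give you the timing breakdown directly. A minimal sketch – replace the placeholder URL with the page you're testing:

# time_starttransfer is the time to the first byte of the response, i.e. the TTFB
$ curl -s -o /dev/null -w 'TTFB: %{time_starttransfer}s (total: %{time_total}s)\n' https://www.example.com/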
The Network tab in Chrome Dev Tools will show you the TTFB if you hover over the main page’s bar in the Waterfall. Again, I advise running this 2-3 times to make sure you didn’t get an outlying result. In this example you can see we have some work to do:

Caching
We want to serve cached pages as much as possible, which removes the back-end processing time from the equation, getting the page to the user's browser more quickly (and reducing load on the server, saving resources). Using the above example, we can click on the main page response (www.ekahau.com) to view the response headers. In this case the page is not cached, and reloading it (to rule out the cache simply not being primed on the first request) didn't make a difference:

The x-cache header is added by the CDN (we are hosting on Pantheon, which includes the Fastly CDN). We want to see at least one HIT, showing that the page was served from the cache. I then tried using curl to see if something in the request headers was causing this, and sure enough:
$ curl -I https://www.ekahau.com/ | grep x-cache:
x-cache: HIT, MISS
There's probably a cookie being sent by the browser which is causing the server to serve an uncached page. Sure enough, if I run the test in incognito mode I see that the page is served from the cache. In these cases it is important to determine who is affected by the problem: if it's only admin users logged into the WordPress dashboard then it's probably not a big deal, but if it's a large chunk of your audience you need to figure out what's going on and fix it.
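To pin down what's busting the cache, you can replay the request with curl and add the Cookie header your browser sends (copy it from the main document's Request Headers in Dev Tools). This is just a sketch – the cookie value is a placeholder for whatever your browser is actually sending:

$ curl -sI https://www.ekahau.com/ | grep -i x-cache
$ curl -sI -H 'Cookie: PASTE_YOUR_BROWSER_COOKIES_HERE' https://www.ekahau.com/ | grep -i x-cache

If the first request shows a HIT and the second a MISS, remove cookies from the header one at a time until you find the culprit.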
If none of your pages are ever served from the cache, even in incognito mode or via curl, then there's likely an issue on your server. If you're using a managed host like Pantheon, contact them to see what's going on; if you manage the server yourself, then it's on you.
Uncached Page Views
Whilst we want to serve cached pages as much as possible, some users will inevitably be served uncached pages. Caches expire and are primed on the next visit, and logged-in users or those POSTing data will also skip the cache in most cases. So the speed at which the server can build the page always matters. Remember, we want a TTFB below 500ms for it to be acceptable. If you aren't getting this:
The architecture provided by your host is the most important thing. However well optimised your site is, it won’t run well if your host isn’t up to the job. The WP Benchmark plugin can show you where you stand here.
One thing we've found with Pantheon is that their high-availability architecture has the trade-off of slow disk I/O. This means that any site with a large number of files in the codebase, or with plugins that write to the filesystem, can run slowly; WooCommerce sites often run into this problem. Whilst you may be able to optimise the code, the easiest option is to simply switch to a host that better meets your needs (is performance more important to you than availability?). A Digital Ocean server managed through Cloudways is one good option.
The Query Monitor plugin also provides some great insights into overall performance. Bear in mind that the total page generation time, and therefore the TTFB, will be greater than the total query time it reports. The plugin includes tools to dig into individual queries and is a good place to go once you've ruled out your hosting as the issue.
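A quick way to see how much the back-end is costing you is to compare the TTFB of a cached request against one that bypasses the cache. A random query string usually forces the CDN and page cache to regenerate the page, but whether it really does depends on how your CDN builds its cache key, so treat this as a rough sketch (it assumes bash for $RANDOM):

$ curl -s -o /dev/null -w 'cached TTFB: %{time_starttransfer}s\n' https://www.ekahau.com/
$ curl -s -o /dev/null -w 'uncached TTFB: %{time_starttransfer}s\n' "https://www.ekahau.com/?nocache=$RANDOM"

The gap between the two numbers is roughly the back-end processing time every uncached visitor has to sit through.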
Stress Testing
If the page needs to handle potential spikes in traffic then it's a good idea to stress test it by simulating a large number of requests concentrated over a short period. You can use Loader for this.
This step is optional in my opinion; it really depends on what you expect to happen during the lifetime of the site. If you will be running campaigns that may result in large traffic spikes then I strongly suggest you run a stress test.
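Loader runs distributed tests from the cloud, which is what you want for a realistic spike. If you just want a rough smoke test from your own machine first, ApacheBench (ab) – a different tool, not a substitute for a proper distributed test – will do:

# 500 requests, 20 at a time, against the home page
$ ab -n 500 -c 20 https://www.ekahau.com/

Watch the failed request count and the response time percentiles in the output, and keep an eye on TTFB while it runs. A single machine hammering one URL is only a crude approximation of real traffic, so use it as a sanity check, not a capacity plan.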
Read Part 2 here
In part 2 I dive into the front-end.