Measuring Frontend Performance (in modern browsers)

June 19, 2019

Up until a few months ago, I had no idea how to think about frontend performance, what it means, or how to measure it. Previously, I'd looked at server response times and often assumed that if those were "fast," the whole user experience would be "fast." However, there's a lot more to the performance of an application and the overall user experience. This is an overview of what I learned after a few months of trying to understand and implement frontend performance metrics.

What is frontend performance?

My simplest definition is "the time it takes for an application to become usable." For me, this is challenging to wrap my head around because "usable" can be interpreted in different ways by different people (or maybe the same person on different days).

You could argue that, when looking at something like server response time, there's a single definition of "is it usable?": the overall response time. Before the server responds, it's 0% usable. The user hasn't received the response, and they can't do anything without it. Once the server has responded, it's 100% usable for the user (or at least for the client to make it usable). With that, improving the server response time will improve the time it takes the server to "become usable."

When looking at frontend performance it becomes less straightforward. It's rarely exactly 0% or 100% usable, but rather somewhere in between throughout an application's life cycle. There are a large number of variables that can affect usability: the efficiency of the feature code itself (e.g., using concat vs. push in a large loop), network latency, server response time, server response size, JavaScript and CSS file size, browser, available resources on the client, caching, long-running tasks, loading states, etc.
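
As a contrived sketch of that code-efficiency point (the array and sizes here are hypothetical), reassigning with concat copies the accumulated array on every iteration, while push appends to the same array in place:

```ts
const items = Array.from({ length: 50_000 }, (_, i) => i);

// Slower: concat returns a brand-new array each iteration, copying everything so far.
let viaConcat: number[] = [];
for (const item of items) {
  viaConcat = viaConcat.concat(item);
}

// Faster: push appends to the existing array (amortized constant time).
const viaPush: number[] = [];
for (const item of items) {
  viaPush.push(item);
}
```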

Why care about frontend performance?

In general, better performance means a better user experience.

Unfortunately, we usually can't stop here and need to go a step further to prove that it's a worthwhile business investment. Fortunately, a better experience will usually lead to an increase in some important business metric.

This may mean users will spend more time using your product (if you care about that), or maybe it means they actually spend less time using your product because they can accomplish their task quickly. Later on, they might recommend it to a friend since it was a fast and delightful experience.

It's hard to generalize exactly what improving performance will lead to, but it will usually improve some metric the business cares about.

If you're looking for something more concrete, there are several great examples curated by Google Web Fundamentals outlining specific cases of how performance directly improved important metrics.

(Google Web Fundamentals was an invaluable resource while investigating and learning about frontend performance. Much of this content was inspired by their resources.)

What should be measured?

Even though frontend performance may not be an exact science, we need something precise to measure. Given the qualitative feedback that "the application feels slow," how do you know it's slow and not a networking issue? Or some other factor? You don't. We need to measure something, but what?

Reframed in the context of the above definition: what measurements can be made that would determine if an application is usable? Web Fundamentals does an excellent job of outlining questions that can be tied to exact measurements: Is it happening? Is it usable? Is it delightful?

Is it happening?

How does the user know if anything is happening when first navigating to your application? Something visually different appears or "paints" on the screen. There are actually three common "paint" measurements:

- First Paint (FP)
- First Contentful Paint (FCP)
- First Meaningful Paint (FMP)
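
First Paint and First Contentful Paint are exposed by the browser itself, while First Meaningful Paint is computed by tools like Lighthouse rather than reported directly. As a minimal sketch (assuming a browser that supports the PerformanceObserver API), the first two can be logged like this:

```ts
// Log First Paint and First Contentful Paint as the browser reports them.
const paintObserver = new PerformanceObserver((entryList) => {
  for (const entry of entryList.getEntries()) {
    // entry.name is "first-paint" or "first-contentful-paint";
    // entry.startTime is milliseconds since the navigation started.
    console.log(`${entry.name}: ${Math.round(entry.startTime)}ms`);
  }
});

// "buffered: true" also delivers paint entries that fired before the observer was created.
paintObserver.observe({ type: "paint", buffered: true });
```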

Another metric that can help provide a full perspective is the amount of time it takes to start receiving a response from the server, commonly known as Time to First Byte (TTFB).

From the frontend perspective, this is where you can start to control the performance experience. Before this point are the DNS lookup, request overhead, server time, network latency, etc. (things generally outside the frontend's control). It's also straightforward to calculate using the PerformanceNavigationTiming API.
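
As a rough sketch (assuming a browser that supports Navigation Timing Level 2), Time to First Byte falls out of the navigation entry's responseStart relative to the start of the navigation:

```ts
// Read the navigation timing entry for the current page load.
const [navigation] = performance.getEntriesByType(
  "navigation"
) as PerformanceNavigationTiming[];

if (navigation) {
  // responseStart marks when the first byte of the response arrived;
  // startTime is 0 for the navigation entry, so this is effectively responseStart.
  const timeToFirstByte = navigation.responseStart - navigation.startTime;
  console.log(`Time to First Byte: ${Math.round(timeToFirstByte)}ms`);
}
```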

The Time to First Byte timing is also visible in Chrome DevTools under the Network tab.