Optimization of Metrics for Magento 2 Site

Explaining every aspect and peculiarity of performance metrics and optimization could fill more pages than a novel. So here is a concise overview with practical tips on how to optimize a Magento 2 site.


Preamble: about the basic principles of browsers.

The handling of every file (JS in particular) consists of two stages: loading and execution (which itself consists of parsing and compilation).


Files load in parallel. However, if your server or browser does not support the HTTP/2 protocol, the number of simultaneously loading files is limited. All modern browsers (even mobile ones) already support HTTP/2, and most servers do as well. So you do not need to worry about the number of JS files or create overly large bundles.

A common belief is that the fewer the files, the less time their parsing takes. However, even though bundles (50-100KB per file) can seem useful, tests with WebPageTest demonstrated that this is not the case.

We tested our dev site on Magento 2.4.


Without bundling:

  • Total size of JS files: 3.97MB.
  • FCP/LCP — 4.088s (this is a test site, so ignore the absolute values).

With bundling (different bundle sizes were tested):

  • Total size of JS files: ~7MB.
  • FCP/LCP — ~5s.
  • Other metrics were ~30-60% worse.


(with HTTP/2 configured on the server)


The browser needs to parse and execute every JS file. While loading can happen in parallel, execution happens in the main CPU thread, one file at a time. The execution of JS files is the most “expensive” process, and it creates the main blocking of site rendering.

(not considering the loading time)

The same CPU thread is responsible not only for executing JS but also for “painting” the page.

Tips for the optimization of JS execution

What you can do for the optimization:

1) The smaller the total size of JS on the page, the less time parsing takes. Do not load massive pieces of code that are rarely used. Example: a chat button on the site pages (that irritating one in the bottom corner, which is clicked by only 5-10% of users, while everyone else has to load its script on every page). Instead, create a button that looks “the same as the chat one”, which loads the chat’s DOM and executes its JS only on click. Although the chat won’t launch instantly, with a preloader added, users will not be irritated (because they triggered the action themselves), and the wait will be no more than a second. This improves both the metrics and the user experience for 90-95% of users.

It could also be a third-party search (e.g. Algolia) or maps. Here is the article about these issues.
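The “fake button” approach above can be sketched in plain JS. This is a minimal sketch, not any particular chat vendor’s API; the script URL, selector, and callback names below are hypothetical:

```javascript
// Wraps a script-injecting loader so the third-party code is
// requested only once, on first demand.
function makeLazyLoader(injectScript) {
  let promise = null;
  return function load() {
    if (!promise) promise = injectScript(); // first click only
    return promise;
  };
}

// Browser usage (selector and URL are illustrative):
// const loadChat = makeLazyLoader(() => new Promise((resolve, reject) => {
//   const s = document.createElement('script');
//   s.src = 'https://chat.example.com/widget.js'; // hypothetical
//   s.async = true;
//   s.onload = resolve;
//   s.onerror = reject;
//   document.body.appendChild(s);
// }));
// document.querySelector('.fake-chat-button')
//   .addEventListener('click', () => loadChat().then(openChat));
```

Repeated clicks reuse the same promise, so the widget script is fetched at most once.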

2) Complex pieces of JS can be processed in a separate thread. There is a lot to say on this topic (“single-threaded JavaScript and how to deal with it”); such an article would be even longer than this one. In a nutshell: use Web Workers. Here is an article on the subject of Web Workers and asynchrony.

With Web Workers, the JS code is executed outside the thread that renders the page, which improves all the speed characteristics mentioned below.
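As a rough illustration of the idea (assuming a browser environment; heavyTask is an illustrative stand-in for real expensive work, and inlining the worker via a Blob is just one way to create it):

```javascript
// A CPU-heavy function: stand-in for real expensive work
// (here, the sum of squares 1*1 + 2*2 + ... + n*n).
function heavyTask(n) {
  let sum = 0;
  for (let i = 1; i <= n; i++) sum += i * i;
  return sum;
}

// Browser-only part: run heavyTask in a Web Worker so the main
// (rendering) thread stays free while it computes.
// const source = `onmessage = (e) => postMessage((${heavyTask})(e.data));`;
// const worker = new Worker(URL.createObjectURL(new Blob([source])));
// worker.onmessage = (e) => console.log('result:', e.data);
// worker.postMessage(100000000); // main thread keeps painting meanwhile
```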

It is not clear whether asynchrony (described in MDN Web Docs) helps during page loading; it is more useful for optimizing interactions with a user.

You can also push heavy pieces of code to the end of the queue. Every time you use setTimeout/setInterval, the code inside these callbacks moves to the end of the task queue and is executed only after all of the current synchronous JS code has finished.
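A sketch of that “push to the end of the queue” technique: splitting a long loop into chunks with setTimeout, so the browser can paint between chunks (the chunk size and workload here are illustrative):

```javascript
// Processes items in small chunks; between chunks, control returns
// to the event loop, so rendering and input handling are not blocked.
function processInChunks(items, handleItem, chunkSize, onDone) {
  let i = 0;
  function runChunk() {
    const end = Math.min(i + chunkSize, items.length);
    for (; i < end; i++) handleItem(items[i]);
    if (i < items.length) {
      setTimeout(runChunk, 0); // yield to the browser, then continue
    } else if (onDone) {
      onDone();
    }
  }
  runChunk();
}

// e.g. processInChunks(rows, renderRow, 50, () => console.log('done'));
// (rows and renderRow are hypothetical names)
```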

3) Async and defer attributes.

In short: you can delay the execution of JS until the DOM is fully parsed. It is great for scripts wrapped in domready() and for those not required until a user clicks on something (search scripts, Google Maps in pop-ups, and everything related to invisible content). Executing such scripts during page parsing is pointless, so you can safely assign them the “defer” attribute. Not to be confused with “moving JS to the page footer” (more on this later in the article).
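To see why defer pairs well with domready()-style code: a deferred script runs only after the DOM is parsed, which is roughly what a DOMContentLoaded guard achieves. A small sketch (the `doc` parameter is passed in only to keep the function testable):

```javascript
// Runs callback once the DOM is parsed. A <script defer src="...">
// gets similar timing for its whole file: download in parallel,
// execute after DOM parsing finishes.
function onDomReady(doc, callback) {
  if (doc.readyState !== 'loading') {
    callback(); // DOM is already parsed
  } else {
    doc.addEventListener('DOMContentLoaded', callback);
  }
}

// In a browser: onDomReady(document, initSearchWidget);
// (initSearchWidget is a hypothetical name)
```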

BUT! It is not so easy.

Concerning the first point. Sometimes it is impossible to remove what is supposed to be there. Nevertheless, it is important to write code with only the crucial components. As the saying goes: “Brevity is the soul of wit.” Try to include all the necessary elements while minimizing complexity.

Concerning the second point. In practice, particularly on regular sites, you are unlikely to find something that can be moved to a separate thread.

Concerning the third point. If you include the JS files “manually,” adding the necessary attributes is easy. However, with certain frameworks/CMSs, it will depend on the capabilities of the system.


In theory, you can do all of the above. However, it requires a significant amount of work, and the results might not justify it.

The main tips:

  • Try to write “good” code from the start. Plan the needed code before development so you do not repeat yourself and do not write anything unnecessary.
  • If you frequently use the same resources or functional clusters (chat, search), it is worth developing a “delayed initialization” feature (point 1). It may take some time, but once built, you can easily reuse it in your next projects.
  • Get familiar with the tooling for point 3 (async and defer attributes). Likewise, it may take some time, but once mastered, it is easy to apply to the next projects. Personally, I prefer the approach described later in the article.

Tips on CSS execution optimization

Utilize GPU

Page rendering also occurs with the help of the CPU. A browser uses the GPU only to render “composite layers.” Elements can get into a composite layer immediately or “by necessity.” The following get their own composite layer from the start:

  • 3D transforms: translate3d, translateZ, etc.
  • The <video>, <canvas>, and <iframe> elements.
  • Animation of transform and opacity via Element.animate().
  • Animation of transform and opacity via CSS Transitions and Animations.
  • position: fixed.
  • will-change.
  • filter.

Here is a visual example which demonstrates that during JS execution everything else “freezes”.

However, do not overdo it, because memory can run out. Every composite layer is stored as a pixel array: {width} × {height} × {bytes per pixel}. “Bytes per pixel” is usually 3 (RGB); with transparency, it is 4. Thus, a 100×100 square takes 30,000 bytes (with transparency, 40,000 bytes).
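The arithmetic above, as a tiny helper (the bytes-per-pixel values follow the text):

```javascript
// Approximate memory for a composite layer's pixel buffer:
// width * height * bytes per pixel (3 for RGB, 4 with alpha).
function layerBytes(width, height, hasAlpha) {
  const bytesPerPixel = hasAlpha ? 4 : 3;
  return width * height * bytesPerPixel;
}

// layerBytes(100, 100, false) → 30000 bytes
// layerBytes(100, 100, true)  → 40000 bytes
```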

A trick to reduce the size of a block is to use transform (example). It can also be used for images; in that case, the image quality will be lower (because the image is stretched), but this can be acceptable for secondary page blocks.

You can read more details in the article here.

In general: try to write compact CSS. 

Use less CSS selector nesting


Instead of:

.page-products .column.main .box-store-locator .store-locator .box-current-store-info .current-store-hours-title {}

You can make it shorter (or even shorter still):

.page-products .store-locator .box-current-store-info .current-store-hours-title {}

It is especially relevant with LESS/SASS styles, where nesting does not seem so “bulky”.

If you remove the redundant selector parts across ~100 lines, you will save ~1KB.

An even more effective method is to use the BEM methodology. In that case, a single class is enough to target an element.

Do not use images

I frequently see images (svg/jpg/png backgrounds) used for arrows and other simple icons.

No image will be smaller than a couple of CSS lines. Moreover, CSS is compressed further when you minify it.

Group selectors

Instead of:

.some-common-class { margin: 0; padding: 0; }
.menu-list { margin: 0; padding: 0; }
.some-another-list { margin: 0; padding: 0; }

Use:

.some-common-class,
.menu-list,
.some-another-list { margin: 0; padding: 0; }

Speaking of Magento, you can use the out-of-the-box LESS method “&:extend()”.


What to use for tests

Everybody knows about “PageSpeed Insights” from Google. However, there are other tools as well.

PageSpeed Insights


Pros:

  • Provides numerous specific recommendations with links (taking into account the platform used to develop the site).
  • Analyzes the mobile and desktop versions at the same time.
  • Can show data from real devices. If your page was visited more than 500 times per month from the Chrome browser, you can see the primary metrics from real users: average values over the last 28 days. (Since JS/CSS/images are cached by the browser, these statistics can be even better than lab results.)


Cons:

  • Often fails to test dev sites (progresses to 80-90% and shows “There was a problem with the request. Please try again later”).
  • Does not have a timeline.


It has multiple indicators. The timeline and the choice of device are available only to signed-in users.


You can choose a device, browser, and connection type. It has a timeline which lets you see the state of the page and the progress of component loading at any point in time.


A plugin for the Chrome browser.

“PageSpeed Insights” by Google uses Lighthouse under the hood, so you will receive the same data, only faster and without problems (avoiding analysis failures). Another great bonus is that you can test local sites.

Mini introduction

Two primary components that analytics tools measure:

  1. Rendering time of the page/content, which depends on the size/nesting of the HTML and the sizes of JS/CSS/images.
  2. Page behavior during and after loading.

Page behavior during and after loading

It is better to start with the second component because it is the simpler one. During and after the loading of a page, the content should stay in place. In metrics, this characteristic is called “Cumulative Layout Shift (CLS)”.

The “…and after” part should be emphasized: during scrolling and interactions with the page, elements should also remain in their places. Shifts during scrolling will not appear in lab analysis, but they will appear in the “real data” (field statistics) in PageSpeed Insights.

Cumulative Layout Shift (CLS)

The content should not “jump” (change its position). For instance, if a page has a banner that does not load instantly, empty space should be “reserved” for it. These “shifts” matter even during page scrolling.

You can “catch” such “shifts” by running the following code in Chrome (or Edge; it does not work in other browsers):

new PerformanceObserver((list) => {
  list.getEntries().forEach((entry) => {
    console.log(entry);
  });
}).observe({ type: 'layout-shift', buffered: true });


new PerformanceObserver((list) => {
  let count = 0;
  let sum = 0;
  list.getEntries().forEach((entry) => {
    console.log(entry);
    sum += entry.value;
    count++;
  });
  console.log('Total:', count);
  console.log('Sum:', sum);
}).observe({ type: 'layout-shift', buffered: true });

You will see the reports about shifts.

Although an individual shift “value” under 0.1 may seem unimportant, the total matters: for example, 200 shifts of 0.001 each add up to a total CLS of 0.2 (a 20% shift). The logs also indicate which blocks moved (either a block moves by itself, or a block above it suddenly grows and pushes it down).

Start correcting CLS from the top: shifts in the header are the most important. A real example: we removed a couple of pixels of shift in the header and gained +0.012 points (1.2%). We also fixed a 600-pixel shift at the bottom of the footer and gained approximately another 1%.

You can ignore blocks with “position: fixed”.

Metrics of page loading

Now let us dive into the first component. It encompasses many characteristics and factors, which mostly depend on the size of the page.

The primary metrics:

  • First Contentful Paint (FCP). The first render, when any text or image appears. A very important metric.
  • Largest Contentful Paint (LCP). Shows when the largest content block appears. A very important metric.
  • Time to Interactive (TTI). The time when a user can start interacting with the page.
  • Total Blocking Time (TBT). The time during which a user cannot interact with the page because the processor is busy “executing” the page content (see the Preamble about execution).
  • Speed Index. A derived value which follows from the above metrics.

“Speed Index” hardly needs separate attention, because it improves as soon as the other indicators improve.

“Total Blocking Time” and “Time to Interactive”

The sooner a page renders, the sooner it becomes available for interaction (scrolling, clicking on interactive elements, animations).

To reduce “Total Blocking Time” and speed up “Time to Interactive”, you need to unload the “main thread”. See “Tips for the optimization of JS execution” above and the next section to succeed 😉

Additionally (in general) about optimization

As already mentioned, all metric values depend on the size of the page and its loaded content. You could think, “I cannot delete this or that element because the client ordered it and it should be there”. The answer is yes and no, as with the chat-button example.

Here is another interesting example. The site scores 96-98 points, mainly because it loads ALL of its JS with a delay, following a “lazy-load” principle: the script tags contain a “data-src” attribute instead of “src”, and at the end of the page an inline JS snippet creates tags with the real “src” (and the “async” attribute) after a certain amount of time (or on scroll, if that happens earlier). Consequently, the 1MB page loads in a couple of seconds, while the remaining 15MB render later.

As a result, a user sees the page instantly, so the metric scores are great. The only downside is that at first you cannot interact with the site, because nothing works yet: sliders, menu, pop-ups, dropdowns, cart all become dynamic only after about 10 seconds (or a few seconds after a scroll; still not instantly). However, what are the chances that a user will click on the sliders or menu immediately?
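The data-src trick described above might look roughly like this (a sketch based on the description, not the actual site’s code; the trigger timing is illustrative):

```javascript
// Replaces every <script data-src="..."> placeholder with a real
// async <script src="..."> tag, activating the deferred JS.
function activateDeferredScripts(doc) {
  doc.querySelectorAll('script[data-src]').forEach((placeholder) => {
    const script = doc.createElement('script');
    script.src = placeholder.getAttribute('data-src');
    script.async = true;
    placeholder.parentNode.replaceChild(script, placeholder);
  });
}

// In a browser: run after a delay or on first scroll, whichever
// comes first (the 3000ms value is illustrative):
// const start = () => activateDeferredScripts(document);
// setTimeout(start, 3000);
// window.addEventListener('scroll', start, { once: true });
```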

You can also decrease the load on the processor and reduce the page size by not loading images that are not visible on screen. The technique is as old as time, so hopefully everything is clear here and we do not need to linger on it.

However, if you did not know about this method, you should learn about it here and here.

Speaking of Magento, version 2.4 already has partial out-of-the-box support: product images on the category page use lazy loading. But to be honest, the catalog alone is not enough; usually there are many large images on the home and product pages as well. For those, you would need to create a “workaround” or use modules (some of which are even free for Magento).
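One common “workaround” is IntersectionObserver-based lazy loading; here is a minimal sketch (the data-src attribute is a common convention, and the observer constructor is injected only to keep the example testable):

```javascript
// Swaps data-src → src when an image scrolls into the viewport.
function lazyLoadImages(doc, ObserverCtor) {
  const observer = new ObserverCtor((entries) => {
    entries.forEach((entry) => {
      if (!entry.isIntersecting) return;
      const img = entry.target;
      img.src = img.dataset.src; // load the real image now
      observer.unobserve(img);   // each image is handled once
    });
  });
  doc.querySelectorAll('img[data-src]').forEach((img) => observer.observe(img));
  return observer;
}

// In a browser: lazyLoadImages(document, IntersectionObserver);
```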

We will not touch upon image formats/sizes, minification, server-side compression (Gzip, Deflate), and other such methods, because if you do not know about them, the testing tools will tell you.

Now let’s review two other important metrics (in addition to CLS): FCP and LCP.

First Contentful Paint (FCP)

As already mentioned, FCP is the time until any content is rendered. It directly depends on the size of <head> and on the server response time.

A browser will not render a page until the <head> section is loaded. Everything is simple here: the smaller the content of the head tag, the sooner the page starts rendering. This is especially relevant for FCP (LCP has its own specifics).

The ideal size of the <head> section is considered to be under 170KB. Thus, the head should contain only the crucial elements; everything else should be moved to the footer. JS files moved there no longer require the defer attribute.

You can move not only JS but also styles and fonts. However, styles and fonts will then be applied to the page only when “their turn” comes: a user will first see blocks and text without styles and fonts, and the CSS/fonts will be applied only after parsing reaches the end of the page. In that case, you will get a very high shift (CLS) score.

Even for that there is a solution: keep in <head> only the styles of the blocks/elements that are visible immediately. Styles of “invisible” blocks (menu/dropdowns/pop-ups/sliders) can be moved to the footer. Meanwhile, for the text you can specify a default font that is very similar to the loaded one. However, in practice, splitting the styles into two parts can be problematic; in the case of Magento, the workload is not justified in terms of time.

Thus, styles and fonts usually stay in <head>, while only JS files are moved to the footer.

Largest Contentful Paint (LCP)

Usually, it is a banner. Note that if a pop-up window is displayed at the end of page load, the analyzer may identify it as the LCP element; that is why pop-up windows negatively affect LCP.

If possible, pop-up windows should be as small as possible. For example, instead of a long text about cookies, use an “accordion”, a button that triggers a pop-up, or a link to a separate page.

Do not use AJAX or “lazy” loading for LCP objects!

You can use base64 as the image source; in that case, the image is available on the page instantly.

Here is the code for detecting LCP candidates (to understand which block should be optimized):

new PerformanceObserver((entryList) => {
  for (const entry of entryList.getEntries()) {
    console.log('LCP candidate:', entry.startTime, entry);
  }
}).observe({ type: 'largest-contentful-paint', buffered: true });

LCP can also be observed in tools with a timeline (WebPageTest).

There used to be one trick: first display a low-quality image, then, after the page loads, swap in the high-quality one. However, it no longer works.

Here is what does work: if the LCP is a banner at the beginning of the page, make it full screen. The system may then recognize it as a background, and a smaller element (for example, the text on the banner or the logo) becomes the LCP block.

Thus: “width: 100vw; height: 100vh”. Everything under the banner should be “position: fixed”.

Personal experience

Our last project, Magento 2.4.

In fact, not everything from this article was utilized during the development of the project. 

“Shifts” were checked during the development stage, so we did not have to correct CLS at all; in general, we tried to write things “right” from the start. Styles and fonts remained in the header.

We also did not implement “lazy load” separately; we just used the default loading=”lazy” attribute (to be honest, its browser support was still weak at the time).

We moved the majority of JS to the footer. Since we did not do it from the start, we had to turn this feature off on the Cart and Checkout pages due to a significant number of errors.

The “defer” attribute was applied to the Instagram widget.

We applied the “trick” with the banner on mobile and improved LCP as a result.


Homepage: 22/78 points

PLP (category): 51/94

PDP (product page): 45/81

Couple of bonus tips

  • Try to avoid inline JS/CSS: it is not cached by the browser, so the “real data” metrics will be worse than they should be.
  • You should always plan before doing anything. Trying to fix the situation after development is pointless: rewriting CSS for the GPU and refactoring JS would take as much time as writing from scratch, or even more (because redoing something is harder than doing it properly from the start), so most likely it will never be done. The exception is refactoring a particular block that appears in many places on a site, and even that is much easier during the development stage (before the block is copied for different places and stores); editing copies means extra time and possible bugs.
  • If you develop JS code that is initially included at the start of a page and only later move it to the end of the page, you will encounter many errors. Set up “JS optimizations” during the first stages of your project; then errors can be solved as they appear, instead of dealing with 99 problems at once without knowing where they came from.
  • After development, the average forecast is +10-30 Google PageSpeed points for 20-30 hours of work (provided that your framework has ready-to-use solutions/modules/plugins).

Thanks for reading our article! Check out our blog to find more interesting information about eCommerce development and industry in general.
