Enhancement: Page speed metrics in preview #33578

Open
adamsilverstein opened this issue Jul 20, 2021 · 8 comments
Labels
Needs Design Feedback · Needs Technical Feedback · [Type] Enhancement

Comments

@adamsilverstein
Member

adamsilverstein commented Jul 20, 2021

What problem does this address?

When publishing or updating a post, users check the preview screen to see how their page will look. It might help users if they could also get a sense of how their page would perform.

What is your proposed solution?

  • when users preview a page, send an API request to Lighthouse (or a similar API, e.g. PageSpeed Insights) with the preview URL - see the sketch after this list
  • unpublished posts could be previewed with a temporary token, or left off
  • visually display a simple score for the page, perhaps linking to the full report
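A minimal sketch of what that request could look like, assuming the PageSpeed Insights v5 endpoint is used and the preview URL is publicly reachable; `previewUrl` and `apiKey` are placeholders and error handling is minimal:

```ts
// Sketch: query the PageSpeed Insights API for a publicly reachable preview URL
// and extract the overall Lighthouse performance score.
const PSI_ENDPOINT = 'https://www.googleapis.com/pagespeedonline/v5/runPagespeed';

async function fetchPerformanceScore( previewUrl: string, apiKey: string ): Promise< number > {
	const params = new URLSearchParams( {
		url: previewUrl,
		category: 'performance',
		strategy: 'mobile',
		key: apiKey,
	} );
	const response = await fetch( `${ PSI_ENDPOINT }?${ params }` );
	if ( ! response.ok ) {
		throw new Error( `PSI request failed: ${ response.status }` );
	}
	const data = await response.json();
	// Lighthouse reports the category score as 0–1; scale it to the familiar 0–100.
	return Math.round( data.lighthouseResult.categories.performance.score * 100 );
}
```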

Questions

Does this type of feature belong in a plugin? I would love to hear from the project maintainers whether they think this type of feature can be built into Gutenberg directly or is better served by plugins. I believe existing filters would provide everything a plugin would need to add such a feature.

@adamsilverstein added the [Type] Enhancement and Needs Design Feedback labels on Jul 20, 2021
@gziolo added the Needs Technical Feedback label on Jul 21, 2021
@felixarntz
Member

I love this idea - highlighting performance and potential issues as part of the content creation flow makes sense.

unpublished posts could be previewed with a temporary token, or left off

This is a great consideration - I think it would be vital to include unpublished posts, since highlighting performance problems may be most impactful when done before publishing. We could go down the route of a temporary token (like Public Post Preview), but I'd be worried about all posts having a public version around even when not published - for 99% of posts it's probably fine to have a public version that is only available through a temporary token, but there is probably some sensitive content out there, or sites for which this would be a dealbreaker.

For the above reason, maybe it would be more appropriate to use Lighthouse client-side instead of through e.g. the PageSpeed Insights API, which requires a public URL?

@westonruter
Member

For the above reason, maybe it would be more appropriate to use Lighthouse client-side instead of through e.g. PageSpeed Insights API which requires a public URL?

Programmatic access doesn't seem to involve running in the browser, does it? Since it's not feasible to expect Chrome to be installable as an executable on the server, any such analysis would have to be performed in the user's own browser, right?

@aristath
Member

Using lighthouse client-side would be problematic because well... lighthouse is not available by default in all browsers. Perhaps it would be possible to use the PerformanceNavigationTiming API? It's supported in all modern browsers so it should be possible to use it for some basic performance measurements...
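For reference, a minimal sketch of the kind of basic measurement that API exposes; the particular fields shown here are just an illustrative assumption about what "basic performance measurements" might include:

```ts
// Sketch: read basic load timings from the PerformanceNavigationTiming entry.
// All values are milliseconds relative to the navigation start (timeOrigin).
const [ nav ] = performance.getEntriesByType( 'navigation' ) as PerformanceNavigationTiming[];

if ( nav ) {
	console.table( {
		'TTFB (responseStart)': nav.responseStart,
		'DOM content loaded': nav.domContentLoadedEventEnd,
		'Load event end': nav.loadEventEnd,
		'Transfer size (bytes)': nav.transferSize,
	} );
}
```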

@westonruter
Member

Yes. Certain aspects of Lighthouse could be implemented client-side. For example, PerformanceObserver can be used to determine some of the Core Web Vitals, such as LCP (via largest-contentful-paint entries) and the layout shifts behind CLS (via layout-shift entries).

Calculating FID may not be practical, however, since it requires user interaction and is best measured in the field. Instead it could report Total Blocking Time (TBT), since it is lab-measurable (although I didn't immediately find a PerformanceObserver example).

Showing these metrics could be a nice first step. It could be done by loading up a preview of the post in an iframe and obtaining the CWV metrics to display in the pre-publish panel.
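A rough sketch of that flow, assuming the observers run inside the iframed preview and report back to the editor frame via postMessage; the message shape and the fixed flush timeout are assumptions:

```ts
// Sketch (runs inside the iframed preview): observe LCP and cumulative layout
// shift, then report them to the editor frame via postMessage.
let cls = 0;
let lcp = 0;

new PerformanceObserver( ( list ) => {
	const entries = list.getEntries();
	// The most recent entry is the current LCP candidate.
	lcp = entries[ entries.length - 1 ].startTime;
} ).observe( { type: 'largest-contentful-paint', buffered: true } );

new PerformanceObserver( ( list ) => {
	for ( const entry of list.getEntries() as any[] ) {
		// Ignore shifts caused by recent user input, as CLS does.
		if ( ! entry.hadRecentInput ) {
			cls += entry.value;
		}
	}
} ).observe( { type: 'layout-shift', buffered: true } );

// Report once the page has settled; a fixed timeout keeps the sketch simple.
window.addEventListener( 'load', () => {
	setTimeout( () => {
		window.parent.postMessage( { type: 'preview-cwv', lcp, cls }, '*' );
	}, 3000 );
} );
```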

Nevertheless, metrics alone would not be super helpful since they wouldn't give users any way to act on the results to improve their scores. In the context of Gutenberg, I think this would depend on correlating which blocks in the content are negatively impacting CWV, and then directing the user to those blocks so they can consider using something different if possible.

@westonruter
Member

I will note that this is an active area of research for the AMP plugin team. While up until now the AMP plugin has been attributing AMP validation errors to blocks in the editor, we are expanding to more general PX analyses given that the focus on “AMP validity” is lessening with the development of Bento.

@schlessera
Member

When using PerformanceObserver client-side, the following limitations should be kept in mind:

  • the audit depends on your local machine's resources and network stack (so someone from India has completely different metrics than someone from the US, depending on where the server is)
  • the audit is done on a logged-in user, which is different than what an anonymous visitor would get
  • the audit measures the entire Chrome process & context, which means it measures not only the page, but all the Chrome extensions and all the Chrome background processes at the same time

It is therefore debatable how reliable the extracted measurements actually are.

Can an iframe be configured in such a way that it counters some of these limitations?

@westonruter
Member

the audit is done on a logged-in user, which is different than what an anonymous visitor would get

For one thing, the iframed page could be loaded with a query parameter to nullify the logged-in user. That will prevent the admin bar from being displayed and will make it look like a normal user accessing the page.

the audit measures the entire Chrome process & context, which means it measures not only the page, but all the Chrome extensions and all the Chrome background processes at the same time

True, but this could actually be a good thing. Visitors will also have Chrome extensions and background processes running, so perhaps PerformanceObserver would better reflect what a real user experiences, since visitors rarely have just a single Chrome tab open loading only that one website.

the audit depends on your local machine's resources and network stack (so someone from India has completely different metrics than someone from the US, depending on where the server is)

This one is tricky. Not only would the network connection not be throttled during the audit, but the cache would also most likely be primed. The only way I can think of to simulate accessing the page as a first-time visitor on a poor network would be to use a service worker to intercept all requests. Another possibility would be to inject random numbers into the URLs for all page assets. But this may be overkill and perhaps insufficient.
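For illustration, the service-worker idea could be as small as this sketch; it bypasses the HTTP cache for every request made from the preview, but note it cannot throttle the network:

```ts
// Sketch of the service-worker idea: intercept every request from the preview
// and bypass the HTTP cache, so the page loads closer to a first-visit state.
self.addEventListener( 'fetch', ( event: any ) => {
	event.respondWith(
		fetch( event.request, { cache: 'reload' } )
	);
} );
```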

For any CWV information being surfaced, I think instead of following the thresholds determined by Google, we could instead consider more of a pass/fail scheme or error/warning/info.

For example, PerformanceObserver will report layout shifts of elements on a page regardless of the connection speed or cache state. For first time visitors, the layout shifts will occur over a longer period of time, while for returning visitors they will be shorter (due to caches). In both cases, a layout shift will happen, even if for the returning visitor the layout shift may be less perceptible because it happens right after the page loads. Nevertheless, we can still capture the element that had a layout shift and depending on how much shift there is, mark the element as being either a warning or an error.
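A sketch of that attribution, using the `sources` field of layout-shift entries; the 0.05/0.1 severity thresholds here are illustrative assumptions, not the official CLS thresholds:

```ts
// Sketch: attribute layout shifts to concrete elements and bucket them by severity.
new PerformanceObserver( ( list ) => {
	for ( const entry of list.getEntries() as any[] ) {
		if ( entry.hadRecentInput ) {
			continue;
		}
		// Each source points at a DOM node that moved during this shift.
		for ( const source of entry.sources ?? [] ) {
			const severity =
				entry.value >= 0.1 ? 'error' : entry.value >= 0.05 ? 'warning' : 'info';
			console.log( severity, 'layout shift of', entry.value, 'caused by', source.node );
		}
	}
} ).observe( { type: 'layout-shift', buffered: true } );
```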

Analyzing LCP and TBT is more difficult due to the user's primed cache. For them, instead of using PerformanceObserver it may be better to do a DOM analysis to check for red flags. For example, if a block causes a script to be printed which doesn't have async/defer, this could be a warning related to TBT; if a block has an image which lacks responsive sizes, this could be a warning for LCP.
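A sketch of such a DOM analysis over the previewed document, using the two red flags mentioned above; the selectors are simplifying assumptions (for instance, module scripts are deferred by default and would show up as false positives here):

```ts
// Sketch: scan the previewed document for simple red flags instead of relying
// on timing data skewed by a primed cache.
function findRedFlags( doc: Document ) {
	const flags: { node: Element; message: string }[] = [];

	// Blocking scripts (no async/defer) hint at Total Blocking Time problems.
	doc.querySelectorAll( 'script[src]:not([async]):not([defer])' ).forEach( ( script ) => {
		flags.push( { node: script, message: 'Render-blocking script (consider async/defer).' } );
	} );

	// Images without responsive srcset/sizes can hurt LCP.
	doc.querySelectorAll( 'img:not([srcset])' ).forEach( ( img ) => {
		flags.push( { node: img, message: 'Image lacks responsive srcset/sizes.' } );
	} );

	return flags;
}
```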

@dainemawer

It could be worth using https://www.npmjs.com/package/web-vitals - it already has a built-in API for sending data to an endpoint or dashboard. I think all the points above make a ton of sense and are valid, but maybe we're a bit skewed on context.
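For illustration, a sketch using that package's v2-era API (newer releases rename getCLS/getLCP to onCLS/onLCP); the REST endpoint URL is a placeholder:

```ts
// Sketch: collect Core Web Vitals with the web-vitals package and beacon them
// to a (hypothetical) endpoint for display in a dashboard.
import { getCLS, getLCP, getTTFB } from 'web-vitals';

function report( metric: { name: string; value: number } ) {
	// sendBeacon survives page unloads, which a plain fetch() may not.
	navigator.sendBeacon( '/wp-json/example/v1/vitals', JSON.stringify( metric ) );
}

getCLS( report );
getLCP( report );
getTTFB( report );
```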

As someone mentioned, the resources available to your machine, your bandwidth, and so on can have a serious effect on scoring. Ideally, Lighthouse should be used to diagnose performance pitfalls, not to measure metrics.

Perhaps it's better to capture vitals data here and place it in the context of CrUX data? WebPageTest and PSI have started providing signals that are relative to CrUX data, and it's far more useful (and easier to digest). At the end of the day, that's what's going to set you apart, right, when your page speed signals are competing with sites around the world?

It's also important that, in whatever shape or form this data is presented, it is communicated in a manner that does not diverge from the guiding principles of Web Vitals:

"Site owners should not have to be performance gurus in order to understand the quality of experience they are delivering to their users. The Web Vitals initiative aims to simplify the landscape, and help sites focus on the metrics that matter most, the Core Web Vitals."
