Exciting New Tools for Designers, March 2025

March 10th, 2025 No comments

Spring into action this season with new tools to boost your workflows, facilitate creative thinking, and help you be a more efficient designer. This roundup is packed with new goodies that can help you with daily tasks and more long-term projects. Enjoy! Beatoven Projects that need background music can be a challenge when it comes […]

Categories: Designing, Others Tags:

A CSS-Only Star Rating Component and More! (Part 2)

March 7th, 2025 No comments

In the last article, we created a CSS-only star rating component using the CSS mask and border-image properties, as well as the newly enhanced attr() function. We ended with CSS code that we can easily adjust to create component variations, including a heart rating and volume control.

This second article will study a different approach that gives us more flexibility. Instead of the border-image trick we used in the first article, we will rely on scroll-driven animations!

Here is the same star rating component with the new implementation. And since we’re treading in experimental territory, you’ll want to view this in Chrome 115+ while we wait for Safari and Firefox support:

CodePen Embed Fallback

Do you spot the difference between this and the final demo in the first article? This time, I am updating the color of the stars based on how many of them are selected — something we cannot do using the border-image trick!

I highly recommend you read the first article before jumping into this second part if you missed it, as I will be referring to concepts and techniques that we explored over there.

One more time: At the time of writing, only Chrome 115+ and Edge 115+ fully support the features we will be using in this article, so please use either one of those as you follow along.

Why scroll-driven animations?

You might be wondering why we’re talking about scroll-driven animations at all when the star rating component has nothing to scroll and nothing to animate. It gets even more confusing when you read the MDN explainer for scroll-driven animations:

It allows you to animate property values based on a progression along a scroll-based timeline instead of the default time-based document timeline. This means that you can animate an element by scrolling a scrollable element, rather than just by the passing of time.

But if you keep reading you will see that we have two types of scroll-based timelines: scroll progress timelines and view progress timelines. In our case, we are going to use the second one, the view progress timeline, and here is how MDN describes it:

You progress this timeline based on the change in visibility of an element (known as the subject) inside a scroller. The visibility of the subject inside the scroller is tracked as a percentage of progress — by default, the timeline is at 0% when the subject is first visible at one edge of the scroller, and 100% when it reaches the opposite edge.

You can check out the CSS-Tricks almanac definition for view-timeline-name while you’re at it for another explanation.

Things start to make more sense if we consider the thumb element as the subject and the input element as the scroller. After all, the thumb moves within the input area, so its visibility changes. We can track that movement as a percentage of progress and convert it to a value we can use to style the input element. We are essentially going to implement the equivalent of document.querySelector("input").value in JavaScript but with vanilla CSS!
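For comparison, here is a rough sketch of the JavaScript pattern this technique replaces: reading the input’s value and mirroring it into a --val custom property. The function name and wiring are illustrative, not code from the article.

```javascript
// JavaScript equivalent of what the CSS technique achieves:
// mirror the range input's value into a --val custom property.
function syncRangeToCustomProperty(input, target) {
  const update = () => target.style.setProperty("--val", input.value);
  input.addEventListener("input", update); // keep it in sync on changes
  update(); // initialize with the current value
}
```

With the scroll-driven approach, none of this JavaScript is needed; the thumb’s position drives --val directly.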

The implementation

Now that we have an idea of how this works, let’s see how everything translates into code.

@property --val {
  syntax: "<number>";
  inherits: true;
  initial-value: 0; 
}

input[type="range"] {
  --min: attr(min type(<number>));
  --max: attr(max type(<number>));

  timeline-scope: --val;
  animation: --val linear both;
  animation-timeline: --val;
  animation-range: entry 100% exit 0%;
  overflow: hidden;
}

@keyframes --val {
  0%   { --val: var(--max) }
  100% { --val: var(--min) }
}

input[type="range"]::thumb {
  view-timeline: --val inline;
}

I know, this is a lot of strange syntax! But we will dissect each line and you will see that it’s not all that complex at the end of the day.

The subject and the scroller

We start by defining the subject, i.e. the thumb element, and for this we use the view-timeline shorthand property. From the MDN page, we can read:

The view-timeline CSS shorthand property is used to define a named view progress timeline, which is progressed through based on the change in visibility of an element (known as the subject) inside a scrollable element (scroller). view-timeline is set on the subject.

I think it’s self-explanatory. The view timeline name is --val and the axis is inline since we’re working along the horizontal x-axis.

Next, we define the scroller, i.e. the input element, and for this, we use overflow: hidden (or overflow: auto). This part is the easiest but also the one you will forget the most so let me insist on this: don’t forget to define overflow on the scroller!

I insist on this because your code will appear to work without defining overflow, but the values it produces won’t be correct. The reason is that a scroller will still exist, but the browser chooses it (depending on your page structure and your CSS), and most of the time it’s not the one you want. So let me repeat it one more time: remember the overflow property!

The animation

Next up, we create an animation that animates the --val variable between the input’s min and max values. As we did in the first article, we are using the newly-enhanced attr() function to get those values. This is the “animation” part of the scroll-driven animation: an animation we link to the view timeline we defined on the subject using animation-timeline. And to be able to animate a variable, we register it using @property.

Note the use of timeline-scope, which is another tricky feature that’s easy to overlook. By default, named view timelines are scoped to the element where they are defined and its descendants. In our case, the input is a parent element of the thumb, so it cannot access the named view timeline. To overcome this, we increase the scope using timeline-scope. Again, from MDN:

timeline-scope is given the name of a timeline defined on a descendant element; this causes the scope of the timeline to be increased to the element that timeline-scope is set on and any of its descendants. In other words, that element and any of its descendant elements can now be controlled using that timeline.

Never forget about this! Sometimes everything is correctly defined but nothing is working because you forget about the scope.

There’s something else you might be wondering:

Why are the keyframe values inverted? Why is the min set at 100% and the max at 0%?

To understand this, let’s first take the following example where you can scroll the container horizontally to reveal a red circle inside of it.

CodePen Embed Fallback

Initially, the red circle is hidden on the right side. Once we start scrolling, it appears from the right side, then disappears to the left as you continue scrolling toward the right. We scroll from left to right, but the circle’s actual movement is from right to left.

In our case, we don’t have any scrolling since our subject (the thumb) will not overflow the scroller (the input) but the main logic is the same. The starting point is the right side and the ending point is the left side. In other words, the animation starts when the thumb is on the right side (the input’s max value) and will end when it’s on the left side (the input’s min value).
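In code terms, the inverted keyframes are just a linear interpolation from max down to min as the timeline progresses. A quick sketch of that mapping (an illustrative helper, not part of the component’s code):

```javascript
// Value produced by the inverted keyframes at a given timeline
// progress: progress 0 yields the max, progress 1 yields the min.
function valueAtProgress(progress, min, max) {
  return max + (min - max) * progress;
}
```

At progress 0 (thumb on the right) we get the max; at progress 1 (thumb on the left) we get the min, exactly as the keyframes describe.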

The animation range

The last piece of the puzzle is the following important line of code:

animation-range: entry 100% exit 0%;

By default, the animation starts when the subject starts to enter the scroller from the right and ends when the subject has completely exited the scroller from the left. This is not good because, as we said, the thumb will not overflow the scroller, so it will never reach the start and the end of the animation.

To rectify this we use the animation-range property to make the start of the animation when the subject has completely entered the scroller from the right (entry 100%) and the end of the animation when the subject starts to exit the scroller from the left (exit 0%).

Red circle representing the thumb element displayed at the far right of the container where the animation starts.
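To get a feel for what entry 100% and exit 0% produce, here is a rough model of the clamped timeline’s progress for a thumb that stays fully inside the track. The function and its parameters are made up for illustration; the browser does this bookkeeping for us.

```javascript
// Progress of the view timeline clamped to "entry 100% exit 0%" for a
// thumb of width thumbWidth whose left edge sits at x inside a track
// of width trackWidth: 0 when the thumb touches the right edge of the
// track, 1 when it touches the left edge.
function timelineProgress(x, trackWidth, thumbWidth) {
  const travel = trackWidth - thumbWidth; // distance the thumb can move
  return (travel - x) / travel;
}
```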

To summarize, the thumb element will move within the input’s area, and that movement is used to control the progress of an animation that animates a variable between the input’s min and max attribute values. We have our replacement for document.querySelector("input").value in JavaScript!

What’s going on with all the --val instances everywhere? Is it the same thing each time?

I am deliberately using the same --val everywhere to confuse you a little and push you to try to understand what is going on. We usually use the dashed ident (--) notation to define custom properties (also called CSS variables) that we later call with var(). This is still true but that same notation can be used to name other things as well.

In our examples we have three different things named --val:

  1. The variable that is animated and registered using @property. It contains the selected value and is used to style the input.
  2. The named view timeline defined by view-timeline and used by animation-timeline.
  3. The keyframes named --val and called by animation.

Here is the same code written with different names for more clarity:

@property --val {
  syntax: "<number>";
  inherits: true;
  initial-value: 0; 
}

input[type="range"] {
  --min: attr(min type(<number>));
  --max: attr(max type(<number>));

  timeline-scope: --timeline;
  animation: value_update linear both;
  animation-timeline: --timeline;
  animation-range: entry 100% exit 0%;
  overflow: hidden;
}

@keyframes value_update {
  0%   { --val: var(--max) }
  100% { --val: var(--min) }
}

input[type="range"]::thumb {
  view-timeline: --timeline inline;
}

The star rating component

All that we have done up to now is get the selected value of the input range — which is honestly about 90% of the work we need to do. What remains is some basic styles and code taken from what we made in the first article.

If we omit the code from the previous section and the code from the previous article, here is what we are left with:

input[type="range"] {
  background: 
    linear-gradient(90deg,
      hsl(calc(30 + 4 * var(--val)) 100% 56%) calc(var(--val) * 100% / var(--max)),
      #7b7b7b 0
    );
}
input[type="range"]::thumb {
  opacity: 0;
}

We make the thumb invisible and we define a gradient on the main element to color in the stars. No surprise here, but the gradient uses the same --val variable that contains the selected value to inform how much is colored in.

When, for example, you select three stars, the --val variable will equal 3 and the color stop of the first color will equal 3 * 100% / 5, or 60%, meaning three stars are colored in. That color is also dynamic, as I am using the hsl() function where the first argument (the hue) is a function of --val as well.
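As a sanity check, the two calc() expressions translate into these tiny helpers (hypothetical names, same math as the CSS above):

```javascript
// Color-stop position of the gradient: val * 100% / max.
function stopPercent(val, max) {
  return (val * 100) / max;
}

// Hue passed to hsl(): 30 + 4 * val, shifting from orange toward yellow
// as more stars are selected.
function hue(val) {
  return 30 + 4 * val;
}
```

Selecting three stars out of five gives a 60% color stop and a hue of 42.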

Here is the full demo, which you will want to open in Chrome 115+ at the time I’m writing this:

CodePen Embed Fallback

And guess what? This implementation works with half stars as well without the need to change the CSS. All you have to do is update the input’s attributes to work in half increments. Remember, we’re yanking these values out of HTML into CSS using attr(), which reads the attributes and returns them to us.

<input type="range" min=".5" step=".5" max="5">
CodePen Embed Fallback

That’s it! We have our star rating component that you can easily control by adjusting the attributes.

So, should I use border-image or a scroll-driven animation?

If we look past the browser support factor, I consider this version better than the border-image approach we used in the first article. The border-image version is simpler and does the job pretty well, but it’s limited in what it can do. While our goal is to create a star rating component, it’s good to be able to do more and style an input range however you want.

With scroll-driven animations, we have more flexibility since the idea is to first get the value of the input and then use it to style the element. I know it’s not easy to grasp, but don’t worry: you will run into scroll-driven animations more often in the future, they will become more familiar with time, and this example will soon look easy to you.

Worth noting: the code used to get the value is generic, and you can easily reuse it even if you are not going to style the input itself. Getting the value of the input is independent of styling it.

Here is a demo where I am adding a tooltip to a range slider to show its value:

CodePen Embed Fallback

Many techniques are involved in creating that demo, and one of them is using scroll-driven animations to get the input value and show it inside the tooltip!

Here is another demo using the same technique where different range sliders are controlling different variables on the page.

CodePen Embed Fallback

And why not a wavy range slider?

CodePen Embed Fallback

This one is a bit crazy, but it illustrates how far we can go with styling an input range! So even if your goal is not to create a star rating component, there are a lot of use cases where this technique can be really useful.

Conclusion

I hope you enjoyed this brief two-part series. In addition to a star rating component made with minimal code, we have explored a lot of cool and modern features, including the attr() function, CSS mask, and scroll-driven animations. It’s still early to adopt all of these features in production because of browser support, but it’s a good time to explore them and see what can be done soon using only CSS.

Article series

  1. A CSS-Only Star Rating Component and More! (Part 1)
  2. A CSS-Only Star Rating Component and More! (Part 2)

A CSS-Only Star Rating Component and More! (Part 2) originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

Categories: Designing, Others Tags:

Blast from the Past: Grumpy Cat: From Viral Fame to Cultural Icon

March 7th, 2025 No comments

Grumpy Cat, famous for her permanent scowl, captured the internet’s heart with her relatable and sarcastic meme captions. Her rise to fame turned her into a beloved cultural symbol of frustration and humor that still resonates today.

Categories: Designing, Others Tags:

How To Fix Largest Contentful Paint Issues With Subpart Analysis

March 6th, 2025 No comments

This article is a sponsored by DebugBear

The Largest Contentful Paint (LCP) in Core Web Vitals measures how quickly a website loads from a visitor’s perspective. It looks at how long after opening a page the largest content element becomes visible. If your website is loading slowly, that’s bad for user experience and can also cause your site to rank lower in Google.

When trying to fix LCP issues, it’s not always clear what to focus on. Is the server too slow? Are images too big? Is the content not being displayed? Google has been working to address that recently by introducing LCP subparts, which tell you where page load delays are coming from. They’ve also added this data to the Chrome UX Report, allowing you to see what causes delays for real visitors on your website!

Let’s take a look at what the LCP subparts are, what they mean for your website speed, and how you can measure them.

The Four LCP Subparts

LCP subparts split the Largest Contentful Paint metric into four different components:

  1. Time to First Byte (TTFB): How quickly the server responds to the document request.
  2. Resource Load Delay: Time spent before the LCP image starts to download.
  3. Resource Load Time: Time spent downloading the LCP image.
  4. Element Render Delay: Time before the LCP element is displayed.

The resource timings only apply if the largest page element is an image or background image. For text elements, the Load Delay and Load Time components are always zero.
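Since the four subparts partition the total LCP time, the bookkeeping is easy to sketch in code. This assumes you already have the relevant timestamps in milliseconds relative to navigation start; the function and field names below are illustrative, not part of any tool’s API.

```javascript
// Split an LCP time into its four subparts from raw timestamps
// (TTFB, when the LCP image request started, when its download
// finished, and when the LCP element was painted).
function lcpSubparts({ ttfb, imageRequestStart, imageResponseEnd, lcpTime }) {
  return {
    ttfb,                                           // server response
    loadDelay: imageRequestStart - ttfb,            // before download starts
    loadTime: imageResponseEnd - imageRequestStart, // the download itself
    renderDelay: lcpTime - imageResponseEnd,        // until it is painted
  };
}
```

The four values always sum back to the total LCP time, which is what makes the subparts useful for spotting the biggest contributor.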

How To Measure LCP Subparts

One way to measure how much each component contributes to the LCP score on your website is to use DebugBear’s website speed test. Expand the Largest Contentful Paint metric to see subparts and other details related to your LCP score.

Here, we can see that TTFB and image Load Duration together account for 78% of the overall LCP score. That tells us that these two components are the most impactful places to start optimizing.

What’s happening during each of these stages? A network request waterfall can help us understand what resources are loading through each stage.

The LCP Image Discovery view filters the waterfall visualization to just the resources that are relevant to displaying the Largest Contentful Paint image. In this case, each of the first three stages contains one request, and the final stage finishes quickly with no new resources loaded. But that depends on your specific website and won’t always be the case.

Time To First Byte

The first step to display the largest page element is fetching the document HTML. We recently published an article about how to improve the TTFB metric.

In this example, we can see that creating the server connection doesn’t take all that long. Most of the time is spent waiting for the server to generate the page HTML. So, to improve the TTFB, we need to speed up that process or cache the HTML so we can skip the HTML generation entirely.

Resource Load Delay

The “resource” we want to load is the LCP image. Ideally, we just have an <img> tag near the top of the HTML, and the browser finds it right away and starts loading it.

But sometimes, we get a Load Delay, as is the case here. Instead of loading the image directly, the page uses lazysize.js, an image lazy loading library that only loads the LCP image once it has detected that it will appear in the viewport.

Part of the Load Delay is caused by having to download that JavaScript library. But the browser also needs to complete the page layout and start rendering content before the library will know that the image is in the viewport. After finishing the request, there’s a CPU task (in orange) that leads up to the First Contentful Paint milestone, when the page starts rendering. Only then does the library trigger the LCP image request.

How do we optimize this? First of all, instead of using a lazy loading library, you can use the native loading="lazy" image attribute. That way, loading images no longer depends on first loading JavaScript code.

But more specifically, the LCP image should not be lazily loaded. That way, the browser can start loading it as soon as the HTML code is ready. According to Google, you should aim to eliminate resource load delay entirely.

Resource Load Duration

The Load Duration subpart is probably the most straightforward: you need to download the LCP image before you can display it!

In this example, the image is loaded from the same domain as the HTML. That’s good because the browser doesn’t have to connect to a new server.

Other techniques you can use to reduce load duration:

Element Render Delay

The fourth and final LCP component, Render Delay, is often the most confusing. The resource has loaded, but for some reason, the browser isn’t ready to show it to the user yet!

Luckily, in the example we’ve been looking at so far, the LCP image appears quickly after it’s been loaded. One common reason for render delay is that the LCP element is not an image. In that case, the render delay is caused by render-blocking scripts and stylesheets. The text can only appear after these have loaded and the browser has completed the rendering process.

Another reason you might see render delay is when the website preloads the LCP image. Preloading is a good idea, as it practically eliminates any load delay and ensures the image is loaded early.

However, if the image finishes downloading before the page is ready to render, you’ll see an increase in render delay on the page. And that’s fine! You’ve improved your website speed overall, but after optimizing your image, you’ve uncovered a new bottleneck to focus on.

LCP Subparts In Real User CrUX Data

Looking at the Largest Contentful Paint subparts in lab-based tests can provide a lot of insight into where you can optimize. But all too often, the LCP in the lab doesn’t match what’s happening for real users!

That’s why, in February 2025, Google started including subpart data in the CrUX data report. It’s not (yet?) included in PageSpeed Insights, but you can see those metrics in DebugBear’s “Web Vitals” tab.

One super useful bit of info here is the LCP resource type: it tells you how many visitors saw the LCP element as a text element or an image.

Even for the same page, different visitors will see slightly different content. For example, different elements are visible based on the device size, or some visitors will see a cookie banner while others see the actual page content.

To make the data easier to interpret, Google only reports subpart data for images.

If the LCP element is usually text on the page, then the subparts info won’t be very helpful, as it won’t apply to most of your visitors.

But breaking down a text LCP is relatively easy: everything that’s not part of the TTFB is render delay.
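That breakdown fits in a couple of lines (an illustrative sketch, not a real API):

```javascript
// For a text LCP element only two subparts are non-zero:
// TTFB, plus a render delay covering everything after it.
function textLcpBreakdown(lcpTime, ttfb) {
  return { ttfb, renderDelay: lcpTime - ttfb };
}
```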

Track Subparts On Your Website With Real User Monitoring

Lab data doesn’t always match what real users experience. CrUX data is superficial, only reported for high-traffic pages, and takes at least 4 weeks to fully update after a change has been rolled out.

That’s why a real-user monitoring tool like DebugBear comes in handy when fixing your LCP scores. You can track scores across all pages on your website over time and get dedicated dashboards for each LCP subpart.

You can also review specific visitor experiences, see what the LCP image was for them, inspect a request waterfall, and check LCP subpart timings. Sign up for a free trial.

Conclusion

Having more granular metric data available for the Largest Contentful Paint gives web developers a big leg up when making their website faster.

Including subparts in CrUX provides new insight into how real visitors experience your website and can tell you whether the optimizations you’re considering would really be impactful.

Categories: Others Tags:

How To Fix Largest Contentful Issues With Subpart Analysis

March 6th, 2025 No comments

This article is a sponsored by DebugBear

The Largest Contentful Paint (LCP) in Core Web Vitals measures how quickly a website loads from a visitor’s perspective. It looks at how long after opening a page the largest content element becomes visible. If your website is loading slowly, that’s bad for user experience and can also cause your site to rank lower in Google.

When trying to fix LCP issues, it’s not always clear what to focus on. Is the server too slow? Are images too big? Is the content not being displayed? Google has been working to address that recently by introducing LCP subparts, which tell you where page load delays are coming from. They’ve also added this data to the Chrome UX Report, allowing you to see what causes delays for real visitors on your website!

Let’s take a look at what the LCP subparts are, what they mean for your website speed, and how you can measure them.

The Four LCP Subparts

LCP subparts split the Largest Contentful Paint metric into four different components:

  1. Time to First Byte (TTFB): How quickly the server responds to the document request.
  2. Resource Load Delay: Time spent before the LCP image starts to download.
  3. Resource Load Time: Time spent downloading the LCP image.
  4. Element Render Delay: Time before the LCP element is displayed.

The resource timings only apply if the largest page element is an image or background image. For text elements, the Load Delay and Load Time components are always zero.

How To Measure LCP Subparts

One way to measure how much each component contributes to the LCP score on your website is to use DebugBear’s website speed test. Expand the Largest Contentful Paint metric to see subparts and other details related to your LCP score.

Here, we can see that TTFB and image Load Duration together account for 78% of the overall LCP score. That tells us that these two components are the most impactful places to start optimizing.

What’s happening during each of these stages? A network request waterfall can help us understand what resources are loading through each stage.

The LCP Image Discovery view filters the waterfall visualization to just the resources that are relevant to displaying the Largest Contentful Paint image. In this case, each of the first three stages contains one request, and the final stage finishes quickly with no new resources loaded. But that depends on your specific website and won’t always be the case.

Time To First Byte

The first step to display the largest page element is fetching the document HTML. We recently published an article about how to improve the TTFB metric.

In this example, we can see that creating the server connection doesn’t take all that long. Most of the time is spent waiting for the server to generate the page HTML. So, to improve the TTFB, we need to speed up that process or cache the HTML so we can skip the HTML generation entirely.

Resource Load Delay

The “resource” we want to load is the LCP image. Ideally, we just have an tag near the top of the HTML, and the browser finds it right away and starts loading it.

But sometimes, we get a Load Delay, as is the case here. Instead of loading the image directly, the page uses lazysize.js, an image lazy loading library that only loads the LCP image once it has detected that it will appear in the viewport.

Part of the Load Delay is caused by having to download that JavaScript library. But the browser also needs to complete the page layout and start rendering content before the library will know that the image is in the viewport. After finishing the request, there’s a CPU task (in orange) that leads up to the First Contentful Paint milestone, when the page starts rendering. Only then does the library trigger the LCP image request.

How do we optimize this? First of all, instead of using a lazy loading library, you can use the native loading="lazy" image attribute. That way, loading images no longer depends on first loading JavaScript code.

But more specifically, the LCP image should not be lazily loaded. That way, the browser can start loading it as soon as the HTML code is ready. According to Google, you should aim to eliminate resource load delay entirely.

Resources Load Duration

The Load Duration subpart is probably the most straightforward: you need to download the LCP image before you can display it!

In this example, the image is loaded from the same domain as the HTML. That’s good because the browser doesn’t have to connect to a new server.

Other techniques that reduce load duration include compressing the image, serving it in a modern format like WebP or AVIF, and using a CDN so the file travels a shorter distance to the visitor.

Element Render Delay

The fourth and final LCP component, Render Delay, is often the most confusing. The resource has loaded, but for some reason, the browser isn’t ready to show it to the user yet!

Luckily, in the example we’ve been looking at so far, the LCP image appears quickly after it’s been loaded. One common reason for render delay is that the LCP element is not an image. In that case, the render delay is caused by render-blocking scripts and stylesheets. The text can only appear after these have loaded and the browser has completed the rendering process.

Another reason you might see render delay is when the website preloads the LCP image. Preloading is a good idea, as it practically eliminates any load delay and ensures the image is loaded early.

However, if the image finishes downloading before the page is ready to render, you’ll see an increase in render delay on the page. And that’s fine! You’ve improved your website speed overall, but after optimizing your image, you’ve uncovered a new bottleneck to focus on.
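A preload hint for the LCP image is a one-line addition to the document head. This is a minimal sketch with a hypothetical file name:

```html
<!-- In the <head>: start fetching the LCP image before the parser discovers it -->
<link rel="preload" as="image" href="hero.jpg" fetchpriority="high">
```

The trade-off described above applies: the image may now be ready before the page can render it, shifting time from load delay into render delay.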

LCP Subparts In Real User CrUX Data

Looking at the Largest Contentful Paint subparts in lab-based tests can provide a lot of insight into where you can optimize. But all too often, the LCP in the lab doesn’t match what’s happening for real users!

That’s why, in February 2025, Google started including subpart data in the CrUX data report. It’s not (yet?) included in PageSpeed Insights, but you can see those metrics in DebugBear’s “Web Vitals” tab.

One super useful bit of info here is the LCP resource type: it tells you how many visitors saw the LCP element as a text element or an image.

Even for the same page, different visitors will see slightly different content. For example, different elements are visible based on the device size, or some visitors will see a cookie banner while others see the actual page content.

To make the data easier to interpret, Google only reports subpart data for images.

If the LCP element is usually text on the page, then the subparts info won’t be very helpful, as it won’t apply to most of your visitors.

But breaking down a text LCP is relatively easy: everything that isn't part of the TTFB counts as render delay.
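To make the subpart arithmetic concrete, here is a small illustrative helper; the timestamps are made-up example values in milliseconds, not an official API:

```javascript
// Split an LCP timestamp into its four subparts, given (hypothetical)
// millisecond timestamps for the HTML response and the LCP image request.
function lcpSubparts({ ttfb, imageRequestStart, imageResponseEnd, lcpRenderTime }) {
  return {
    ttfb,                                               // waiting for the first HTML byte
    loadDelay: imageRequestStart - ttfb,                // gap before the image request starts
    loadDuration: imageResponseEnd - imageRequestStart, // downloading the image itself
    renderDelay: lcpRenderTime - imageResponseEnd,      // waiting to paint after download
  };
}

const parts = lcpSubparts({
  ttfb: 400,
  imageRequestStart: 1200,
  imageResponseEnd: 1900,
  lcpRenderTime: 2000,
});
console.log(parts); // { ttfb: 400, loadDelay: 800, loadDuration: 700, renderDelay: 100 }
```

For a text LCP element, the load delay and load duration are simply zero, so everything after the TTFB is render delay.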

Track Subparts On Your Website With Real User Monitoring

Lab data doesn’t always match what real users experience. CrUX data is superficial, only reported for high-traffic pages, and takes at least 4 weeks to fully update after a change has been rolled out.

That’s why a real-user monitoring tool like DebugBear comes in handy when fixing your LCP scores. You can track scores across all pages on your website over time and get dedicated dashboards for each LCP subpart.

You can also review specific visitor experiences, see what the LCP image was for them, inspect a request waterfall, and check LCP subpart timings. Sign up for a free trial.

Conclusion

Having more granular metric data available for the Largest Contentful Paint gives web developers a big leg up when making their website faster.

Including subparts in CrUX provides new insight into how real visitors experience your website and can tell you whether the optimizations you're considering would really be impactful.

Categories: Others Tags:

Let’s Talk About The American Dream

March 6th, 2025 No comments

A few months ago I wrote about what it means to stay gold — to hold on to the best parts of ourselves, our communities, and the American Dream itself. But staying gold isn’t passive. It takes work. It takes action. It takes hard conversations that ask us to confront where we’ve been, where we are, and who we want to be.


That’s why I’m incredibly honored to be joining Alexander Vindman in giving a talk at the historic Cooper Union Great Hall 14 days from now. I greatly admire the way Colonel Vindman was willing to put everything on the line to defend the ideals of democracy and the American Dream.

The American Dream is, at its core, the promise that hard work, fairness, and opportunity can lead to a better future. But in 2025, that promise feels like a question: How can we build on our dream so that it works for everyone?

Alexander and I will explore this in our joint talk through the lens of democracy, community, and economic mobility. We come from very different backgrounds, but we strongly share the belief that everyone’s American Dream is worth fighting for.

Alexander Vindman has lived many lifetimes of standing up for what’s right. He was born in the Soviet Union and immigrated to the U.S. as a child, growing up in Brooklyn before enlisting in the U.S. Army. Over the next 21 years, he served with distinction, earning a Purple Heart for injuries sustained in Iraq and eventually rising to Director of European Affairs for the National Security Council. When asked to choose between looking the other way or upholding the values he swore to protect, he chose correctly. That decision cost him his career but never his integrity. I have a lot to learn about what civic duty truly means from Alex.

I build things on the Internet, like Stack Overflow and Discourse. I write on the internet, on this blog. I’ve spent years thinking about how people interact online, how communities work (or don’t), and how we create digital spaces that encourage fairness, participation, and constructive discourse. Spaces that result in artifacts for the common good, like local parks, where everyone can enjoy them together. Whether you’re running a country or running a forum, the same rules seem to apply: people need clear expectations, fair systems, strong boundaries, and a shared sense of purpose.

This is the part of Stay Gold I couldn’t tell you about, not yet, because I was working so hard to figure it out. How do you make long-term structural change that creates opportunity for everyone? It is an incredibly complex problem. But if we focus our efforts in a particular area, I believe we can change a lot of things in this country. Maybe not everything, but something foundational to the next part of our history as a country: how to move beyond individual generosity and toward systems that create security, dignity, and possibility for all.

I can’t promise easy answers, but what I can promise is an honest, unfiltered conversation about how we move forward, with specifics. Colonel Vindman brings the perspective of someone who embodied American ideals, and I bring the experience of building self-governing digital communities that scale, which turned out to be far more relevant to the future of democracy than I ever would have dreamed possible.

Imagine what we can do if Alex and I work together. Imagine what we could do if we all worked together.

Categories: Others, Programming Tags:

Digg.com is Back: The Reboot We Didn’t Know We Needed

March 5th, 2025 No comments

Digg is making a comeback under its original founder, Kevin Rose, and Reddit co-founder Alexis Ohanian, with a renewed focus on AI-driven content curation and community engagement.

Categories: Designing, Others Tags:

Why Interoperability Testing Is Essential for Multi-Device Ecosystems

March 5th, 2025 No comments

In today's connected world, devices must communicate with each other without interruption. The multi-device landscape, spanning smartphones, wearables, and smart home devices, depends on those devices interacting to offer an immersive user experience. But when the devices come from different manufacturers, interoperability conflicts often arise. That is why interoperability testing is essential: it ensures these devices work together flawlessly.

Significance of interoperability testing

Interoperability testing verifies that different devices, applications, and systems interact in a way that allows hassle-free data exchange and lets tasks run as expected, providing the best experience for end users. Its fundamental objective is to identify and address compatibility issues between devices built by different manufacturers when they operate together in a connected ecosystem.

Why is interoperability testing considered important?

Interoperability testing is key in a multi-device environment for a number of reasons. Let's look at each of them: understanding why interoperability testing matters in a multi-device ecosystem helps manufacturers implement it without fail.

Minimizes Compatibility Problems

As mentioned earlier, compatibility issues can arise when several devices work together. Interoperability testing helps identify and address these issues early, so that devices from different manufacturers work together without conflict.

Exceptional User Experience

Interoperability testing verifies how efficiently devices work together without compatibility problems. In a multi-device environment where every device performs well alongside the others, the result is a smooth, interactive experience for end users.

Better Control of Resources and Time

Because interoperability testing identifies and addresses compatibility issues in advance, it saves manufacturers both time and resources, and it minimizes expensive rework, retesting, and debugging.

Radio Frequency (RF) Testing in Interoperability Testing

Beyond testing how well devices interoperate, it is also important to test the communication protocols those devices use. This is where Radio Frequency (RF) testing comes in. RF testing examines the radio signals devices use to communicate with each other, covering signal quality, signal strength, and interference.

RF testing verifies that devices can communicate effectively in any environment. For example, a wearable may need to talk to a smart home device in a heavily crowded location; RF testing confirms that signal quality and strength are sufficient for flawless communication.

RF testing thus lets manufacturers verify that their devices communicate reliably across different environments and locations, whether out in the field or inside a lab.

A Few Challenges in Interoperability Testing

Interoperability testing is essential, but it comes with several challenges. Knowing the bottlenecks and limitations helps you devise a better plan and adopt best practices when implementing it.

Many Devices from Different Manufacturers

The sheer variety of devices, operating systems, and communication protocols makes the testing even more challenging.

Complex and Time-consuming

When multiple devices are involved, verifying the interactions between them becomes both complex and time-consuming.

Minimal Resources

Manufacturers often have limited resources in terms of time, staff, and budget, which can make interoperability testing difficult.

Some Best Practices for Interoperability Testing

Even though interoperability testing poses challenges, manufacturers can follow these best practices.

Create a complete test plan

The first and foremost step is to devise a plan that covers all possible interactions between the devices: identify test scenarios, define test cases, determine the test environment, prioritize the test cases, create test scripts, execute them, analyze the results, and refine the plan based on what you find.

Reliance on Automated Testing Tools

To a great extent, using automated testing tools helps streamline the testing process and minimize the risk of manual error. Automated testing brings several advantages such as improved accuracy, increased test coverage, reduced costs, faster feedback and increased efficiency. Some of the common automated testing tools include Selenium, TestComplete, Appium, Cucumber and JMeter.
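As a rough sketch of what an automated interoperability check can look like, the following models each device by the protocols it supports and flags incompatible pairs. The device names and protocol lists are hypothetical examples, not a real product catalog:

```javascript
// A minimal interoperability test matrix: each "device" is modeled as an
// object listing the communication protocols it supports.
const devices = [
  { name: "smartwatch", protocols: ["ble"] },
  { name: "smartphone", protocols: ["ble", "wifi"] },
  { name: "smart-hub",  protocols: ["wifi", "zigbee"] },
];

// Two devices can interoperate if they share at least one protocol.
function canCommunicate(a, b) {
  return a.protocols.some((p) => b.protocols.includes(p));
}

// Test every unique pair of devices and collect the incompatible ones.
function runInteropMatrix(devices) {
  const failures = [];
  for (let i = 0; i < devices.length; i++) {
    for (let j = i + 1; j < devices.length; j++) {
      if (!canCommunicate(devices[i], devices[j])) {
        failures.push([devices[i].name, devices[j].name]);
      }
    }
  }
  return failures;
}

console.log(runInteropMatrix(devices)); // [ [ 'smartwatch', 'smart-hub' ] ]
```

Real interoperability suites replace the toy `canCommunicate` check with actual message exchanges between devices, but the pairwise-matrix structure stays the same.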

Collaboration Among Manufacturers

Collaborating with other manufacturers ensures that devices interact and work together to deliver a great end-user experience; it also improves interoperability, reduces costs, and provides a competitive advantage. To make collaboration work, set clear objectives, define responsibilities, create a shared test plan, establish communication channels, and promote a collaborative culture.

Endnote

Interoperability testing is essential in a multi-device environment. By incorporating Radio Frequency testing and supporting collaboration between manufacturers, you can ensure devices work together efficiently, guaranteeing the best performance and user experience while reducing costs. Most importantly, it lets manufacturers identify and fix compatibility issues early, before large numbers of devices are working together in the field.

Featured image by Dave Weatherall on Unsplash

The post Why Interoperability Testing Is Essential for Multi-Device Ecosystems appeared first on noupe.

Categories: Others Tags:

The Worst Branding Blunder: Meta’s Messenger Logo U-Turn

March 5th, 2025 No comments

Meta’s switch back to the blue Messenger logo has sparked confusion, reflecting the company’s ongoing identity crisis. Zuckerberg’s constant flip-flopping only adds to the uncertainty.

Categories: Designing, Others Tags:

Top 9 Web Design Best Practices You Need To Utilize

March 4th, 2025 No comments

If you’re feeling a bit overwhelmed by the endless options and technical jargon, you’re not alone. The good news is that creating a user-friendly and visually appealing website doesn’t have to be complicated.

In this blog, we’ll break down the best practices in web design using everyday language so that you can understand the essentials without the stress. Whether you’re a budding designer, a small business owner, or someone curious about how websites work, we’re here to guide you through the key principles that make a website not just good, but great!

Let’s dive into the simple yet powerful practices that elevate your web presence!

What is web design?

Web design, at its core, is about creating attractive and user-friendly websites and web applications. Think of it like decorating a room, where you not only want it to look nice but also to make sure people can easily move around and find what they need.

So why is web design important?

A well-designed site makes a great first impression, which is vital for building trust with your audience. Good web design helps define your brand’s identity, improves the user experience by making it easy for people to navigate and find what they’re looking for, and boosts visibility through better search engine rankings.

Benefits of having a good web design

Let’s explore why website design is important and how it benefits you in the long run.

Improved user experience

Responsive web design ensures that your site looks great and functions well on all devices, providing a consistent experience for users whether they are on a smartphone, tablet, or desktop.

Increased mobile traffic

With more users browsing the internet on mobile devices, a responsive web design attracts more mobile visitors, boosting your site’s overall traffic.

Budget-friendly

Rather than creating separate mobile and desktop websites, responsive web design allows you to maintain one site, saving you time and money on updates and development.

Faster load times

A responsive website is optimized for performance, often resulting in faster load times. This is crucial since users are likely to leave sites that take too long to load.

Higher SEO rankings

Search engines like Google prefer responsive designs because they provide a better user experience. This can lead to improved search engine rankings.

Easier maintenance

Keeping a single site updated is easier than managing multiple versions. Responsive web design allows for uniform updates across all devices.

Flexibility for future devices

As new devices and screen sizes are introduced, a responsive web design adapts easily, ensuring your website remains functional and visually appealing.

Convert leads

A seamless user experience across all devices can lead to higher conversion rates, whether for sales, sign-ups, or other goals.

9 Web Design Best Practices

Let’s go through the nine best practices in web design that will elevate your own website.

Mobile responsiveness

When it comes to best practices in web design, keeping mobile responsiveness in mind is essential. With so many users browsing on their phones, you want your site to look great and function well on all screen sizes.

A few key tips include using fluid grid layouts for flexible designs, optimizing images for both search engines and different devices, and ensuring your buttons are touch-friendly. Don’t forget to keep things simple and fast-loading to avoid overwhelming users.

Lastly, always test your site on various devices to catch any issues. Following these best practices will make your site more user-friendly and boost its search engine ranking!
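A minimal sketch of a fluid grid and a touch-friendly button, assuming class names of your own choosing:

```css
/* A fluid grid: columns resize and wrap to fit any screen width */
.gallery {
  display: grid;
  grid-template-columns: repeat(auto-fit, minmax(250px, 1fr));
  gap: 1rem;
}

/* Touch-friendly buttons: a comfortable tap target on small screens */
.button {
  min-height: 44px;
  padding: 0.75rem 1.5rem;
}
```

The `auto-fit`/`minmax()` pattern means the layout adapts without any device-specific media queries.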

Intuitive navigation

You want your users to find what they’re looking for without any hassle. That means your menu should be clear and straightforward: think about how you’d naturally look for information, use familiar terms, and organize content in a way that makes sense.

For example, group related pages together and keep the most important ones easily accessible. Plus, adding a search bar can help users who want to find something specific quickly. It’s all about guiding your visitors smoothly through your site. It makes for a better experience and keeps them coming back!

Monitor site speed

When you’re diving into web design best practices, you should definitely keep an eye on site speed. People tend to be impatient online: if a page takes too long to load, they’ll likely bounce and look for something else.

A fast-loading site not only enhances user experience but also boosts your search engine rankings. To monitor your site’s speed, consider using tools like Google PageSpeed Insights or GTmetrix. These tools give you insights into what’s slowing you down and offer suggestions for improvement.

So, keeping your site speedy is key to keeping visitors happy and engaged!

Use visual elements

Concerning best practices for web design, using visual elements effectively can make a world of difference! Think about how colors, images, and typography work together to tell a story.

For example, imagine a travel website. By using vibrant images of tropical beaches and adventure activities, you capture the excitement visitors are looking for.

Pair those visuals with a clean, easy-to-read font and a color palette that evokes relaxation, maybe some soft blues and greens. This combination not only draws in the user but also enhances their experience, making navigation intuitive and enjoyable.

Optimize buttons and calls-to-action

Optimizing your buttons and calls-to-action (CTAs) is key to enhancing user experience and boosting conversions. Make sure your buttons are eye-catching and easy to find. Use clear and compelling text that tells users exactly what to do next: think “Get Started” instead of just “Submit.”

Say you’re running an e-commerce site: instead of “Buy Now,” you could use “Grab Your Discount!” to spark interest. Colors matter, too; vibrant colors attract more attention, so consider a bold color for your CTA button that contrasts with the background. That way, users can’t help but click!

Utilize white space

One of the best practices in web design you can implement is the effective use of white space. Think of white space as the breathing room for your content. It helps to declutter your pages, making them feel more open and inviting.

By strategically placing white space around text, images, and buttons, you guide users on where to look and what to focus on. This not only enhances readability but also improves overall user experience.

So, don’t be shy about leaving some areas of your design empty; it can elevate your website from good to great!

Credibility

When practicing web design, establishing credibility is key. You want your designs not only to look good but also to engage and retain users. Start by prioritizing user experience: ensure your site is easy to navigate and mobile-friendly.

Incorporate clear calls to action and provide valuable content that reflects your audience’s needs. By following web design best practices and showing attention to detail in your design, you’ll build trust with users, making them more likely to return to your site and recommend it to others.

Accessibility

When it comes to website design best practices, accessibility is a big deal!

It’s all about making your website usable for everyone, including folks with disabilities. This means using clear language, ensuring your site is navigable with a keyboard, and providing text alternatives for images.

Color contrast is also essential, so make sure your text stands out against the background. By following these simple guidelines, you can create a space where all users feel welcome and can easily interact with your content. It’s not just good practice; it’s the right thing to do!
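For instance, text alternatives and keyboard-reachable controls can be as simple as the following (the content shown is hypothetical):

```html
<!-- Text alternative for an informative image -->
<img src="chart.png" alt="Sales grew 40% between January and June">

<!-- A labeled form control, reachable and operable with the keyboard -->
<label for="email">Email address</label>
<input id="email" type="email" autocomplete="email">
```

Associating the label with the input via `for`/`id` means screen readers announce it and clicking the label focuses the field.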

Consistent branding

In website design practice, consistent branding is super important! It helps create a great experience for your visitors, making your website instantly recognizable. Think about it: when all your colors, fonts, and styles align with your brand identity, it builds trust and strengthens what your brand stands for.

Moreover, it also makes navigation smoother since users can easily recognize calls to action and various sections of your site. So, whether it’s your logo, color palette, or even the tone of your content, keeping everything in sync will boost your brand’s impact online!

Implement website design practices and create a stunning website!

By utilizing the best practices in web design, you can truly enhance the user experience while creating a captivating online presence. Remember, it’s all about simplicity, responsive design, and intuitive navigation. These elements not only make your site visually appealing but also accessible and user-friendly.

So whether you’re kicking off your brand or rebuilding an existing site, focusing on these principles is key to making a small business website that resonates with your audience.

Start implementing these strategies today, and watch your online presence thrive!

Featured image by Domenico Loia on Unsplash

The post Top 9 Web Design Best Practices You Need To Utilize appeared first on noupe.

Categories: Others Tags: