Different Ways to Get CSS Gradient Shadows
It’s a question I hear asked quite often: Is it possible to create shadows from gradients instead of solid colors? There is no specific CSS property that does this (believe me, I’ve looked) and any blog post you find about it is basically a lot of CSS tricks to approximate a gradient. We’ll actually cover some of those as we go.
But first… another article about gradient shadows? Really?
Yes, this is yet another post on the topic, but it is different. Together, we’re going to push the limits to get a solution that covers something I haven’t seen anywhere else: transparency. Most of the tricks work if the element has a non-transparent background, but what if we have a transparent background? We will explore this case here!
Before we start, let me introduce my gradient shadows generator. All you have to do is to adjust the configuration, and get the code. But follow along because I’m going to help you understand all the logic behind the generated code.
Non-transparent solution
Let’s start with the solution that’ll work for 80% of cases. The most typical case: you are using an element with a background, and you need to add a gradient shadow to it. No transparency issues to consider there.
The solution is to rely on a pseudo-element where the gradient is defined. You place it behind the actual element and apply a blur filter to it.
.box {
position: relative;
}
.box::before {
content: "";
position: absolute;
inset: -5px; /* control the spread */
transform: translate(10px, 8px); /* control the offsets */
z-index: -1; /* place the element behind */
background: /* your gradient here */;
filter: blur(10px); /* control the blur */
}
It looks like a lot of code, and that’s because it is. Here’s how we could have done it with box-shadow if we were using a solid color rather than a gradient:
box-shadow: 10px 8px 10px 5px orange;
That should give you a good idea of what the values in the first snippet are doing. We have X and Y offsets, the blur radius, and the spread distance. Note that we need a negative value for the spread distance that comes from the inset property.
Here’s a demo showing the gradient shadow next to a classic box-shadow:
If you look closely you will notice that both shadows are a little different, especially the blur part. It’s not a surprise because I am pretty sure the filter property’s algorithm works differently than the one for box-shadow. That’s not a big deal since the result is, in the end, quite similar.
This solution is good, but still has a few drawbacks related to the z-index: -1 declaration. Yes, there is “stacking context” happening there!
I applied a transform to the main element, and boom! The shadow is no longer below the element. This is not a bug but the logical result of a stacking context. Don’t worry, I will not start a boring explanation about stacking context (I already did that in a Stack Overflow thread), but I’ll still show you how to work around it.
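For reference, a minimal sketch of the combination that triggers the issue (any transform on the main element will do):
.box {
  position: relative;
  transform: translate(0); /* creates a stacking context on .box */
}
.box::before {
  /* ...same shadow code as before... */
  z-index: -1; /* now resolves inside .box's stacking context,
                  so it can no longer sit behind the element */
}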
The first solution that I recommend is to use a 3D transform:
.box {
position: relative;
transform-style: preserve-3d;
}
.box::before {
content: "";
position: absolute;
inset: -5px;
transform: translate3d(10px, 8px, -1px); /* (X, Y, Z) */
background: /* .. */;
filter: blur(10px);
}
Instead of using z-index: -1, we will use a negative translation along the Z-axis. We will put everything inside translate3d(). Don’t forget to use transform-style: preserve-3d on the main element; otherwise, the 3D transform won’t take effect.
As far as I know, there is no side effect to this solution… but maybe you see one. If that’s the case, share it in the comment section, and let’s try to find a fix for it!
If for some reason you are unable to use a 3D transform, the other solution is to rely on two pseudo-elements — ::before and ::after. One creates the gradient shadow, and the other reproduces the main background (and other styles you might need). That way, we can easily control the stacking order of both pseudo-elements.
.box {
position: relative;
z-index: 0; /* We force a stacking context */
}
/* Creates the shadow */
.box::before {
content: "";
position: absolute;
z-index: -2;
inset: -5px;
transform: translate(10px, 8px);
background: /* .. */;
filter: blur(10px);
}
/* Reproduces the main element styles */
.box::after {
content: "";
position: absolute;
z-index: -1;
inset: 0;
/* Inherit all the decorations defined on the main element */
background: inherit;
border: inherit;
box-shadow: inherit;
}
It’s important to note that we are forcing the main element to create a stacking context by declaring z-index: 0, or any other property that does the same, on it. Also, don’t forget that pseudo-elements consider the padding box of the main element as a reference. So, if the main element has a border, you need to take that into account when defining the pseudo-element styles. In the demo, I use inset: -2px on ::after to account for the border defined on the main element.
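A sketch of that adjustment, assuming a 2px border on the main element:
.box {
  border: 2px solid;
}
.box::after {
  inset: -2px; /* extend past the padding box to also cover the 2px border */
}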
As I said, this solution is probably good enough in a majority of cases where you want a gradient shadow, as long as you don’t need to support transparency. But we are here for the challenge and to push the limits, so even if you don’t need what is coming next, stay with me. You will probably learn new CSS tricks that you can use elsewhere.
Transparent solution
Let’s pick up where we left off on the 3D transform and remove the background from the main element. I will start with a shadow that has both offsets and spread distance equal to 0.
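To keep things concrete, here is a minimal sketch of that starting point: a transparent main element with the blurred gradient on a pseudo-element pushed back using the 3D trick.
.box {
  position: relative;
  transform-style: preserve-3d;
  /* no background: the element is transparent */
}
.box::before {
  content: "";
  position: absolute;
  inset: 0; /* offsets and spread are both 0 for now */
  transform: translateZ(-1px); /* keep the shadow behind the element */
  background: /* your gradient here */;
  filter: blur(10px);
}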
The idea is to find a way to cut or hide everything inside the area of the element (inside the green border) while keeping what is outside. We are going to use clip-path for that. But you might wonder how clip-path can make a cut inside an element.
Indeed, there’s no way to do that, but we can simulate it using a particular polygon pattern:
clip-path: polygon(-100vmax -100vmax, 100vmax -100vmax, 100vmax 100vmax, -100vmax 100vmax, -100vmax -100vmax, 0 0, 0 100%, 100% 100%, 100% 0, 0 0);
Tada! We have a gradient shadow that supports transparency. All we did was add a clip-path to the previous code. Here is a figure to illustrate the polygon part.
The blue area is the visible part after applying the clip-path. I am only using the blue color to illustrate the concept; in reality, we will only see the shadow inside that area. As you can see, we have four points defined with a big value (B). My big value is 100vmax, but it can be any big value you want. The idea is to ensure we have enough space for the shadow. We also have four points that are the corners of the pseudo-element.
The arrows illustrate the path that defines the polygon. We start from (-B, -B) and travel around until we reach (0, 0). In total, we need 10 points, not eight, because two points are repeated in the path ((-B, -B) and (0, 0)).
There’s still one more thing left for us to do, and it’s to account for the spread distance and the offsets. The only reason the demo above works is because it is a particular case where the offsets and spread distance are equal to 0.
Let’s define the spread and see what happens. Remember that we use inset with a negative value to do this (a quick sketch, where --s is the spread distance):
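.box::before {
  inset: calc(-1 * var(--s)); /* a negative inset acts as the spread distance */
}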
The pseudo-element is now bigger than the main element, so the clip-path cuts more than we need it to. Remember, we always need to cut the part inside the main element (the area inside the green border of the example). We need to adjust the position of the four points inside of clip-path.
.box {
--s: 10px; /* the spread */
position: relative;
}
.box::before {
inset: calc(-1 * var(--s));
clip-path: polygon(
-100vmax -100vmax,
100vmax -100vmax,
100vmax 100vmax,
-100vmax 100vmax,
-100vmax -100vmax,
calc(0px + var(--s)) calc(0px + var(--s)),
calc(0px + var(--s)) calc(100% - var(--s)),
calc(100% - var(--s)) calc(100% - var(--s)),
calc(100% - var(--s)) calc(0px + var(--s)),
calc(0px + var(--s)) calc(0px + var(--s))
);
}
We’ve defined a CSS variable, --s, for the spread distance and updated the polygon points. I didn’t touch the points where I am using the big value; I only updated the points that define the corners of the pseudo-element. I increase all the zero values by --s and decrease the 100% values by --s.
It’s the same logic with the offsets. When we translate the pseudo-element, the shadow is out of alignment, and we need to rectify the polygon again and move the points in the opposite direction.
.box {
--s: 10px; /* the spread */
--x: 10px; /* X offset */
--y: 8px; /* Y offset */
position: relative;
}
.box::before {
inset: calc(-1 * var(--s));
transform: translate3d(var(--x), var(--y), -1px);
clip-path: polygon(
-100vmax -100vmax,
100vmax -100vmax,
100vmax 100vmax,
-100vmax 100vmax,
-100vmax -100vmax,
calc(0px + var(--s) - var(--x)) calc(0px + var(--s) - var(--y)),
calc(0px + var(--s) - var(--x)) calc(100% - var(--s) - var(--y)),
calc(100% - var(--s) - var(--x)) calc(100% - var(--s) - var(--y)),
calc(100% - var(--s) - var(--x)) calc(0px + var(--s) - var(--y)),
calc(0px + var(--s) - var(--x)) calc(0px + var(--s) - var(--y))
);
}
There are two more variables for the offsets: --x and --y. We use them inside of transform, and we also update the clip-path values. We still don’t touch the polygon points with big values, but we offset all the others — we subtract --x from the X coordinates and --y from the Y coordinates.
Now all we have to do is update a few variables to control the gradient shadow. And while we are at it, let’s also make the blur radius a variable as well. Here’s a sketch that pulls everything together (the clip-path is the same 10-point polygon from the previous snippet):
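.box {
  --s: 10px;    /* spread distance */
  --x: 10px;    /* X offset */
  --y: 8px;     /* Y offset */
  --blur: 10px; /* blur radius */
  position: relative;
  transform-style: preserve-3d;
}
.box::before {
  content: "";
  position: absolute;
  inset: calc(-1 * var(--s));
  transform: translate3d(var(--x), var(--y), -1px);
  background: /* your gradient here */;
  filter: blur(var(--blur));
  clip-path: polygon(/* the same 10 points as above */);
}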
Do we still need the 3D transform trick?
It all depends on the border. Don’t forget that the reference for a pseudo-element is the padding box, so if you apply a border to your main element, you will have an overlap. You either keep the 3D transform trick or update the inset value to account for the border.
Here is the previous demo with an updated inset value in place of the 3D transform. The key change looks like this, assuming the main element’s border width is known (--bw is a hypothetical variable):
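.box {
  --bw: 2px; /* the main element's border width */
  border: var(--bw) solid;
  position: relative; /* no transform-style: preserve-3d needed anymore */
}
.box::before {
  z-index: -1; /* back to z-index since we dropped the 3D transform */
  inset: calc(-1 * var(--s) - var(--bw)); /* the spread now starts from the border box */
  transform: translate(var(--x), var(--y));
  /* the polygon's inner points need the same --bw adjustment as --s */
}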
I’d say this is a more suitable way to go because the spread distance will be more accurate, as it starts from the border-box instead of the padding-box. But you will need to adjust the inset value according to the main element’s border. Sometimes the border of the element is unknown, and you have to use the previous solution.
With the earlier non-transparent solution, it’s possible you will face a stacking context issue. And with the transparent solution, it’s possible you face a border issue instead. Now you have options and ways to work around those issues. The 3D transform trick is my favorite solution because it fixes all the issues (the online generator will consider it as well).
Adding a border radius
If you try adding border-radius to the element when using the non-transparent solution we started with, it is a fairly trivial task. All you need to do is inherit the same value from the main element, and you are done.
Even if you don’t have a border radius, it’s a good idea to define border-radius: inherit. That accounts for any potential border-radius you might want to add later or a border radius that comes from somewhere else.
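A sketch of that one-liner on the pseudo-elements:
.box::before,
.box::after {
  border-radius: inherit; /* follow whatever radius .box has, now or later */
}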
It’s a different story when dealing with the transparent solution. Unfortunately, it means finding another approach because clip-path cannot deal with curvatures. That means we won’t be able to cut the area inside the main element.
We will introduce the mask property to the mix.
This part was very tedious, and I struggled to find a general solution that doesn’t rely on magic numbers. I ended up with a very complex solution that uses only one pseudo-element, but the code was a lump of spaghetti that covers only a few particular cases. I don’t think it is worth exploring that route.
I decided to insert an extra element for the sake of simpler code. Here’s the markup:
<div class="box">
<sh></sh>
</div>
I am using a custom element, <sh>, to avoid any potential conflict with external CSS. I could have used a <div>, but a custom element is far less likely to be targeted by existing styles.
The first step is to position the element and purposely create an overflow:
.box {
--r: 50px;
position: relative;
border-radius: var(--r);
}
.box sh {
position: absolute;
inset: -150px;
border: 150px solid #0000;
border-radius: calc(150px + var(--r));
}
The code may look a bit strange, but we’ll get to the logic behind it as we go. Next, we create the gradient shadow using a pseudo-element of <sh>.
.box {
--r: 50px;
position: relative;
border-radius: var(--r);
transform-style: preserve-3d;
}
.box sh {
position: absolute;
inset: -150px;
border: 150px solid #0000;
border-radius: calc(150px + var(--r));
transform: translateZ(-1px);
}
.box sh::before {
content: "";
position: absolute;
inset: -5px;
border-radius: var(--r);
background: /* Your gradient */;
filter: blur(10px);
transform: translate(10px,8px);
}
As you can see, the pseudo-element uses the same code as all the previous examples. The only difference is the 3D transform defined on the <sh> element instead of the pseudo-element. For the moment, we have a gradient shadow without the transparency feature:
Note that the area of the <sh> element is defined with the black outline. Why am I doing this? Because that way, I am able to apply a mask on it to hide the part inside the green area and keep the overflowing part where we need to see the shadow.
I know it’s a bit tricky, but unlike clip-path, the mask property doesn’t account for the area outside an element to show and hide things. That’s why I was obligated to introduce the extra element — to simulate the “outside” area.
Also, note that I am using a combination of border and inset to define that area. This allows me to keep the padding-box of that extra element the same as the main element so that the pseudo-element won’t need additional calculations.
Another useful thing we get from using an extra element is that the <sh> element is fixed, and only the pseudo-element is moving (using translate). This will allow me to easily define the mask, which is the last step of this trick.
mask:
linear-gradient(#000 0 0) content-box,
linear-gradient(#000 0 0);
mask-composite: exclude;
It’s done! We have our gradient shadow, and it supports border-radius! You probably expected a complex mask value with oodles of gradients, but no! We only need two simple gradients and a mask-composite to complete the magic.
Let’s isolate the <sh> element to understand what is happening there:
.box sh {
position: absolute;
inset: -150px;
border: 150px solid red;
background: lightblue;
border-radius: calc(150px + var(--r));
}
Here’s what we get:
Note how the inner radius matches the main element’s border-radius. I have defined a big border (150px) and a border-radius equal to the big border plus the main element’s radius. On the outside, I have a radius equal to 150px + R. On the inside, I have 150px + R - 150px = R.
We must hide the inner (blue) part and make sure the border (red) part is still visible. To do that, I’ve defined two mask layers — one that covers only the content-box area and another that covers the border-box area (the default value). Then I excluded one from the other to reveal the border.
mask:
linear-gradient(#000 0 0) content-box,
linear-gradient(#000 0 0);
mask-composite: exclude;
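Depending on the browsers you need to support, you may also need the prefixed WebKit syntax, where xor plays the role of exclude:
-webkit-mask:
  linear-gradient(#000 0 0) content-box,
  linear-gradient(#000 0 0);
-webkit-mask-composite: xor; /* prefixed equivalent of exclude */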
I used the same technique to create a border that supports gradients and border-radius. Ana Tudor also has a good article about mask compositing that I invite you to read.
Are there any drawbacks to this method?
Yes, this is definitely not perfect. The first issue you may face is related to using a border on the main element. This may create a small misalignment in the radii if you don’t account for it. We have this issue in our example, but perhaps you can hardly notice it.
The fix is relatively easy: add the border’s width to the <sh> element’s inset.
.box {
--r: 50px;
border-radius: var(--r);
border: 2px solid;
}
.box sh {
position: absolute;
inset: -152px; /* 150px + 2px */
border: 150px solid #0000;
border-radius: calc(150px + var(--r));
}
Another drawback is the big value we’re using for the border (150px in the example). This value should be big enough to contain the shadow but not too big, to avoid overflow and scrollbar issues. Luckily, the online generator will calculate the optimal value considering all the parameters.
The last drawback I am aware of is when you’re working with a complex border-radius. For example, if you want a different radius applied to each corner, you must define a variable for each side. It’s not really a drawback, I suppose, but it can make your code a bit tougher to maintain.
.box {
--r-top: 10px;
--r-right: 40px;
--r-bottom: 30px;
--r-left: 20px;
border-radius: var(--r-top) var(--r-right) var(--r-bottom) var(--r-left);
}
.box sh {
border-radius: calc(150px + var(--r-top)) calc(150px + var(--r-right)) calc(150px + var(--r-bottom)) calc(150px + var(--r-left));
}
.box sh::before {
border-radius: var(--r-top) var(--r-right) var(--r-bottom) var(--r-left);
}
The online generator only considers a uniform radius for the sake of simplicity, but you now know how to modify the code if you want to consider a complex radius configuration.
Wrapping up
We’ve reached the end! The magic behind gradient shadows is no longer a mystery. I tried to cover all the possibilities and any possible issues you might face. If I missed something or you discover any issue, please feel free to report it in the comment section, and I’ll check it out.
Again, a lot of this is likely overkill considering that the de facto solution will cover most of your use cases. Nevertheless, it’s good to know the “why” and “how” behind the trick, and how to overcome its limitations. Plus, we got good exercise playing with CSS clipping and masking.
And, of course, you have the online generator you can reach for anytime you want to avoid the hassle.
Healthcare, Selling Lemons, and the Price of Developer Experience
Every now and then, a blog post is published and it spurs a reaction or response in others that are, in turn, published as blog posts, and a theme starts to emerge. That’s what happened this past week and the theme developed around the cost of JavaScript frameworks — a cost that, in this case, reveals just how darn important it is to use JavaScript responsibly.
Eric Bailey: Modern Health, frameworks, performance, and harm
This is where the story begins. Eric goes to a health service provider website to book an appointment and gets… a blank screen.
In addition to a terrifying amount of telemetry, Modern Health’s customer-facing experience is delivered using React and Webpack.
If you are familiar with how the web is built, what happened is pretty obvious: A website that over-relies on JavaScript to power its experience had its logic collide with one or more other errant pieces of logic that it summons. This created a deadlock.
If you do not make digital experiences for a living, what happened is not obvious at all. All you see is a tiny fake loading spinner that never stops.
D’oh. This might be mere nuisance — or even laughable — in some situations, but not when someone’s health is on the line:
A person seeking help in a time of crisis does not care about TypeScript, tree shaking, hot module replacement, A/B tests, burndown charts, NPS, OKRs, KPIs, or other startup jargon. Developer experience does not count for shit if the person using the thing they built can’t actually get what they need.
This is the big smack of reality. What happens when our tooling and reporting — the very things that are supposed to make our work more effective — get in the way of the user experience? These are tools that provide insights that can help us anticipate a user’s needs, especially in a time of need.
I realize that pointing the finger at JavaScript frameworks is already divisive. But this goes beyond whether you use React or the framework du jour. It’s about business priorities and developer experience conflicting with user experiences.
Alex Russell: The Market for Lemons
Partisans for slow, complex frameworks have successfully marketed lemons as the hot new thing, despite the pervasive failures in their wake, crowding out higher-quality options in the process.
These technologies were initially pitched on the back of “better user experiences”, but have utterly failed to deliver on that promise outside of the high-management-maturity organisations in which they were born. Transplanted into the wider web, these new stacks have proven to be expensive duds.
There’s the rub. Alex ain’t mincing words, but notice that the onus is more on the way frameworks have been marketed to developers than on developers themselves. The sales pitch?
Once the lemon sellers embed the data-light idea that improved “Developer Experience” (“DX”) leads to better user outcomes, improving “DX” became an end unto itself, and many who knew better felt forced to play along. The long lead times in falsifying trickle-down UX was a feature, not a bug; they don’t need you to succeed, only to keep buying.
As marketing goes, the “DX” bait-and-switch is brilliant, but the tech isn’t delivering for anyone but developers.
Tough to stomach, right? No one wants to be duped, and it’s tough to admit a sunken cost when there is one. It gets downright personal if you’ve invested time in a specific piece of tech and effort integrating it into your stack. Development workflows are hard and settling into one is sorta like settling into a house you plan on living in a little while. But you’d want to know if your house was built on what Alex calls a “sandy foundation”.
I’d just like to pause here a moment to say I have no skin in this debate. As a web generalist, I tend to adopt new tools early for familiarity then drop them fast, relegating them to my toolshed until I find a good use for them. In other words, my knowledge is wide but not very deep in one area or thing. HTML, CSS, and JavaScript is my go-to cocktail, but I do care a great deal about user experience and know when to reach for a tool to solve a particular thing.
And let’s acknowledge that not everyone has a say in the matter. Many of us work on managed teams that are prescribed the tools we use. Alex says as much, which I think is important to call out because it’s clear this isn’t meant to be personal. It’s a statement on our priorities and making sure they align with user expectations.
Let’s allow Chris to steer us back to the story…
Chris Coyier: End-To-End Tests with Content Blockers?
So, maybe your app is built on React and it doesn’t matter why it’s that way. There’s still work to do to ensure the app is reliable and accessible.
Just blocking a file shouldn’t totally wreck a website, but it often does! In JavaScript, that may be because the developers have written first-party JavaScript (which I’ll generally allow) that depends on third-party JavaScript (which I’ll generally block).
[…]
If I block resources from tracking-website.com, now my first-party JavaScript is going to throw an error. JavaScript isn’t chill. If an error is thrown, it doesn’t execute more JavaScript further down in the file. If further down in that file is transitionToOnboarding(); — that ain’t gonna work.
Maybe it’s worth revisiting your workflow and tweaking it to identify more points of failure.
So here’s an idea: Run your end-to-end tests in browsers that have popular content blockers with default configs installed.
Doing so may uncover problems like this and keep your customers — indeed, people in need — from being stopped in their tracks.
Good idea! Hey, anything that helps paint a more realistic picture of how the app is used. That sort of clarity could happen a lot earlier in the process, perhaps before settling on development decisions. Know your users. Why are they using the app? How do they browse the web? Where are they physically located? What problems could get in their way? Chris has a great talk on that, too.
AI’s Impact on the Web Is Growing
Despite the massive strides tech has taken in the last few years, we rarely see a week as tumultuous as this.
When your grandkids ask you where you were when AI took over, you’ll be reminiscing about February 2023.
ChatGPT Goes ‘Pro’
First, we felt a great disturbance in AI, as if millions of chatbots cried out in terror and were suddenly silenced. ChatGPT has reached capacity.
ChatGPT is one of the more accomplished AI tools available, and until now, it’s been free to use, which has prompted an AI gold rush — ProductHunt transformed in weeks into a list of ‘trustworthy’ chatbots. OpenAI, the company behind ChatGPT, is a commercial company and has opted to release a premium account for ChatGPT that provides priority access for $20/month.
In addition to derailing the plans of thousands of indie developers, there are ethical questions about an AI that is trained on other people’s content, questions that were less pressing when access was free.
Bing Chat Beats Google
Google has dominated search for a long time. Its main strength has been its jealously guarded algorithm.
When Microsoft — no stranger itself to monopolies — launched Bing, there was speculation that Google would lose its dominance. But the move away from Google never materialized.
So Bing’s product team returned to the drawing board to find a way to combat Google’s algorithm. The solution they came up with was Bing Chat, a chatbot that, in addition to returning search results, would also answer the query in a simple statement cribbed from the most credible results.
Bing Chat is powered by ChatGPT — Microsoft is presumably paying $20/month for priority access.
The move from search engines directing users to results hosted on sites to search engines taking, rewriting, and presenting answers as their own will make the fuss over AMP pale into insignificance.
Google Bard Gets It Wrong
Google was so spooked by Bing Chat that it rushed out a preview of Bard, its own AI-powered service.
Google is generally regarded as one of the leading lights in AI research, so being caught napping when it came to AI search integration must have stung someone into pushing the launch button too soon.
In a preview video intended to take the wind out of Microsoft’s sails, Bard was asked, “What new discoveries from the James Webb space telescope can I tell my nine-year-old about?” The response from Bard stated the JWST took the very first pictures of a planet outside of Earth’s solar system. However, those very first photos were actually taken in 2004, 17 years before the JWST was launched.
And just like that, $100 billion was wiped off (Google parent company) Alphabet’s share price.
As Google spokespeople were quick to point out, Bard is still being tested and will be much more powerful when using the full version of LaMDA. But the error highlights one of the biggest problems with AI content: It is not only highly inaccurate but also extremely convincing, making errors difficult to spot.
What’s Next?
The thrust and parry between Google Bard and Microsoft Bing leaves us on the brink of another remarkable technology race. Bing won the first round, but somewhere in The Googleplex, an audit is taking place with the express purpose of not losing any more ground. And this is all before the rumored Apple iSearch is installed by default on millions of iPhones.
There are so many ethical, technical, and cultural questions surrounding AI that it’s impossible to know where this is heading.
One thing is certain: something changed this week. We’ve seen the first exchanges in a competition that will transform the web over the next decade.
The truth about CSS selector performance
Geez, leave it to Patrick Brosset to talk CSS performance in the most approachable and practical way possible. Not that CSS is always what’s gunking up the speed, or even the lowest hanging fruit when it comes to improving performance.
But if you’re looking for gains on the CSS side of things, Patrick has a nice way of sniffing out your most expensive selectors using Edge DevTools:
- Crack open DevTools.
- Head to the Performance Tab.
- Make sure you have the “Enable advanced rendering instrumentation” option enabled. This tripped me up in the process.
- Record a page load.
- Open up the “Bottom-Up” tab in the report.
- Check out the size of your recalculated styles.
From here, click on one of the Recalculated Style events in the Main waterfall view and you’ll get a new “Selector Stats” tab. Look at all that gooey goodness!
Now you see all of the selectors that were processed and they can be sorted by how long they took, how many times they matched, the number of matching attempts, and something called “fast reject count” which I learned is the number of elements that were easy and quick to eliminate from matching.
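To make that concrete, here’s the kind of contrast those stats can reveal (a hypothetical illustration, not from Patrick’s post):
/* Expensive: the rightmost part matches nearly every element,
   so the browser attempts a match for all of them */
.sidebar div * { color: inherit; }
/* Cheap: a single class is quick to match, or to fast-reject */
.sidebar-link { color: inherit; }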
A lot of insights here if CSS is really the bottleneck that needs investigating. But read Patrick’s full post over on the Microsoft Edge Blog because he goes much deeper into the why’s and how’s, and walks through an entire case study.
The Double Emphasis Thing
I used to have this boss who loved, loved, loved, loved to emphasize words. This was way back before we used WYSIWYG editors and I’d have to handcode that crap.
<p>
I used to have this boss who <em>loved</em>, <strong>loved</strong>,
<strong><em>loved</em></strong>, <strong><em><u>loved</u></em></strong>
to emphasize words.
</p>
(Let’s not go into the colors he used for even MOAR emphasis.)
Writing all that markup never felt great. The effort it took, sure, whatever. But is it even a good idea to overload content with double — or more! — emphases?
Different tags convey different emphasis
For starters, the <strong> and <em> tags are designed for different uses. We got them back in HTML5, where:
- <strong>: Is used to convey “strong importance, seriousness, or urgency for its contents”.
- <em>: Represents “stress emphasis”.
So, <strong> gives the content more weight in the sense that it suggests the content in it is important or urgent. Think of a warning:
Warning: The following content has been flagged for being awesome.
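In markup, that might look like:
<p><strong>Warning:</strong> The following content has been flagged for being awesome.</p>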
It might be tempting to reach for <em> to do the same thing. Italicized text can be attention-grabbing, after all. But it’s really meant as a hint to use more stress when reading the content in it. For example, here are two versions of the same sentence with the emphasis in different locations:
<p>I ate the <em>entire</em> plate of burritos.</p>
<p>I ate the entire <em>plate</em> of burritos.</p>
Both examples stress emphasis, but on different words. And they would sound different if you were to read them out loud. That makes <em> a great way to express tone in your writing. It changes the meaning of the sentence in a way that <strong> does not.
Visual emphasis vs. semantic emphasis
Those are two things you gotta weigh when emphasizing content. Like, there are plenty of instances where you may need to italicize content without affecting the meaning of the sentence. But those can be handled with other tags that render italics:
- <i>: This is the classic one! Before HTML5, this was used to stress emphasis with italics all over the place. Now, it’s purely used to italicize content visually without changing the semantic meaning.
- <cite>: Indicating the source of a fact or figure. (“Source: CSS-Tricks”)
- <address>: Marking up contact information, which browsers render in italics by default.
It’s going to be the same thing with <strong>. Rather than using it for styling text you want to look heavier, it’s a better idea to use the classic <b> tag for boldfacing to avoid giving extra significance to content that doesn’t need it. And remember, some elements, like headings, are already rendered in bold, thanks to the browser’s default styles. There’s no need to add even more strong emphasis.
Using italics in emphasized content (and vice versa)
There are legitimate cases where you may need to italicize part of a line that’s already emphasized. Or maybe add emphasis to a bit of text that’s already italicized.
A blockquote might be a good example. I’ve seen plenty of times where they are italicized for style, even though default browser styles don’t do it:
blockquote {
font-style: italic;
}
What if we need to mention a movie title in that blockquote? That should be italicized. There’s no stress emphasis needed, so an <i> tag will do. But it’s still weird to italicize something when it’s already rendered that way:
<blockquote>
This movie’s opening weekend performance offers some insight in
to its box office momentum as it fights to justify its enormous
budget. In its first weekend, <i>Avatar: The Way of Water</i> made
$134 million in North America alone and $435 million globally.
</blockquote>
In a situation where we’re italicizing something within italicized content like this, we’re supposed to remove the italics from the nested element — <i> in this case.
blockquote i {
font-style: normal;
}
Container style queries will be super useful to nab all these instances if we get them:
blockquote {
container-name: quote;
font-style: italic;
}
@container quote (font-style: italic) {
em, i, cite, address {
font-style: normal;
}
}
This little snippet evaluates the blockquote to see if its font-style is set to italic. If it is, then it’ll make sure the <em>, <i>, <cite>, and <address> elements inside it are displayed without italics.
But back to emphasis within emphasis
I wouldn’t nest <strong> inside <em> like this:
<p>I ate the <em><strong>entire</strong></em> plate of burritos.</p>
…or nest <em> inside <strong> instead:
<p>I ate the <strong><em>entire</em></strong> plate of burritos.</p>
The rendering is fine! And it doesn’t matter what order they’re in… at least in modern browsers. Jennifer Kyrnin mentions that some browsers only render the tag nearest to the text, but I didn’t bump into that anywhere in my limited tests. But something to watch for!
The reason I wouldn’t nest one form of emphasis in another is because it simply isn’t needed. There is no grammar rule that calls for it. Like exclamation points, one form of emphasis is enough, and you ought to use the one that matches what you’re after whether it’s visual, weight, or announced emphasis.
And even though some screen readers are capable of announcing emphasized content, they won’t read the markup with any additional importance or emphasis. So, no additional accessibility perks either, as far as I can tell.
But I really want all the emphasis!
If you’re in the position where your boss is like mine and wants ALL the emphasis, I’d reach for the right HTML tag for the type of emphasis, then apply the rest of the styling with a mix of non-semantic tags and CSS to account for anything browser styles won’t handle.
<style>
/* If `em` contains `b` or `u` tags */
em:has(b, u) {
color: #f8a100;
}
</style>
<p>
I used to have this boss who <em>loved</em>, <strong>loved</strong>,
<strong><em>loved</em></strong>, <strong><em><u>loved</u></em></strong>
to emphasize words.
</p>
I might even do it with the <strong> tag too as a defensive measure:
/* If `em` contains `b` or `u` tags */
em:has(b, u),
/* If `strong` contains `em` or `u` tags */
strong:has(i, u) {
color: #f8a100;
}
As long as we’re playing defense, we can identify errors where emphases are nested within emphases by highlighting them in red or something:
/* Highlight semantic emphases within semantic emphases */
em:has(strong),
strong:has(em) {
background: hsl(0deg 50% 50% / .25);
border: 1px dashed hsl(0deg 50% 50% / .25);
}
Then I’d probably use that snippet from the last section that removes the default italic styling from an element when it is nested in another italicized element.
Anything else?
Mayyyyybe:
- Make sure your webfont includes bold and italic variations — otherwise, you’ll be relying on the browser to try to bold or italicize text for you. But limit your font files to just the weights and styles you need for better performance. (See the sketch after this list.)
- Consider re-writing the content if the formatting seems off. There are natural ways to phrase content so that certain bits gain emphasis.
- Check your analytics for the browsers your visitors use and test accordingly. Even though I didn’t run into a browser that balked at <em> in <strong> or the other way around, there may be a browser or several that will.
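As for that first point, here’s a sketch of declaring the variations explicitly (the font name and file names are placeholders):
/* Each weight/style pair maps to its own file so the browser
   never has to synthesize bold or italics */
@font-face {
  font-family: "MyFont";
  src: url("myfont-regular.woff2") format("woff2");
  font-weight: 400;
  font-style: normal;
}
@font-face {
  font-family: "MyFont";
  src: url("myfont-bold-italic.woff2") format("woff2");
  font-weight: 700;
  font-style: italic;
}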
Exciting New Tools for Designers, February 2023
No matter what you’re working on, you can guarantee that there’s a cool app, resource, or service that will help you do it faster, better, and cheaper. And so every month we post this roundup of the most exciting new tools we’ve found in the previous four weeks.
This month, the AI revolution continues with tons of tools backed by AI. But that’s not all; you’ll also find business apps, marketing tools, and services to help you grow a startup. Enjoy!
The Org
The Org is a new way of attracting top talent to your startup. Use it to showcase your existing team, show candidates where they fit, and highlight the roles you have available.
Fibery
Fibery is a streamlined, all-in-one solution for guiding your startup from idea to MVP. Manage your project using a roadmap, CRM, research, and feedback tools.
Ordinary Prompts for Ordinary People
Ordinary Prompts for Ordinary People is a collection of ChatGPT prompts to help maximize the use you get out of AI. Explore prompts, upvote, and collect the prompts that work best for you.
Sociality
Sociality is a marketing platform for managing your social media channels. It allows you to schedule content, analyze performance, and monitor your competitors.
Swimm
Swimm is a streamlined method of documenting internal code and making it searchable inside an IDE. You can share notes across development teams and keep everything in sync.
WebWave
WebWave is a no-code drag-and-drop website builder that doesn’t rely on grids to lay out elements. Designs are fully responsive and can be animated.
BlogHunch
BlogHunch is a brand-new blogging platform. It’s SEO-optimized and based on a no-code design tool, so you can express yourself exactly as you’d like to without worrying about server stuff.
Rayst
Rayst is a browser extension for Chrome that lets you see the facts behind websites. You can see their technology stack, traffic, funding, and more.
Morise
Morise is an AI-powered tool to help you conquer YouTube. It uses data drawn from the most successful YouTube channels so you, too, can go viral.
GhostWrite
GhostWrite is a simple tool that uses AI to compose emails for you. So whether you’re quitting a job or applying for a promotion, it’s an excellent way to hit the right tone.
Week
Week is a task management tool with many features to help you control your work life. Create tasks, tag them for different projects, view your day, and see how you perform over the week.
Ivory
Ivory is a brand new social media client for Mastodon from the makers of the universally-loved Tweetbot. It’s currently early-access and iOS only.
Mirror
Mirror is a flexible platform for publishing content using web3 technologies. For example, publish a blog, creative writing, and community updates, and tie it together with Ethereum.
Fonty.io
Fonty.io is a handy little tool for analyzing the fonts being used on a website. Simply enter the URL, and the site will spit out the fonts, their weights, and their styles.
Slope
Slope is a payment solution for businesses that provides flexible payment solutions for B2B transactions. In addition, there’s built-in fraud prevention, and it’s easy to integrate into existing payment flows.
Humanic
Humanic is a service that allows you to uncover the sign-ups that never convert, so you can focus on the ones that do.
ToolJet
ToolJet is an open-source platform for developing internal tools. The low-code app constructor has built-in UI components, a drag-and-drop builder, and linked multi-page apps.
Pais
Pais is a new family of fonts from Latinotype that contains 36 styles and weights ranging from Thin to Black.
Kodezi
Kodezi is a helpful tool for coders that debugs, corrects, and guides your programming, so you produce higher-quality code.
Astro 2.0
Astro is a web framework that has just been released in version 2. It is designed to work with the tools you already use, like React and Vue.
A Fancy Hover Effect For Your Avatar
Do you know that kind of effect where someone’s head is poking through a circle or hole? The famous Porky Pig animation where he waves goodbye while popping out of a series of red rings is the perfect example, and Kilian Valkhof actually re-created that here on CSS-Tricks a while back.
I have a similar idea but tackled a different way and with a sprinkle of animation. I think it’s pretty practical and makes for a neat hover effect you can use on something like your own avatar.
See that? We’re going to make a scaling animation where the avatar seems to pop right out of the circle it’s in. Cool, right? Don’t look at the code and let’s build this animation together step-by-step.
The HTML: Just one element
If you haven’t checked the code of the demo and you are wondering how many divs this’ll take, then stop right there, because our markup is nothing but a single image element:
<img src="" alt="">
Yes, a single element! The challenging part of this exercise is using the smallest amount of code possible. If you have been following me for a while, you should be used to this. I try hard to find CSS solutions that can be achieved with the smallest, most maintainable code possible.
I wrote a series of articles here on CSS-Tricks where I explore different hover effects using the same HTML markup containing a single element. I go into detail on gradients, masking, clipping, outlines, and even layout techniques. I highly recommend checking those out because I will re-use many of the tricks in this post.
An image file that’s square with a transparent background will work best for what we’re doing. Here’s the one I’m using if you want to start with that.
I’m hoping to see lots of examples of this as possible using real images — so please share your final result in the comments when you’re done so we can build a collection!
Before jumping into CSS, let’s first dissect the effect. The image gets bigger on hover, so we’ll for sure use transform: scale() in there. There’s a circle behind the avatar, and a radial gradient should do the trick. Finally, we need a way to create a border at the bottom of the circle that creates the appearance of the avatar behind the circle.
Let’s get to work!
The scale effect
Let’s start by adding the transform:
img {
width: 280px;
aspect-ratio: 1;
cursor: pointer;
transition: .5s;
}
img:hover {
transform: scale(1.35);
}
Nothing complicated yet, right? Let’s move on.
The circle
We said that the background would be a radial gradient. That’s perfect because we can create hard stops between the colors of a radial gradient, which make it look like we’re drawing a circle with solid lines.
img {
--b: 5px; /* border width */
width: 280px;
aspect-ratio: 1;
background:
radial-gradient(
circle closest-side,
#ECD078 calc(99% - var(--b)),
#C02942 calc(100% - var(--b)) 99%,
#0000
);
cursor: pointer;
transition: .5s;
}
img:hover {
transform: scale(1.35);
}
Note the CSS variable, --b, I’m using there. It represents the thickness of the “border,” which is really just being used to define the hard color stops for the red part of the radial gradient.
The next step is to play with the gradient size on hover. The circle needs to keep its size as the image grows. Since we are applying a scale() transformation, we actually need to decrease the size of the circle because it otherwise scales up with the avatar. So, while the image scales up, we need the gradient to scale down.
Let’s start by defining a CSS variable, --f, that defines the “scale factor,” and use it to set the size of the circle. I’m using 1 as the default value, as in that’s the initial scale for the image and the circle that we transform from.
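Here’s a sketch of that idea before we fold it into the full background shorthand later on:
img {
  --f: 1; /* initial scale factor */
  /* the gradient shrinks by the same factor the image grows */
  background: radial-gradient(/* same color stops as before */)
    50% / calc(100% / var(--f)) 100% no-repeat;
  transform: scale(var(--f));
}
img:hover {
  --f: 1.35;
}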
Here is a demo to illustrate the trick. Hover to see what is happening behind the scenes:
I added a third color to the radial-gradient
to better identify the area of the gradient on hover:
radial-gradient(
circle closest-side,
#ECD078 calc(99% - var(--b)),
#C02942 calc(100% - var(--b)) 99%,
lightblue
);
Now we have to position our background at the center of the circle and make sure it takes up the full height. I like to declare everything directly on the background shorthand property, so we can add our background positioning and make sure it doesn’t repeat by tacking on those values right after the radial-gradient():
background: radial-gradient() 50% / calc(100% / var(--f)) 100% no-repeat;
The background is placed at the center (50%), has a width equal to calc(100% / var(--f)), and has a height equal to 100%.
Nothing scales when --f is equal to 1 — again, our initial scale. Meanwhile, the gradient takes up the full width of the container. When we increase --f, the element’s size grows — thanks to the scale() transform — and the gradient’s size decreases.
Here’s what we get when we apply all of this to our demo:
We’re getting closer! We have the overflow effect at the top, but we still need to hide the bottom part of the image, so it looks like it is popping out of the circle rather than sitting in front of it. That’s the tricky part of this whole thing and is what we’re going to do next.
The bottom border
I first tried tackling this with the border-bottom property, but I was unable to find a way to match the size of the border to the size of the circle. Here’s the best I could get, and you can immediately see it’s wrong:
The actual solution is to use the outline property. Yes, outline, not border. In a previous article, I show how outline is powerful and allows us to create cool hover effects. Combined with outline-offset, we have exactly what we need for our effect.
The idea is to set an outline on the image and adjust its offset to create the bottom border. The offset will depend on the scaling factor, the same way the gradient size did.
Now we have our bottom “border” (actually an outline) combined with the “border” created by the gradient to create a full circle. We still need to hide portions of the outline (from the top and the sides), which we’ll get to in a moment.
Here’s our code so far, including a couple more CSS variables you can use to configure the image size (--s) and the “border” color (--c):
img {
--s: 280px; /* image size */
--b: 5px; /* border thickness */
--c: #C02942; /* border color */
--f: 1; /* initial scale */
width: var(--s);
aspect-ratio: 1;
cursor: pointer;
border-radius: 0 0 999px 999px;
outline: var(--b) solid var(--c);
outline-offset: calc((1 / var(--f) - 1) * var(--s) / 2 - var(--b));
background:
radial-gradient(
circle closest-side,
#ECD078 calc(99% - var(--b)),
var(--c) calc(100% - var(--b)) 99%,
#0000
) 50% / calc(100% / var(--f)) 100% no-repeat;
transform: scale(var(--f));
transition: .5s;
}
img:hover {
--f: 1.35; /* hover scale */
}
Since we need a circular bottom border, we added a border-radius on the bottom side, allowing the outline to match the curvature of the gradient.
The calculation used on outline-offset is a lot more straightforward than it looks. By default, outline is drawn outside of the element’s box. And in our case, we need it to overlap the element. More precisely, we need it to follow the circle created by the gradient.
When we scale the element, we see the space between the circle and the edge. Let’s not forget that the idea is to keep the circle at the same size after the scale transformation runs, which leaves us with the space we will use to define the outline’s offset as illustrated in the above figure.
Let’s not forget that the element itself is scaled, so our result is also scaled… which means we need to divide the result by f to get the real offset value:
Offset = ((f - 1) * S/2) / f = (1 - 1/f) * S/2
We add a negative sign since we need the outline to go from the outside to the inside:
Offset = (1/f - 1) * S/2
Here’s a quick demo that shows how the outline follows the gradient:
You may already see it, but we still need the bottom outline to overlap the circle rather than letting it bleed through it. We can do that by removing the border’s size from the offset:
outline-offset: calc((1 / var(--f) - 1) * var(--s) / 2 - var(--b));
Now we need to find how to remove the top part from the outline. In other words, we only want the bottom part of the image’s outline.
First, let’s add space at the top with padding to help avoid the overlap at the top:
img {
--s: 280px; /* image size */
--b: 5px; /* border thickness */
--c: #C02942; /* border color */
--f: 1; /* initial scale */
width: var(--s);
aspect-ratio: 1;
padding-block-start: calc(var(--s)/5);
/* etc. */
}
img:hover {
--f: 1.35; /* hover scale */
}
There is no particular logic to that top padding. The idea is to ensure the outline doesn’t touch the avatar’s head. I used the element’s size to define that space to always have the same proportion.
Note that I have added the content-box value to the background:
background:
radial-gradient(
circle closest-side,
#ECD078 calc(99% - var(--b)),
var(--c) calc(100% - var(--b)) 99%,
#0000
) 50%/calc(100%/var(--f)) 100% no-repeat content-box;
We need this because we added padding and we only want the background set to the content box, so we must explicitly tell the background to stop there.
Adding CSS mask to the mix
We reached the last part! All we need to do is hide some pieces, and we are done. For this, we will rely on the mask property and, of course, gradients.
Here is a figure to illustrate what we need to hide or, to be more accurate, what we need to show.
The left image is what we currently have, and the right is what we want. The green part illustrates the mask we must apply to the original image to get the final result.
We can identify two parts of our mask:
- A circular part at the bottom that has the same dimension and curvature as the radial gradient we used to create the circle behind the avatar
- A rectangle at the top that covers the area inside the outline. Notice how the outline is outside the green area at the top — that’s the most important part, as it allows the outline to be cut so that only the bottom part is visible.
Here’s our final CSS:
img {
--s: 280px; /* image size */
--b: 5px; /* border thickness */
--c: #C02942; /* border color */
--f: 1; /* initial scale */
--_g: 50% / calc(100% / var(--f)) 100% no-repeat content-box;
--_o: calc((1 / var(--f) - 1) * var(--s) / 2 - var(--b));
width: var(--s);
aspect-ratio: 1;
padding-top: calc(var(--s)/5);
cursor: pointer;
border-radius: 0 0 999px 999px;
outline: var(--b) solid var(--c);
outline-offset: var(--_o);
background:
radial-gradient(
circle closest-side,
#ECD078 calc(99% - var(--b)),
var(--c) calc(100% - var(--b)) 99%,
#0000) var(--_g);
mask:
linear-gradient(#000 0 0) no-repeat
50% calc(-1 * var(--_o)) / calc(100% / var(--f) - 2 * var(--b)) 50%,
radial-gradient(
circle closest-side,
#000 99%,
#0000) var(--_g);
transform: scale(var(--f));
transition: .5s;
}
img:hover {
--f: 1.35; /* hover scale */
}
Let’s break down that mask property. For starters, notice that a similar radial-gradient() from the background property is in there. I created a new variable, --_g, for the common parts to make things less cluttered.
--_g: 50% / calc(100% / var(--f)) 100% no-repeat content-box;
mask:
radial-gradient(
circle closest-side,
#000 99%,
#0000) var(--_g);
Next, there’s a linear-gradient() in there as well:
--_g: 50% / calc(100% / var(--f)) 100% no-repeat content-box;
mask:
linear-gradient(#000 0 0) no-repeat
50% calc(-1 * var(--_o)) / calc(100% / var(--f) - 2 * var(--b)) 50%,
radial-gradient(
circle closest-side,
#000 99%,
#0000) var(--_g);
This creates the rectangle part of the mask. Its width is equal to the radial gradient’s width minus twice the border thickness:
calc(100% / var(--f) - 2 * var(--b))
The rectangle’s height is equal to half (50%) of the element’s size.
We also need the linear gradient placed at the horizontal center (50%) and offset from the top by the same value as the outline’s offset. I created another CSS variable, --_o, for the offset we previously defined:
--_o: calc((1 / var(--f) - 1) * var(--s) / 2 - var(--b));
One of the confusing things here is that we need a negative offset for the outline (to move it from outside to inside) but a positive offset for the gradient (to move from top to bottom). So, if you’re wondering why we multiply the offset, --_o, by -1, well, now you know!
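To make that concrete, we can plug in the demo’s values: with --s: 280px, --b: 5px, and the hover scale --f: 1.35, we get --_o = (1/1.35 - 1) * 280px/2 - 5px, which is roughly -41px. The outline takes that negative value and moves about 41px inward, while the mask’s linear gradient takes the flipped, positive 41px and moves the same distance down from the top, so the two stay perfectly in sync.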
Here is a demo to illustrate the mask’s gradient configuration:
Hover the demo above and see how everything moves together. The middle box illustrates the mask layer composed of two gradients. Imagine it as the visible part of the left image, and you get the final result on the right!
Wrapping up
Oof, we’re done! And not only did we wind up with a slick hover animation, but we did it all with a single HTML element. Just that and less than 20 lines of CSS trickery!
Sure, we relied on some little tricks and math formulas to reach such a complex effect. But we knew exactly what to do since we identified the pieces we needed up-front.
Could we have simplified the CSS if we allowed ourselves more HTML? Absolutely. But we’re here to learn new CSS tricks! This was a good exercise to explore CSS gradients, masking, the outline
property’s behavior, transformations, and a whole bunch more. If you felt lost at any point, then definitely check out my series that uses the same general concepts. It sometimes helps to see more examples and use cases to drive a point home.
I will leave you with one last demo that uses photos of popular CSS developers. Don’t forget to show me a demo with your own image so I can add it to the collection!
A Fancy Hover Effect For Your Avatar originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
Caching Data in SvelteKit
My previous post was a broad overview of SvelteKit where we saw what a great tool it is for web development. This post will fork off what we did there and dive into every developer’s favorite topic: caching. So, be sure to give my last post a read if you haven’t already. The code for this post is available on GitHub, as well as a live demo.
This post is all about data handling. We’ll add some rudimentary search functionality that will modify the page’s query string (using built-in SvelteKit features), and re-trigger the page’s loader. But, rather than just re-query our (imaginary) database, we’ll add some caching so re-searching prior searches (or using the back button) will show previously retrieved data, quickly, from cache. We’ll look at how to control the length of time the cached data stays valid and, more importantly, how to manually invalidate all cached values. As icing on the cake, we’ll look at how we can manually update the data on the current screen, client-side, after a mutation, while still purging the cache.
This will be a longer, more difficult post than most of what I usually write since we’re covering harder topics. This post will essentially show you how to implement common features of popular data utilities like react-query, but instead of pulling in an external library, we’ll only be using the web platform and SvelteKit features.
Unfortunately, the web platform’s features are a bit lower level, so we’ll be doing a bit more work than you might be used to. The upside is we won’t need any external libraries, which will help keep bundle sizes nice and small. Please don’t use the approaches I’m going to show you unless you have a good reason to. Caching is easy to get wrong, and as you’ll see, there’s a bit of complexity that’ll result in your application code. Hopefully your data store is fast, and your UI is fine allowing SvelteKit to just always request the data it needs for any given page. If it is, leave it alone. Enjoy the simplicity. But this post will show you some tricks for when that stops being the case.
Speaking of react-query, it was just released for Svelte! So if you find yourself leaning on manual caching techniques a lot, be sure to check that project out, and see if it might help.
Setting up
Before we start, let’s make a few small changes to the code we had before. This will give us an excuse to see some other SvelteKit features and, more importantly, set us up for success.
First, let’s move our data loading from our loader in +page.server.js to an API route. We’ll create a +server.js file in routes/api/todos, and then add a GET function. This means we’ll now be able to fetch (using the default GET verb) to the /api/todos path. We’ll add the same data loading code as before.
import { json } from "@sveltejs/kit";
import { getTodos } from "$lib/data/todoData";
export async function GET({ url, setHeaders, request }) {
const search = url.searchParams.get("search") || "";
const todos = await getTodos(search);
return json(todos);
}
Next, let’s take the page loader we had, and simply rename the file from +page.server.js to +page.js (or .ts if you’ve scaffolded your project to use TypeScript). This changes our loader to be a “universal” loader rather than a server loader. The SvelteKit docs explain the difference, but a universal loader runs on both the server and also the client. One advantage for us is that the fetch call into our new endpoint will run right from our browser (after the initial load), using the browser’s native fetch function. We’ll add standard HTTP caching in a bit, but for now, all we’ll do is call the endpoint.
export async function load({ fetch, url, setHeaders }) {
const search = url.searchParams.get("search") || "";
const resp = await fetch(`/api/todos?search=${encodeURIComponent(search)}`);
const todos = await resp.json();
return {
todos,
};
}
Now let’s add a simple form to our /list page:
<div class="search-form">
<form action="/list">
<label>Search</label>
<input autofocus name="search" />
</form>
</div>
Yep, forms can target directly to our normal page loaders. Now we can add a search term in the search box, hit Enter, and a “search” term will be appended to the URL’s query string, which will re-run our loader and search our to-do items.
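For example, typing “post” into the box and pressing Enter navigates to /list?search=post, and our loader picks that value up with url.searchParams.get("search").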
Let’s also increase the delay in our todoData.js file in /lib/data. This will make it easy to see when data are and are not cached as we work through this post.
export const wait = async amount => new Promise(res => setTimeout(res, amount ?? 500));
Remember, the full code for this post is all on GitHub, if you need to reference it.
Basic caching
Let’s get started by adding some caching to our /api/todos endpoint. We’ll go back to our +server.js file and add our first cache-control header.
setHeaders({
"cache-control": "max-age=60",
});
…which will leave the whole function looking like this:
export async function GET({ url, setHeaders, request }) {
const search = url.searchParams.get("search") || "";
setHeaders({
"cache-control": "max-age=60",
});
const todos = await getTodos(search);
return json(todos);
}
We’ll look at manual invalidation shortly, but all this function says is to cache these API calls for 60 seconds. Set this to whatever you want, and depending on your use case, stale-while-revalidate might also be worth looking into.
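As a sketch, if you wanted expired responses to keep being served from cache while the browser fetches a fresh copy in the background, the header might look something like this (the numbers are purely illustrative):
setHeaders({
  // fresh for 60 seconds, then served stale for up to five more minutes
  // while a background revalidation happens
  "cache-control": "max-age=60, stale-while-revalidate=300",
});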
And just like that, our queries are caching.
Note: make sure you uncheck the checkbox that disables caching in your browser’s DevTools.
Remember, if your initial navigation on the app is the list page, those search results will be cached internally to SvelteKit, so don’t expect to see anything in DevTools when returning to that search.
What is cached, and where
Our very first, server-rendered load of our app (assuming we start at the /list page) will be fetched on the server. SvelteKit will serialize and send this data down to our client. What’s more, it will observe the Cache-Control header on the response, and will know to use this cached data for that endpoint call within the cache window (which we set to 60 seconds in our example).
After that initial load, when you start searching on the page, you should see network requests from your browser to the /api/todos endpoint. As you search for things you’ve already searched for (within the last 60 seconds), the responses should load immediately since they’re cached.
What’s especially cool with this approach is that, since this is caching via the browser’s native caching, these calls could (depending on how you manage the cache busting we’ll be looking at) continue to cache even if you reload the page (unlike the initial server-side load, which always calls the endpoint fresh, even if it did it within the last 60 seconds).
Obviously data can change anytime, so we need a way to purge this cache manually, which we’ll look at next.
Cache invalidation
Right now, data will be cached for 60 seconds. No matter what, after a minute, fresh data will be pulled from our datastore. You might want a shorter or longer time period, but what happens if you mutate some data and want to clear your cache immediately so your next query will be up to date? We’ll solve this by adding a cache-busting value to the URL we send to our new /todos endpoint.
Let’s store this cache-busting value in a cookie. That value can be set on the server but still read on the client. Let’s look at some sample code.
We can create a +layout.server.js file at the very root of our routes folder. This will run on application startup, and is a perfect place to set an initial cookie value.
export function load({ cookies, isDataRequest }) {
const initialRequest = !isDataRequest;
const cacheValue = initialRequest ? +new Date() : cookies.get("todos-cache");
if (initialRequest) {
cookies.set("todos-cache", cacheValue, { path: "/", httpOnly: false });
}
return {
todosCacheBust: cacheValue,
};
}
You may have noticed the isDataRequest value. Remember, layouts will re-run anytime client code calls invalidate(), or anytime we run a server action (assuming we don’t turn off default behavior). isDataRequest indicates those re-runs, and so we only set the cookie if that’s false; otherwise, we send along what’s already there.
The httpOnly: false flag is also significant. This allows our client code to read these cookie values in document.cookie. This would normally be a security concern, but in our case these are meaningless numbers that allow us to cache or cache bust.
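For reference, document.cookie on the client is just one semicolon-delimited string. With our cookie set, it might look something like this (the timestamp and the second cookie are made-up examples):
console.log(document.cookie);
// → "todos-cache=1675000000000; some-other-cookie=value"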
Reading cache values
Our universal loader is what calls our /todos endpoint. This runs on the server or the client, and we need to read that cache value we just set up no matter where we are. It’s relatively easy if we’re on the server: we can call await parent() to get the data from parent layouts. But on the client, we’ll need to use some gross code to parse document.cookie:
export function getCookieLookup() {
if (typeof document !== "object") {
return {};
}
return document.cookie.split("; ").reduce((lookup, v) => {
const parts = v.split("=");
lookup[parts[0]] = parts[1];
return lookup;
}, {});
}
const getCurrentCookieValue = name => {
const cookies = getCookieLookup();
return cookies[name] ?? "";
};
Fortunately, we only need it once.
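As a quick usage sketch: on the client, this returns whatever timestamp was set; on the server, where document doesn’t exist, it returns an empty string, which is exactly what lets the fallback in the next section kick in.
const cacheBust = getCurrentCookieValue("todos-cache");
// client → "1675000000000" (or whatever was set), server → ""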
Sending out the cache value
But now we need to send this value to our /todos endpoint.
import { getCurrentCookieValue } from "$lib/util/cookieUtils";
export async function load({ fetch, parent, url, setHeaders }) {
const parentData = await parent();
const cacheBust = getCurrentCookieValue("todos-cache") || parentData.todosCacheBust;
const search = url.searchParams.get("search") || "";
const resp = await fetch(`/api/todos?search=${encodeURIComponent(search)}&cache=${cacheBust}`);
const todos = await resp.json();
return {
todos,
};
}
getCurrentCookieValue('todos-cache') checks whether we’re on the client (by looking at the type of document) and returns an empty string if we’re not. When that happens, we know we’re on the server, so we fall back to the value from our layout.
Busting the cache
But how do we actually update that cache-busting value when we need to? Since it’s stored in a cookie, we can set it like this from any server action:
cookies.set("todos-cache", cacheValue, { path: "/", httpOnly: false });
The implementation
It’s all downhill from here; we’ve done the hard work. We’ve covered the various web platform primitives we need, as well as where they go. Now let’s have some fun and write application code to tie it all together.
For reasons that’ll become clear in a bit, let’s start by adding editing functionality to our /list page. We’ll add this second table row for each to-do:
import { enhance } from "$app/forms";
<tr>
<td colspan="4">
<form use:enhance method="post" action="?/editTodo">
<input name="id" value="{t.id}" type="hidden" />
<input name="title" value="{t.title}" />
<button>Save</button>
</form>
</td>
</tr>
And, of course, we’ll need to add a form action for our /list page. Actions can only go in .server pages, so we’ll add a +page.server.js in our /list folder. (Yes, a +page.server.js file can co-exist next to a +page.js file.)
import { getTodo, updateTodo, wait } from "$lib/data/todoData";
export const actions = {
async editTodo({ request, cookies }) {
const formData = await request.formData();
const id = formData.get("id");
const newTitle = formData.get("title");
await wait(250);
updateTodo(id, newTitle);
cookies.set("todos-cache", +new Date(), { path: "/", httpOnly: false });
},
};
We’re grabbing the form data, forcing a delay, updating our todo, and then, most importantly, clearing our cache bust cookie.
Let’s give this a shot. Reload your page, then edit one of the to-do items. You should see the table value update after a moment. If you look at the Network tab in DevTools, you’ll see a fetch to the /todos endpoint, which returns your new data. Simple, and it works by default.
Immediate updates
What if we want to avoid that fetch that happens after we update our to-do item, and instead, update the modified item right on the screen?
This isn’t just a matter of performance. If you search for “post” and then remove the word “post” from any of the to-do items in the list, they’ll vanish from the list after the edit since they’re no longer in that page’s search results. You could make the UX better with some tasteful animation for the exiting to-do, but let’s say we wanted to not re-run that page’s load function but still clear the cache and update the modified to-do so the user can see the edit. SvelteKit makes that possible — let’s see how!
First, let’s make one little change to our loader. Instead of returning our to-do items, let’s return a writable store containing our to-dos.
import { writable } from "svelte/store";

// inside the load function, wrap the items in a store before returning
return {
  todos: writable(todos),
};
Before, we were accessing our to-dos on the data prop, which we do not own and cannot update. But Svelte lets us return our data in its own store (assuming we’re using a universal loader, which we are). We just need to make one more tweak to our /list page.
Instead of this:
{#each todos as t}
…we need to do this, since todos is itself now a store:
{#each $todos as t}
Now our data loads as before. But since todos is a writable store, we can update it.
First, let’s provide a function to our use:enhance attribute:
<form
use:enhance={executeSave}
on:submit={runInvalidate}
method="post"
action="?/editTodo"
>
This will run before a submit. Let’s write that next:
function executeSave({ data }) {
const id = data.get("id");
const title = data.get("title");
return async () => {
todos.update(list =>
list.map(todo => {
if (todo.id == id) {
return Object.assign({}, todo, { title });
} else {
return todo;
}
})
);
};
}
This function provides a data object with our form data. We return an async function that will run after our edit is done. The docs explain all of this, but by doing this, we shut off SvelteKit’s default form handling that would have re-run our loader. This is exactly what we want! (We could easily get that default behavior back, as the docs explain.)
We now call update on our todos store. And that’s that. After editing a to-do item, our changes show up immediately and our cache is cleared (as before, since we set a new cookie value in our editTodo form action). So, if we search and then navigate back to this page, we’ll get fresh data from our loader, which will correctly exclude any to-do items we’ve updated.
The code for the immediate updates is available at GitHub.
Digging deeper
We can set cookies in any server load function (or server action), not just the root layout. So, if some data are only used underneath a single layout, or even a single page, you could set that cookie value there. Moreover, if you’re not doing the trick I just showed of manually updating on-screen data, and instead want your loader to re-run after a mutation, then you could always set a new cookie value right in that load function without any check against isDataRequest. It’ll be set initially, and then anytime you run a server action, that page layout will automatically invalidate and re-call your loader, re-setting the cache-bust string before your universal loader is called.
Writing a reload function
Let’s wrap up by building one last feature: a reload button. We’ll give users a button that clears the cache and then reloads the current query.
We’ll add a dirt simple form action:
async reloadTodos({ cookies }) {
cookies.set('todos-cache', +new Date(), { path: '/', httpOnly: false });
},
In a real project you probably wouldn’t copy/paste the same code to set the same cookie in the same way in multiple places, but for this post we’ll optimize for simplicity and readability.
Now let’s create a form to post to it:
<form method="POST" action="?/reloadTodos" use:enhance>
<button>Reload todos</button>
</form>
That works!
We could call this done and move on, but let’s improve this solution a bit. Specifically, let’s provide feedback on the page to tell the user the reload is happening. Also, by default, SvelteKit actions invalidate everything. Every layout, page, etc. in the current page’s hierarchy would reload. There might be some data that’s loaded once in the root layout that we don’t need to invalidate or re-load.
So, let’s focus things a bit, and only reload our to-dos when we call this function.
First, let’s pass a function to enhance:
<form method="POST" action="?/reloadTodos" use:enhance={reloadTodos}>
import { enhance } from "$app/forms";
import { invalidate } from "$app/navigation";
let reloading = false;
const reloadTodos = () => {
reloading = true;
return async () => {
invalidate("reload:todos").then(() => {
reloading = false;
});
};
};
We’re setting a new reloading variable to true at the start of this action. And then, in order to override the default behavior of invalidating everything, we return an async function. This function will run when our server action is finished (which just sets a new cookie).
Without this async function returned, SvelteKit would invalidate everything. Since we’re providing this function, it will invalidate nothing, so it’s up to us to tell it what to reload. We do this with the invalidate function. We call it with a value of reload:todos. This function returns a promise, which resolves when the invalidation is complete, at which point we set reloading back to false.
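The markup that consumes reloading isn’t shown here; a minimal sketch (your wording and styling will differ) could be as simple as:
{#if reloading}
  <span>Reloading to-dos…</span>
{/if}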
Lastly, we need to sync our loader up with this new reload:todos invalidation value. We do that in our loader with the depends function:
export async function load({ fetch, url, setHeaders, depends }) {
depends('reload:todos');
// rest is the same
And that’s that. depends and invalidate are incredibly useful functions. What’s cool is that invalidate doesn’t just take arbitrary values we provide like we did. We can also provide a URL, which SvelteKit will track, and invalidate any loaders that depend on that URL. To that end, if you’re wondering whether we could skip the call to depends and invalidate our /api/todos endpoint altogether, you can, but you have to provide the exact URL, including the search term (and our cache value). So, you could either put together the URL for the current search, or match on the path name, like this:
invalidate(url => url.pathname == "/api/todos");
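For completeness, the first option would mean reproducing the exact URL our loader fetched, search term and cache value included. A hypothetical sketch, assuming search and cacheBust hold the same values the loader used:
// assumes `search` and `cacheBust` are the same values our loader built its URL from
invalidate(`/api/todos?search=${encodeURIComponent(search)}&cache=${cacheBust}`);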
Personally, I find the solution that uses depends more explicit and simple. But see the docs for more info, of course, and decide for yourself.
If you’d like to see the reload button in action, the code for it is in this branch of the repo.
Parting thoughts
This was a long post, but hopefully not overwhelming. We dove into various ways we can cache data when using SvelteKit. Much of this was just a matter of using web platform primitives to add the correct cache and cookie values, knowledge of which will serve you in web development in general, beyond just SvelteKit.
Moreover, this is something you absolutely do not need all the time. Arguably, you should only reach for these sort of advanced features when you actually need them. If your datastore is serving up data quickly and efficiently, and you’re not dealing with any kind of scaling problems, there’s no sense in bloating your application code with needless complexity doing the things we talked about here.
As always, write clear, clean, simple code, and optimize when necessary. The purpose of this post was to provide you those optimization tools for when you truly need them. I hope you enjoyed it!
Caching Data in SvelteKit originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
15 Best New Fonts, January 2023
Your choice of typeface significantly impacts the tone of voice your designs adopt. Heritage, ambition, freshness, energy, utility and more can all be communicated with the right font.
And so, every month, we put together this roundup of the 15 best new fonts we’ve found on the web in the previous four weeks. Enjoy!
Bulk
Bulk is an awesome typeface that challenges how letters are constructed. Bulk uses heavy, block-shaped outlines and delicate linear ‘cuts’ to form its letters. It’s an excellent choice for posters, giant typography, and branding.
AW Conqueror Stincilla
We’ve featured AW Conqueror before, and AW Conqueror Stincilla is a delightful stencil variation on the form. It produces some beautiful shapes and is ideal for luxury branding, editorial work, and even as a display face.
Vesterbro Sans
Rarely do we see a sans-serif that we can honestly describe as refreshing, but Vesterbro Sans falls into that category. It’s expertly executed with simple details adding to the overall feeling of effortlessness. It’s also available as a variable font.
Miau
Miau is an awesomely over-the-top typeface that is barely legible. The ribbon-like letterforms are packed with energy. It works best when used in small doses.
Rikna
Rikna is a workhorse of a slab serif that works well at body font sizes and has enough detail to be interesting at display sizes. It’s a solid all-around choice for a project that’s serious but needs a touch of human warmth.
Austerlitz
Austerlitz is a family of pseudo-didone typefaces. It’s a flexible and highly usable family that works well for serious publications, digital, and print. The refined rhythm means that Austerlitz will work well in some branding projects.
Gramma
Gramma is a modern-looking sans-serif. Gramma has a distinctive style of terminal that creates visual interest at larger sizes and helps the letterforms keep a clean outline on screen at smaller sizes. It would make a great brand font.
Miracle Fairway
Miracle Fairway is a thick-stroked display typeface with tapered serifs that give the overall design a sense of motion. It’s a great option for logo design.
Vitrine
Vitrine is a high-contrast sans-serif that’s great at large sizes. It comes in nine weights, but the semi-bold, bold, and black have the highest contrast and, as a result, the most character. It works well as a display face and for logos.
Kelyon
Kelyon is a graceful display face with medieval and Art Nouveau influences. It has numerous alternates. It works best at display sizes and is a good choice for editorial design.
Fit Devanagari
Fit Devanagari is a highly stylized typeface, designed as a companion for the Latin typeface Fit, that can be used at any size. If you need to fill a space of a particular size, then Fit allows you to do so elegantly.
Precise Sans
Precise Sans is a tech-feeling sans-serif with a range of weights and (eventually) two italics. It’s an excellent choice for UI design, where clarity trumps character, but you still want a little personality. It’s still in beta, so expect changes.
Mistont
Mistont is a beautiful serif font with elegant curves and graceful ligatures. It’s an ideal choice for branding lifestyle products.
Exergue
Exergue is a stunning serif typeface that uses flared terminals to match its serifs. The result is blocks of text that feel unexpected and familiar at the same time. Exergue is an excellent choice for extended text passages where it adds character while maintaining readability.
Manier
Manier is a very usable typeface with angular wedges and generous, modern proportions. It comes in six weights with matching italics. It’s ideal if you’re looking to infuse your design with some confidence.
The post 15 Best New Fonts, January 2023 first appeared on Webdesigner Depot.
The Pros and Cons of Responsive Web Design in 2023
Responsive web design has been such a success for many web designers that it is generally seen as the default approach to creating a website, but it’s not as cut and dried as all that.
There are many different factors to consider when deciding whether or not to use a responsive approach to designing your websites, such as budget, timescale, and audience.
In this blog post, we’ll weigh the pros and cons of responsive web design to help you make an informed decision.
What Is Responsive Web Design?
In short, responsive web design (RWD) is a modern approach to designing websites that allows the website to respond intelligently to the device on which it is being viewed.
RWD uses techniques like media queries and relative units to create a flexible design that can grow or shrink depending on the size of the screen. Rather than having multiple versions for mobile and desktop, as used to be the case, this type of web design offers an all-in-one solution with a flexible layout that can adapt to various scenarios.
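As a minimal illustration (the class name and breakpoint here are hypothetical), a fluid layout leans on relative units first and reserves a media query for a genuine structural change:
.content {
  width: min(90%, 60rem); /* fluid width with a readable maximum */
  margin-inline: auto;
  font-size: clamp(1rem, 0.9rem + 0.5vw, 1.25rem); /* scales gently with the viewport */
}
@media (min-width: 48em) {
  .content {
    display: grid;
    grid-template-columns: 2fr 1fr; /* add a sidebar only on wider screens */
    gap: 2rem;
  }
}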
RWD is often confused with mobile-first web design, firstly because mobile-first is a crucial technique in responsive workflows, and secondly because RWD grew in popularity as more and more users viewed the web on mobile devices. However, you can have a mobile-first site that isn’t responsive.
Responsive web design essentially eliminates the need to have separate versions of sites for mobile and desktop-style devices.
The Pros of Responsive Web Design
There are seemingly endless pros to responsive web design.
- UX-friendly: RWD is excellent for responding to the needs of users. It allows users to access your website on any device, so they don’t have to switch devices. It also allows you to reach customers who don’t have a computer and only use a mobile device like a cell phone.
- SEO-friendly: RWD is good for SEO (Search Engine Optimization) because it helps people find your website on different devices, like phones and computers. Also, because you don’t have to maintain separate versions of your website for mobile and desktop, Google is less likely to penalize your site for duplicate content.
- Cost-effective: RWD can save a lot of time and money in creating multiple versions of the same website. Additionally, responsive web design allows you to maintain one website instead of several, which reduces maintenance and hosting costs.
- Future-proof: As technology continues to evolve, websites that are built responsively will be able to adapt quickly and keep up with the changes. This means that with responsive web design, your website won’t become obsolete as quickly.
The Cons of Responsive Web Design
Although there are considerable benefits to a responsive approach to building your websites, there are a few drawbacks that it’s important to consider.
- Front-end only: The biggest flaw with RWD is that it is a front-end approach only. This means that while you can change the layout of your website, you can’t change the actual content using responsive techniques.
- Design restrictions: As clever as RWD can be, some design elements don’t translate to different screen sizes; menus can be particularly difficult. You may find that you must compromise on your vision to make a site responsive.
- Increased development time: Creating a responsive website can take significantly longer than creating two versions (one for mobile and one for desktop), so it’s important to factor in additional development time when considering RWD.
- Performance issues: RWD uses code to adapt the design to different viewports. That code adds to the website payload and, if not carefully managed, can impact the performance of the website.
Is Responsive Web Design Worth the Effort?
For the vast majority of sites, RWD is a practical approach to creating a website. It increases the number of users you’re able to attract and ensures that when they arrive, your users have a better experience. RWD also improves your search engine ranking.
However, there are some cases when RWD is not the right choice. For example, if you need to deliver different content for mobile devices than desktop devices, then you will need separate sites for each type of device.
Tips for Responsive Web Design
If you choose an RWD approach, you can do a few things to mitigate the downsides and ensure that your website performs as well as you hope.
- Design for multiple viewports early: Create different designs for every significant viewport size. Make sure you know how the design should change at different sizes, so you’re not forced to adapt the design as you build it.
- Choose mobile-first: Take a mobile-first approach by designing the mobile version of your site before the desktop version; it is easier to scale a design up than scale it down.
- Limit media queries: Media queries are great for adapting a design but quickly lead to code bloat. Instead, rely on relative units as much as possible and reserve media queries for essential changes.
- Test extensively: Testing is essential for responsive web design. You must preview your finished site on as many devices as possible so that you know how your users will see it.
Conclusion
Responsive web design can be an excellent choice for most websites, as it allows you to create an experience that is optimized for different devices without the need for separate versions of your website.
However, there are some drawbacks to RWD that should also be taken into account before making a decision. It’s important to consider how much time and effort will go into creating a responsive site, whether or not you have content that must vary between mobile and desktop users, and if there may be any performance issues.
By following best practices, such as adopting a mobile-first approach and spending extra time on the design phase to ensure you have layouts prepared for multiple viewports, you can ensure that your website looks great across all devices while avoiding potential pitfalls associated with RWD.
Featured Image by vectorjuice on Freepik
The post The Pros and Cons of Responsive Web Design in 2023 first appeared on Webdesigner Depot.