Every human being needs, rather desperately, to feel accepted. They don’t necessarily need to feel accepted by everyone, but they do need to feel accepted by someone. It’s a part of our nature, and nature in general. Even as I type this, my cat Cleocatra is demanding that I give her attention, and if I don’t, she’ll leave my room in a huff.
[A few minutes later:] She didn’t leave. She stuck her claws into me. Users can have very similar reactions if their needs aren’t met when interacting with your website. Humans will anthropomorphize anything, and if they feel like your website doesn’t accept them for who they are, they might just go find one that will. Or do the claw thing.
This is why users need to feel accepted and respected when they use your site. I’ve put together some ideas about how to achieve that. Don’t expect a list of fifty genders to put in your forms; I’m not the guy to ask about that. This is just usability 101:
1. Respect Their Time
Build your website or product to be as efficient as possible. When users leave because your website didn’t load in five seconds or less, it’s not because they’re entitled. It’s because they have stuff to do, and a finite number of hours in the day to do it.
If your site asks them to jump through a bunch of hoops before they can even read your content, or get that free e-book, or what-have-you, you’re not respecting their time. If your sign-up form is too detailed and asks for too much information, you’re wasting their time.
Think of all the meetings you’ve had that could have been emails. If your site feels more like going to a meeting, they won’t want to go.
2. Avoid Assumptions & Judgment in all Content
People often hate it when you make assumptions about them. They hate it when those assumptions are wrong, and they especially hate it when the assumptions are right. What will tick anyone right off is when those assumptions come with judgment.
Goddammit Netflix. Yes, I’m still watching.
Marketing in general has operated on assumptions and judgment for years, promoting now-derided ideas about what it means to be male, female, a good parent, or a good person. If you haven’t noticed, these assumptions have been the subject of considerable debate for years, now, and many are even considered harmful.
When you make too many assumptions about who is, or should be, using your website, you risk running off users and customers. Now, why would you do that?
3. Encourage Trying Again
People make mistakes, and lots of them. It’s normal, it’s life, and it’s why we have form validation. Now, if your users are signing up for or buying something they feel they absolutely need, and can’t get anywhere else, they’ll stay. They’ll keep trying to fill out your form, or follow your app’s arcane process for doing things, no matter what.
I know I’ve personally abandoned checkout processes because, in the end, I felt like the thing I kinda sorta wanted to buy wasn’t worth the effort of trying to buy it twice. I figured that if they wanted my money so badly, they wouldn’t have made it so hard for me to spend it, and went on my merry way.
Having to re-fill entire forms is a particular pet peeve of mine. If there’s a mistake, let me go back and fix the one mistake I made. Do not make me refill entire sections of the form. That just makes me wish I hadn’t bothered in the first place. Encourage people to try again when a process fails by not making them redo more work.
You can also encourage them to try again by never making them feel dumb for screwing up in the first place. Error messages should be clear, gentle, and encouraging. Making users feel dumb is the fastest way to put them on the defensive, and defensive people don’t buy stuff. Make it clear that mistakes are just part of the process sometimes, and they are more likely to feel accepted.
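As a rough sketch of what that can look like in practice (the field name and wording are just examples), an inline error that keeps the user’s input and points gently at the one thing to fix might be marked up like this:

```html
<!-- Sketch: keep what the user typed, flag only the failing field, keep the tone friendly. -->
<label for="email">Email</label>
<input id="email" name="email" type="email"
       value="jane@example" aria-invalid="true" aria-describedby="email-error">
<p id="email-error">
  Almost there! It looks like that email is missing its ending (something like “.com”).
</p>
```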
4. Provide Clear Instructions
People are more likely to feel like they belong when they know what they’re “supposed” to do. Learning the rules of any peer group is the first step to feeling accepted in a new place.
Clear instructions can provide this confidence to users (and help avoid situations where they feel dumb). You can do this in your copy, in your micro-copy, or even with those animated walk-throughs that so many apps have nowadays. I’d even go further, and say to have illustrated instructions whenever you can.
Think of those credit card input forms that look like actual credit cards. They’re a perfect example of setting clear expectations and easily-understood requirements. Users that have a clear idea of how to do what they want to do on your site will feel like they belong there.
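Here’s a hedged sketch of that idea in markup; the labels, hints, and placeholder values are illustrative, and the autocomplete attributes simply tell the browser what each field expects:

```html
<!-- Labels, format hints, and autocomplete attributes set the same clear expectations
     that card-shaped payment forms do. -->
<label for="cc-number">Card number <small>(the 16 digits on the front; spaces are fine)</small></label>
<input id="cc-number" name="cc-number" inputmode="numeric" autocomplete="cc-number"
       placeholder="1234 5678 9012 3456">

<label for="cc-exp">Expiry date <small>(MM/YY)</small></label>
<input id="cc-exp" name="cc-exp" autocomplete="cc-exp" placeholder="08/27">
```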
5. Make a Human Point of Contact Available
“Nah, you’re doing fine.”
“Don’t worry about it. We have it covered on this end.”
“No, that was our mistake, really.”
“No worries, you got everything right.”
“Okay, that is a problem, but we can fix it.”
We all need reassurance from time to time. It helps to have another human being tell us that everything’s going to be okay in the end, and that we didn’t irrevocably ruin things for everyone. Want people to feel accepted? Give them a point of contact with people who can tell them those things.
Even if it’s just an email address, people need a way to talk to another person. Ultimately, a website can only do so much to make people feel accepted and respected. Sometimes, you just need the human touch to make a human connection.
Above all, delivering a feeling of acceptance and respect to people is about recognizing and even appreciating their humanity. We don’t always get enough of that in a digital world.
If a real estate agency wants to stay ahead of the competition, it is no longer enough to simply have great agents and outstanding service. Today every aspect of how an agency is run can have an impact on its bottom line – including real estate website design.
Think of it the way you would think of curb appeal: A home can be beautiful inside but if the first impression a potential buyer has is unkempt landscaping and a dirty exterior, it will be much harder to get them in the door. It is the same way with a website – a potential client’s first impression. Check out five of the most competitive real estate website designs and what makes them unique.
1. PropertyShark
One of the most challenging aspects of creating a competitive real estate website design is striking a balance between providing enough information and providing too much. You do want visitors to instantly know what the website is for and what they can do there, but you do not want to overwhelm them with search boxes, graphics, illustrations, and links.
PropertyShark’s modern website manages to strike this balance by carefully selecting and limiting the colors, fonts, and pictures in such a way that it is able to offer a wide range of search options, subscription choices, and drop-down menus without overwhelming the reader. A quick scroll down the page offers access to interesting blogs with well-chosen titles and quotes that give the reader an immediate sense of the subjects covered and their relevance.
For a website with so much valuable information available to its readers, PropertyShark manages to do an exceptional job of presenting the information in a fun and manageable way.
2. Homes.com
From the moment a visitor arrives at Homes.com, they understand their three main options: buying, renting, and checking home values. With a dynamic, expansive picture to greet viewers, followed by specific property listings in the area from which the viewer is visiting the website, there are several sections that immediately grab attention.
For the visitor who is committed to seeing everything the site has to offer, a short scroll down takes them to a section that then dominates the screen: an ad for the company’s Snap & Search program. The main picture is bright, bold, and instantly draws the eye, and the out of focus home in the background allows a visitor to substitute their perfect home for this rough outline.
By using contrasting information, photos, and data, Homes.com excites the visitor, forcing them to choose their own perspective. This gets them involved in the viewing process and increases the chance they will take the time to click around.
3. Century 21
Like PropertyShark, Century 21 focuses on just a few colors to help their website feel less cluttered. In fact, the overall aesthetic is one of minimalism. Compared to the other sites discussed thus far, this website has fewer links and pictures, and less information on its home page. The one picture they chose to showcase their entire brand brings one thing immediately to mind: luxury.
When a real estate company is making design decisions, one of the first choices they must make is whether to focus on a market niche or the market as a whole. While the previous websites have taken a more macro approach, Century 21 has clearly decided to focus on luxury and commercial clients. With clean lines, little clutter, and soft colors, their website instantly makes their focus clear.
4. CommercialCafe
The name and the website make it clear that CommercialCafe is all about commercial real estate. From the choice of the main image – a towering skyscraper – to the center-justified typography, this is another example of focusing on a niche market. The website keeps words to a minimum while offering over a dozen active links. In short, it is all about getting down to business – just like a commercial property owner would be.
5. Redfin
With a style that is the opposite of CommercialCafe’s cool, calm, collected design, Redfin focuses on the personal, warm relationships between people and the properties they occupy. The main image is a welcoming home with blue skies behind it. Below it, a slider showcases smiling families and individuals telling a personal story about their experience with the website. Redfin has focused their entire design process on providing an intimate experience for home buyers and sellers.
In a world built on abundance, where clients have unlimited choices—with numerous competitors offering similar products and services—getting noticed is becoming more and more difficult. Now more than ever, it is important for a petrepreneur to have an effective brand and marketing strategy to consistently attract ideal clients.
So, what is the key to standing out from the pack? Having a BRAND WITH BITE.
1 – BRANDING BUILDS TRUST
Your brand is the first experience most clients have with your company. It’s the persona of your business—the voice, feeling, design, and strategy that calls your audience to action and compels them to stay. A solid pet business brand is also a visual reminder of who you are, what you stand for, and the problems you solve, so it is important to remain consistent and cohesive.
Before you jump straight to the design portion of your brand image, think about the adjectives you want people to associate with your brand. Does your brand reflect those adjectives? If it does not, then your customers will wind up confused and your messaging will be muddied—not exactly the ideal situation if you want to attract your ideal client.
For example, the brand image below was created for a new pet business located in Texas. This business wanted to create a design that read as upscale and soothing while still projecting a modern minimalism. The softened blue palette gives their brand that soothing, trusting feeling and the thin, long fonts give off a dainty, upscale feeling.
Having a custom and cohesive guide for your brand visuals helps you create a memorable and recognizable image for your company. A simple logo alone does not make for a strong brand identity. Simply put, your pet business’s brand needs to be a series of thoughtful and professionally designed elements specifically created to help you be:
Distinct – stand out from the pack
Memorable – strong visual impact and therefore, easily recognizable
Cohesive – various print + digital assets that complement each other while establishing a consistent message
Authentic – tells a story that is uniquely you
Iconic – allows your brand to be timeless while still being able to evolve
This pet business brand was designed to be distinguished and easily recognizable by utilizing bold and striking shapes for their logo. To add a personal touch, it also showcases the client’s love of Boston Terriers. A set of patterns was also used to further this brand’s all-natural look and feel, since their treats are truly all-natural. After the launch of their new brand, their online and offline sales have nearly doubled. With a strong brand, your customers will have no problem remembering you for their future needs.
With the pet industry consistently growing and showing no signs of slowing down, it is more important than ever to set yourself apart from your competition. A strong brand identity leads to increased engagement, conversion, and growth. After all, first impressions matter, and you’ll never get a second chance to make that first impression (you WILL be judged on how your brand, or lack thereof, is initially perceived). Clients often make assessments of service or product quality based on how polished (or unpolished) a brand looks. So, if your brand image is less than stellar, you could be leaving sales on the table.
A professionally-designed brand matters in everything you do, and it adds value. Ultimately, it will give you a major competitive advantage, because branding is a visual, tangible, and controllable area that will set you apart from your competition. Make your audience’s experience with your brand unique and tail-waggingly exciting.
This pet business re-brand for Catnip Takeout is truly unique. Not only is it for a feline pet business, but it was also designed and developed to use only two colors. This was then carried throughout the entire company branding. A custom typeface that uses a strong pattern was created to emphasize that this toy maker is known for their use of bold and beautiful patterns for their cat toys.
There is no point in having a great pet product or service if your visual communication fails to connect your audience with your messaging. If your visual identity isn’t consistent and appealing then your audience may not take you seriously. Worse yet, your client or customer may choose your competitors instead of you – just because your competitors looked and felt more professional.
Branding is the key tool for meticulously connecting and presenting all of your communications in a way that provides clarity on your company differentiators and product offerings. Setting the brand standards for your company’s voice and visuals provides focus and guidance—no more aimless wanderings. With brand standards, you’ll be able to create content with intent.
An example of brand clarity and direction is seen here in The Tell-Tail Paw, LLC. This pet business’ brand was created and developed around the client’s formal background in gourmet baking and her love of literary classics. This brand was designed to exude a look of sophistication and class by using a bright, yet elegant color palette combined with a classy, upscale pattern and iconography to match.
As mentioned previously, creating a consistent, cohesive brand will help to build brand trust and emotional connections with your target audience. Since most consumers purchase based on emotion, it’s important to have this established from the beginning.
Knowing her audience well, the owner had a sweet, bright, and sophisticated brand look created that would instantly connect with her upscale clientele. In addition, the brand also included her two moms’ favorite colors: feminine pink and chocolate brown. To top things off, custom illustrations were created to further solidify the brand. This logo has been around for years and yet still connects with customers today.
When it comes to your pet business, we know all your time, skill, effort, and energy is poured into it. Having a professionally-designed brand identity will help solidify your worth and place within this highly-competitive industry. Since great design is the first thing your audience will notice both online and offline, you cannot afford to cut corners. Know you’re worth it and be willing to invest so you can show this to the rest of the world.
As a young nerd, I loved to immerse myself in digital worlds, learning the ins and outs of the rules someone else had created for me (intentionally or not). This fascination was great for my eventual career as a designer, but unfortunately, it was also like teaching someone kerning—once you learn how to quantify a bad user experience, you can’t go back.
But the older and crankier I get, the more I find myself losing patience when navigating these “delightful” experiences.
These days, I’m an impatient grump who doesn’t want to take work home. I just want to get in, get what I need, and get out. If there’s any delight I’m experiencing, it’s lost on me because I had such an effortless and annoyance-free time that it simply doesn’t stand out.
One of the features I find myself turning to over and over again is Safari’s Reader Mode. I read a lot of news, and with that comes a lot of bullshit. I now tap the cryptic little icon almost reflexively, confident that I’ll be transported to a land where I can focus on what matters most to me: content.
Tapping this button transports me to a land free of newsletter signup modals, surveys, pop-ups, pop-unders, flashing ad banners, automatically-playing video, app install prompts, breaking news alerts, passive-aggressive interstitials, and faux notification permission banners. It slices through the undesirable and unnecessary with ease; the Alexander the Great to the Gordian Knot that is poor user interface design.
Firefox also offers this reading mode. So does Edge. I find myself using it more and more on my laptop with every passing day—especially for reading long-form articles, like this piece. I’d be very surprised to see Chrome institute one natively, as Google is ultimately in the advertising business.
Building with accessible HTML standards is not a dead-end skill. Far from it. If you spend the effort to craft your experiences with a mind to semantics from the start, your content will be able to adapt to specialized reading modes, as well as to whatever the future holds, with little to no additional effort. Today’s Reader Mode could be tomorrow’s smart bathroom mirror.
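As a minimal sketch (the content, author, and URLs are placeholders), the kind of markup that survives Reader Mode intact is simply honest, semantic HTML:

```html
<article>
  <h1>How to keep readers out of Reader Mode</h1>
  <p>By <a href="/authors/example">An Author</a>, <time datetime="2019-03-01">March 1, 2019</time></p>
  <p>Body copy marked up as real paragraphs…</p>
  <figure>
    <img src="/img/example.jpg" alt="A meaningful description of the image">
    <figcaption>Captions make it through, too.</figcaption>
  </figure>
</article>
```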
Spending the effort is an important point: Good design isn’t about forcing someone to walk a tightrope across your carefully manicured lawn. Nor is it a puzzle box casually tossed to the user, hoping they’ll unlock it to reveal a hidden treasure. Good design is about doing the hard work to accommodate the different ways people access a solution to an identified problem.
For reading articles, the core problem is turning my ignorance about an issue into understanding (the funding model for this is a whole other complicated concern). The more obstructions you throw in my way to achieve this goal, the more I am inclined to leave and get my understanding elsewhere—all I’ll remember is how poor a time I had while trying to access your content. What is the value of an ad impression if it ultimately leads to that user never returning?
But this isn’t a website about digital media strategy, nor is it one about user conversion. This is a website about CSS and front-end development. What we’re going to discuss is how to keep people like me from hitting that button by relying on this nifty programming language the W3C so wisely gave us. Because if you don’t, all that other stuff—your newsletter signup boxes, your comments, your related articles, your engagement—will be cut away.
Inclusivity
What you want to do first is cast a wide net. The more people you can proactively accommodate from the outset, the more people you don’t unintentionally alienate. Our design choices should be invisible—we’re not trying to say, “this is for you.” That should be self-evident. What we’re trying to avoid are scenarios where someone encounters something that communicates, “this is for someone else.”
A basic paragraph style is the wellspring from which all your other type decisions should flow. It’s probably the most common and frequently invoked content type a website has, so it’s important to treat it with the care and respect it deserves. The web is typography, after all.
Heydon Pickering wrote about styling paragraphs back in 2011 with his post, The Perfect Paragraph. And here’s the thing: eight years later, this is all still solid advice (sheesh, I’ve been doing this for a while). When you make design decisions that work with the grain of the web platform, you gain the confidence that you’re creating resilient, robust, and accessible solutions that last.
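A minimal sketch of such a base paragraph style, using relative units so user preferences (and Reader Mode) stay in charge; the specific values and font stack are just starting points, not rules:

```css
body {
  font-family: Georgia, "Times New Roman", serif; /* swap in your own stack */
  font-size: 100%;      /* respect the user's default size */
  line-height: 1.5;
  color: #222;
  background-color: #fff;
}

p {
  max-width: 70ch;      /* keep the measure comfortable to read */
  margin-bottom: 1.5em;
  hyphens: auto;        /* support varies by browser and language */
}
```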
The neat part about this is that it frees up time to do other things, say, reading about gender bias and the undervaluing of HTML and CSS. If anything, do it for me. I am honestly not sure I can handle another case of 2,000 lines of JavaScript used to recreate position: absolute;.
Circumstance
Form
Even though responsive design is nearly a decade old at this point(!), we still seem to ignore a lot of the wisdom Ethan Marcotte so nicely teaches us for free. He’s a smart guy, you should pay attention to what he has to say.
After a complete lack of breakpoints, perhaps the biggest offender I still come across with regards to responsive design is the assumption that a small viewport means teeny-tiny type. Typically, the opposite is true. Small devices are made to be worn or carried, meaning that we move them in physical space to get them into a comfortable reading position. This is the opposite of a larger, more stationary device, such as a monitor, where we move our body to accommodate it instead.
A comfortable reading position means not forcing someone to hold a phone two centimeters away from their face. Ergonomics aren’t likely to change, but devices will. Because of that, you should craft your breakpoint names to be abstract. I personally like names that keep usability in mind, so something along the lines of, “wrist, palm, lap, desk, wall.” It helps keep the user’s circumstance top-of-mind, and moves you away from associating only certain kinds of content as being viable on certain kinds of devices.
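Here’s a small Sass sketch of that idea; the names echo the ones suggested above, while the values are placeholders you’d derive from your own content rather than from any particular device:

```scss
$breakpoints: (
  "palm": 36em,
  "lap":  48em,
  "desk": 64em,
  "wall": 90em,
);

// Usage: wrap rules in an ergonomics-named media query.
@mixin from($name) {
  @media (min-width: map-get($breakpoints, $name)) {
    @content;
  }
}

.article {
  @include from("lap") {
    max-width: 70ch;
  }
}
```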
These ergonomically-derived designs can be achieved with the help from people like Rachel Andrew, whose in-depth explorations of CSS grid help us understand the power behind a real CSS layout system. Sass experts like Miriam Suzanne then teach us how to use True to codify these layouts and reliably integrate them into our larger Sass systems.
You also want to avoid fallacious device sniffing approaches, or making gross assumptions about a user’s circumstances and capabilities. Just let me increase and decrease that type size. Reader Mode lets me, so I’m going to get there one way or another.
Connection
The other thing you need to think about is how that ideal paragraph design actually gets served to a device. A big part of that involves loading our fonts, and ensuring that the loading process prioritizes user experience.
Text
Text downloads quickly; a lot faster than other exotic kinds of content. Browsers will render it gleefully, as it is historically the most important part of the payload. This means that the Reader Mode button is going to show up a lot faster than that distracting auto-playing video of talking heads so thoughtfully jammed into the bottom right-hand corner of my viewport.
And what if we’re on a slow, intermittent, and/or metered connection? Top-of-the-line MacBooks still have to use hotel wifi, just like everyone else.
You want to keep the page from jumping around when our paragraph font loads. This prevents the terrible experience of forcing me to scroll around to rediscover my place as things shift into place. It also helps prevent me from mis-clicking, taking me away from what I want to read because I had the audacity to interact with the page before the bitcoin miners are deployed (thankfully, good people like Laura Kalbag can help us with that one).
The temptation to hit that Reader Mode button is strong, because when I see the main text of the page show up, I know I can easily and reliably avoid all these potential issues.
Helen V. Holmes wrote Type is Your Right!, a beautiful article that effortlessly blends typographic history, capability, and performance. Notably, she discusses how to manage the Flash of Invisible Text (FOIT) and Flash of Unstyled Text (FOUT) to best corral all the aforementioned issues. In response, Monica Dinculescu made Font style matcher, a fantastic tool that lets you bend, stretch, squish, squash, and torture type in ways that would make your stodgy typography professor faint, all in the service of preventing layout jank.
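A hedged sketch of the CSS side of that advice (the font name and file path are hypothetical): font-display: swap keeps text visible in a fallback while the web font loads, and the fallback itself can be tuned in Font style matcher to minimize the shift when the swap happens.

```css
@font-face {
  font-family: "Body Serif";                      /* hypothetical web font */
  src: url("/fonts/body-serif.woff2") format("woff2");
  font-display: swap;                             /* render fallback text immediately, no FOIT */
}

body {
  /* Georgia stands in as a metrics-similar fallback; tune it with Font style matcher */
  font-family: "Body Serif", Georgia, serif;
}
```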
Images
You can (and should) make all sorts of clever optimizations to ensure we’re delivering our images as efficiently as possible. But what happens while I’m waiting for those images to show up? What if they never do?
The other type of image you want to consider is icons. There are lots of reasons not to use icon fonts. Adding one more reason to toss on the pile: icon fonts may not hold up in Reader Mode, as they are constructed using text glyphs. When Reader Mode passes over a page, it may convert the glyph to use the font you specify. This could make for a disastrous experience, especially if the icon is used to communicate critical functionality (e.g. “Press the Home button (?) to return to the main menu.”).
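One way around that, sketched below, is an inline SVG paired with real text; the icon is decorative, so the label survives no matter what font Reader Mode applies. The path data here is a placeholder.

```html
<button type="button">
  <svg aria-hidden="true" focusable="false" width="16" height="16" viewBox="0 0 16 16">
    <!-- placeholder path; use your actual icon artwork -->
    <path d="M8 1 1 8h2v7h4v-4h2v4h4V8h2z" />
  </svg>
  Home
</button>
```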
To help make my reading experience as comfortable as possible, Reader Mode allows me to adjust things like the typeface, the text and background colors, the font size and line height, and the number of words per line. This is great. I’ll frequently toggle back and forth between light and dark backgrounds depending on the time of day.
The web is flexible. Working on it means getting over your ego and learning to let go. That means accepting that the medium will never be pixel perfect. It means embracing technology like relative units, and more importantly, philosophies like Intrinsic Web Design. That’s brought to us by Jen Simmons, a tireless and passionate advocate for web standards.
I’d love to read your website. I’d love for your harmonious typography to quietly usher me into a flow state, making me forget I was even browsing your site at all.
If the importance of good HTML isn’t well-understood by the newer breed of JavaScript developers, then it’s my job as a DOWF (Dull Old Web Fart) to explain it.
Then he points out some very practical situations in which good HTML brings meaningful benefits. Maybe benefits isn’t the right word so much as requirements, since most of it is centered on accessibility.
I hope I’ve shown you that choosing the correct HTML isn’t purely an academic exercise…
Semantic HTML will give usability benefits to many users, help to future-proof your work, potentially boost your search engine results, and help people with disabilities use your site.
I think it’s fair to call HTML easy. Compared to many other things you’ll learn in your journey building websites, perhaps it is. All the more reason to get it right.
… take the radio button. All you have to do is give all the radio buttons in your button group the same name, each with a different value. Associate a label with each radio button to define what it means. Simply using input type="radio" allows for selecting only one value, with completely accessible, fast keyboard navigation. Each radio button is keyboard focusable. Users can select a different radio button by using the arrow keys, or by clicking anywhere on the label or button. The arrows cycle through the radio buttons, wrapping from the last in the group back to the first with a press of the down or right arrow. Developers don’t have to listen for keyboard, mouse, or touch interactions with JavaScript. These native interactions are robust and accessible. There is rarely a reason to rewrite them, especially since they’ll always work, even if the JavaScript doesn’t.
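For reference, a minimal sketch of that native pattern (the field name and values are placeholders):

```html
<fieldset>
  <legend>Shipping speed</legend>
  <!-- same name groups the buttons; labels make them clickable and announceable -->
  <label><input type="radio" name="shipping" value="standard" checked> Standard</label>
  <label><input type="radio" name="shipping" value="express"> Express</label>
  <label><input type="radio" name="shipping" value="overnight"> Overnight</label>
</fieldset>
```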
Web performance is a tricky beast, isn’t it? How do we actually know where we stand in terms of performance, and what our performance bottlenecks exactly are? Is it expensive JavaScript, slow web font delivery, heavy images, or sluggish rendering? Is it worth exploring tree-shaking, scope hoisting, code-splitting, and all the fancy loading patterns with intersection observer, server push, clients hints, HTTP/2, service workers and — oh my — edge workers? And, most importantly, where do we even start improving performance and how do we establish a performance culture long-term?
Back in the day, performance was often a mere afterthought. Often deferred till the very end of the project, it would boil down to minification, concatenation, asset optimization and potentially a few fine adjustments on the server’s config file. Looking back now, things seem to have changed quite significantly.
Performance isn’t just a technical concern: it matters, and when baking it into the workflow, design decisions have to be informed by their performance implications. Performance has to be measured, monitored and refined continually, and the growing complexity of the web poses new challenges that make it hard to keep track of metrics, because metrics will vary significantly depending on the device, browser, protocol, network type and latency (CDNs, ISPs, caches, proxies, firewalls, load balancers and servers all play a role in performance).
So, if we created an overview of all the things we have to keep in mind when improving performance — from the very start of the process until the final release of the website — what would that list look like? Below you’ll find a (hopefully unbiased and objective) front-end performance checklist for 2019 — an updated overview of the issues you might need to consider to ensure that your response times are fast, user interaction is smooth and your sites don’t drain users’ bandwidth.
Micro-optimizations are great for keeping performance on track, but it’s critical to have clearly defined targets in mind — measurable goals that would influence any decisions made throughout the process. There are a couple of different models, and the ones discussed below are quite opinionated — just make sure to set your own priorities early on.
Establish a performance culture. In many organizations, front-end developers know exactly what common underlying problems are and what loading patterns should be used to fix them. However, as long as there is no established endorsement of the performance culture, each decision will turn into a battlefield of departments, breaking up the organization into silos. You need business stakeholder buy-in, and to get it, you need to establish a case study on how speed benefits the metrics and Key Performance Indicators (KPIs) they care about.
Without a strong alignment between dev/design and business/marketing teams, performance isn’t going to sustain long-term. Study common complaints coming into customer service and see how improving performance can help relieve some of these common problems.
Run performance experiments and measure outcomes — both on mobile and on desktop. It will help you build up a company-tailored case study with real data. Furthermore, using data from case studies and experiments published on WPO Stats will help raise the business’s awareness of why performance matters and what impact it has on user experience and business metrics. Stating that performance matters alone isn’t enough though — you also need to establish some measurable and trackable goals and observe them.
How to get there? In her talk on Building Performance for the Long Term, Allison McKnight shares a comprehensive case-study of how she helped establish a performance culture at Etsy (slides).
Goal: Be at least 20% faster than your fastest competitor. According to psychological research, if you want users to feel that your website is faster than your competitor’s website, you need to be at least 20% faster. Study your main competitors, collect metrics on how they perform on mobile and desktop and set thresholds that would help you outpace them. To get accurate results and goals though, first study your analytics to see what your users are on. You can then mimic the 90th percentile’s experience for testing.
Note: If you use Page Speed Insights (no, it isn’t deprecated), you can get CrUX performance data for specific pages instead of just the aggregates. This data can be much more useful for setting performance targets for assets like “landing page” or “product listing”. And if you are using CI to test the budgets, you need to make sure your tested environment matches CrUX if you used CrUX for setting the target (thanks Patrick Meenan!).
Collect data, set up a spreadsheet, shave off 20%, and set up your goals (performance budgets) this way. Now you have something measurable to test against. If you’re keeping the budget in mind and trying to ship down just the minimal script to get a quick time-to-interactive, then you’re on a reasonable path.
Need resources to get started?
Addy Osmani has written a very detailed write-up on how to start performance budgeting, how to quantify the impact of new features and where to start when you are over budget.
Also, make both performance budget and current performance visible by setting up dashboards with graphs reporting build sizes. There are many tools allowing you to achieve that: SiteSpeed.io dashboard (open source), SpeedCurve and Calibre are just a few of them, and you can find more tools on perf.rocks.
For instance, just like Pinterest, you could create a custom eslint rule that disallows importing from files and directories that are known to be dependency-heavy and would bloat the bundle. Set up a listing of “safe” packages that can be shared across the entire team.
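A sketch of such a rule using ESLint’s built-in no-restricted-imports option; the package names and messages below are only examples, not a recommendation for your particular stack:

```js
// .eslintrc.js (sketch)
module.exports = {
  rules: {
    'no-restricted-imports': ['error', {
      paths: [
        {
          name: 'moment', // example of a dependency-heavy package
          message: 'This package bloats the bundle; use the shared date utilities instead.',
        },
        {
          name: 'lodash',
          message: 'Import individual methods (e.g. lodash/get) instead of the whole package.',
        },
      ],
    }],
  },
};
```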
Beyond performance budgets, think about critical customer tasks that are most beneficial to your business. Set and discuss acceptable time thresholds for critical actions and establish “UX ready” user timing marks that the entire organization has agreed on. In many cases, user journeys will touch on the work of many different departments, so alignment in terms of acceptable timings will help support or prevent performance discussions down the road. Make sure that additional costs of added resources and features are visible and understood.
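Those agreed-upon “UX ready” moments can be recorded with the User Timing API. A small sketch, assuming a hypothetical “add to cart” button is the critical action for the page:

```js
// Mark the moment the critical action becomes usable…
performance.mark('add-to-cart:ready');

// …and measure it against the start of navigation.
performance.measure('add-to-cart-ready', 'navigationStart', 'add-to-cart:ready');

const [entry] = performance.getEntriesByName('add-to-cart-ready');
console.log(`Add to cart was usable after ${Math.round(entry.duration)}ms`);
// In production you'd beacon this value to your RUM/analytics endpoint instead of logging it.
```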
Also, as Patrick Meenan suggested, it’s worth to plan out a loading sequence and trade-offs during the design process. If you prioritize early on which parts are more critical, and define the order in which they should appear, you will also know what can be delayed. Ideally, that order will also reflect the sequence of your CSS and JavaScript imports, so handling them during the build process will be easier. Also, consider what the visual experience should be in “in-between”-states, while the page is being loaded (e.g. when web fonts aren’t loaded yet).
Planning, planning, planning. It might be tempting to get into quick “low-hanging-fruits”-optimizations early on — and eventually it might be a good strategy for quick wins — but it will be very hard to keep performance a priority without planning and setting realistic, company-tailored performance goals.
Choose the right metrics. Not all metrics are equally important. Study what metrics matter most to your application: usually it will be related to how fast you can start rendering the most important pixels of your product and how quickly you can provide input responsiveness for those rendered pixels. This knowledge will give you the best optimization target for ongoing efforts.
One way or another, rather than focusing on full page loading time (via onLoad and DOMContentLoaded timings, for example), prioritize page loading as perceived by your customers. That means focusing on a slightly different set of metrics. In fact, choosing the right metric is a process without obvious winners.
Based on Tim Kadlec’s research and Marcos Iglesias’ notes in his talk, traditional metrics could be grouped into a few sets. Usually, we’ll need all of them to get a complete picture of performance, and in your particular case some of them might be more important than others.
Quantity-based metrics measure the number of requests, weight and a performance score. Good for raising alarms and monitoring changes over time, not so good for understanding user experience.
Milestone metrics use states in the lifetime of the loading process, e.g. Time To First Byte and Time To Interactive. Good for describing the user experience and monitoring, not so good for knowing what happens between the milestones.
Rendering metrics provide an estimate of how fast content renders (e.g. Start Render time, Speed Index). Good for measuring and tweaking rendering performance, but not so good for measuring when important content appears and can be interacted with.
Custom metrics measure a particular, custom event for the user, e.g. Twitter’s Time To First Tweet and Pinterest’s PinnerWaitTime. Good for describing the user experience precisely, not so good for scaling the metrics and comparing with competitors.
To complete the picture, we’d usually look out for useful metrics among all of these groups. Usually, the most specific and relevant ones are:
First Meaningful Paint (FMP) Provides the timing when primary content appears on the page, providing an insight into how quickly the server outputs any data. A long FMP usually indicates JavaScript blocking the main thread, but could be related to back-end/server issues as well.
Time to Interactive (TTI) The point at which layout has stabilized, key web fonts are visible, and the main thread is available enough to handle user input — basically the time mark when a user can interact with the UI. The key metric for understanding how long a user has to wait before using the site without lag.
First Input Delay (FID), or Input responsiveness The time from when a user first interacts with your site to the time when the browser is actually able to respond to that interaction. Complements TTI very well as it describes the missing part of the picture: what happens when a user actually interacts with the site. Intended as a RUM metric only. There is a JavaScript library for measuring FID in the browser (a short measurement sketch also follows below).
Speed Index Measures how quickly the page contents are visually populated; the lower the score, the better. The Speed Index score is computed based on the speed of visual progress, but it’s merely a computed value. It’s also sensitive to the viewport size, so you need to define a range of testing configurations that match your target audience (thanks, Boris!).
CPU time spent A metric that indicates how busy the main thread is with the processing of the payload. It shows how often and how long the main thread is blocked, working on painting, rendering, scripting and loading. High CPU time is a clear indicator of a janky experience, i.e. when the user experiences a noticeable lag between their action and a response. With WebPageTest, you can select “Capture Dev Tools Timeline” on the “Chrome” tab to expose the breakdown of the main thread as it runs on any device.
Ad Weight Impact If your site depends on the revenue generated by advertising, it’s useful to track the weight of ad related code. Paddy Ganti’s script constructs two URLs (one normal and one blocking the ads), prompts the generation of a video comparison via WebPageTest and reports a delta.
Deviation metrics As noted by Wikipedia engineers, data on how much variance exists in your results could tell you how reliable your instruments are, and how much attention you should pay to deviations and outliers. Large variance is an indicator of adjustments needed in the setup. It also helps you understand if certain pages are more difficult to measure reliably, e.g. due to third-party scripts causing significant variation. It might also be a good idea to track browser versions to understand bumps in performance when a new browser version is rolled out.
Custom metrics Custom metrics are defined by your business needs and customer experience. They require you to identify important pixels, critical scripts, necessary CSS and relevant assets and measure how quickly they get delivered to the user. For that, you can monitor Hero Rendering Times, or use the Performance API, marking particular timestamps for events that are important for your business. Also, you can collect custom metrics with WebPageTest by executing arbitrary JavaScript at the end of a test.
Steve Souders has a detailed explanation of each metric. It’s important to notice that while Time-To-Interactive is measured by running automated audits in the so-called lab environment, First Input Delay represents the actual user experience, with actual users experiencing a noticeable lag. In general, it’s probably a good idea to always measure and track both of them.
Note: both FID and TTI do not account for scrolling behavior; scrolling can happen independently since it’s off-main-thread, so for many content consumption sites these metrics might be much less important (thanks, Patrick!).
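Where the Event Timing API is available, a field measurement of FID can be sketched roughly like this (otherwise, fall back to the polyfill library mentioned above):

```js
new PerformanceObserver((list) => {
  const [firstInput] = list.getEntries();
  if (firstInput) {
    // FID = the gap between the user's first interaction and the moment
    // the browser could actually start handling it.
    const fid = firstInput.processingStart - firstInput.startTime;
    console.log(`First Input Delay: ${Math.round(fid)}ms`);
  }
}).observe({ type: 'first-input', buffered: true });
```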
Gather data on a device representative of your audience. To gather accurate data, we need to thoroughly choose devices to test on. Good options are a Moto G4, a mid-range Samsung device, a good middle-of-the-road device like a Nexus 5X, and a slow device like an Alcatel 1X, perhaps in an open device lab. For testing on slower thermal-throttled devices, you could also get a Nexus 2, which costs just around $100.
If you don’t have a device at hand, emulate mobile experience on desktop by testing on a throttled network (e.g. 150ms RTT, 1.5 Mbps down, 0.7 Mbps up) with a throttled CPU (5× slowdown). Eventually switch over to regular 3G, 4G and Wi-Fi. To make the performance impact more visible, you could even introduce 2G Tuesdays or set up a throttled 3G network in your office for faster testing.
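If you’d rather script those throttled lab runs, here’s a sketch using Puppeteer and the DevTools protocol with the same profile as above (the URL is a placeholder):

```js
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  const client = await page.target().createCDPSession();

  await client.send('Network.emulateNetworkConditions', {
    offline: false,
    latency: 150,                                 // 150ms RTT
    downloadThroughput: (1.5 * 1024 * 1024) / 8,  // 1.5 Mbps down
    uploadThroughput: (0.7 * 1024 * 1024) / 8,    // 0.7 Mbps up
  });
  await client.send('Emulation.setCPUThrottlingRate', { rate: 5 }); // 5× CPU slowdown

  await page.goto('https://example.com/', { waitUntil: 'networkidle0' });
  // …collect your metrics here…
  await browser.close();
})();
```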
Keep in mind that on a mobile device, you should be expecting a 4×–5× slowdown compared to desktop machines. Mobile devices have different GPUs, CPUs, memory and battery characteristics. While download times are critical for low-end networks, parse times are critical for phones with slow CPUs. In fact, parse times on mobile are 36% higher than on desktop. So always test on an average device — a device that is most representative of your audience.
Luckily, there are many great options that help you automate the collection of data and measure how your website performs over time according to these metrics. Keep in mind that a good performance picture covers a set of performance metrics, lab data and field data:
Synthetic testing tools collect lab data in a reproducible environment with predefined device and network settings (e.g. Lighthouse, WebPageTest) and
Real User Monitoring (RUM) tools evaluate user interactions continuously and collect field data (e.g. SpeedCurve, New Relic — both tools provide synthetic testing, too).
The former is particularly useful during development as it will help you identify, isolate and fix performance issues while working on the product. The latter is useful for long-term maintenance as it will help you understand your performance bottlenecks as they are happening live — when users actually access the site.
Note: It’s always a safer bet to choose network-level throttlers, external to the browser, as, for example, DevTools has issues interacting with HTTP/2 push, due to the way it’s implemented (thanks, Yoav, Patrick!). For Mac OS, we can use Network Link Conditioner, for Windows Windows Traffic Shaper, for Linux netem, and for FreeBSD dummynet.
Set up “clean” and “customer” profiles for testing. While running tests in passive monitoring tools, it’s a common strategy to turn off anti-virus and background CPU tasks, remove background bandwidth transfers and test with a clean user profile without browser extensions to avoid skewed results (Firefox, Chrome).
However, it’s also a good idea to study which extensions your customers are using frequently, and test with a dedicated “customer” profile as well. In fact, some extensions might have a profound performance impact on your application, and if your users use them a lot, you might want to account for it up front. “Clean” profile results alone are overly optimistic and can be crushed in real-life scenarios.
Share the checklist with your colleagues. Make sure that the checklist is familiar to every member of your team to avoid misunderstandings down the line. Every decision has performance implications, and the project would hugely benefit from front-end developers properly communicating performance values to the whole team, so that everybody would feel responsible for it, not just front-end developers. Map design decisions against performance budget and the priorities defined in the checklist.
Setting Realistic Goals
100-millisecond response time, 60 fps. For an interaction to feel smooth, the interface has 100ms to respond to the user’s input. Any longer than that, and the user perceives the app as laggy. RAIL, a user-centered performance model, gives you healthy targets: to allow for a <100 millisecond response, the page must yield control back to the main thread at the latest after every 50 milliseconds. Estimated Input Latency tells us if we are hitting that threshold, and ideally, it should be below 50ms. For high-pressure points like animation, it’s best to do nothing else where you can and the absolute minimum where you can’t.
Also, each frame of animation should be completed in less than 16 milliseconds, thereby achieving 60 frames per second (1 second ÷ 60 = 16.6 milliseconds) — preferably under 10 milliseconds. Because the browser needs time to paint the new frame to the screen, your code should finish executing before hitting the 16.6 milliseconds mark. We’re starting to have conversations about 120fps (e.g. iPad’s new screens run at 120Hz) and Surma has covered some rendering performance solutions for 120fps, but that’s probably not a target we’re looking at just yet.
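To see when that 50ms budget is being blown, you can watch for long main-thread tasks with the Long Tasks API; a sketch follows (browser support is limited, so treat it as a progressive enhancement for your monitoring):

```js
const observer = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    // Anything over ~50ms puts the 100ms response budget at risk.
    console.warn(`Long task: ${Math.round(entry.duration)}ms`, entry);
  }
});
observer.observe({ entryTypes: ['longtask'] });
```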
Speed Index < 1250, TTI < 5s on 3G, critical file size budget < 170KB (gzipped). Although it might be very difficult to achieve, a good ultimate goal would be First Meaningful Paint under 1 second and a Speed Index value under 1250. Considering the baseline being a $200 Android phone (e.g. Moto G4) on a slow 3G network, emulated at 400ms RTT and 400kbps transfer speed, aim for Time to Interactive under 5s, and for repeat visits, aim for under 2s (achievable only with a service worker).
Notice that, when speaking about interactivity metrics, it’s a good idea to distinguish between First CPU Idle and Time To Interactive to avoid misunderstandings down the line. The former is the earliest point after the main content has rendered (where there is at least a 5-second window where the page is responsive). The latter is the point where the page can be expected to always be responsive to input (thanks, Philip Walton!).
We have two major constraints that effectively shape a reasonable target for speedy delivery of content on the web. On the one hand, we have network delivery constraints due to TCP Slow Start. The first 14KB of the HTML is the most critical payload chunk — and the only part of the budget that can be delivered in the first roundtrip (which is all you get in 1 sec at 400ms RTT due to mobile wake-up times).
On the other hand, we have hardware constraints on memory and CPU due to JavaScript parsing times (we’ll talk about them in detail later). To achieve the goals stated in the first paragraph, we have to consider the critical file size budget for JavaScript. Opinions vary on what that budget should be (and it heavily depends on the nature of your project), but a budget of 170KB JavaScript gzipped already would take up to 1s to parse and compile on an average phone. Assuming that 170KB expands to 3× that size when decompressed (0.7MB), that already could be the death knell of a “decent” user experience on a Moto G4 or Nexus 2.
Of course, your data might show that your customers are not on these devices, but perhaps they simply don’t show up in your analytics because your service is inaccessible to them due to slow performance. In fact, Google’s Alex Russell recommends aiming for 130–170KB gzipped as a reasonable upper boundary, and exceeding this budget should be an informed and deliberate decision. In the real world, most products aren’t even close: an average bundle size today is around 400KB, which is up 35% compared to late 2015. On a middle-class mobile device, that accounts for 30-35 seconds of Time-To-Interactive.
We could also go beyond the bundle size budget though. For example, we could set performance budgets based on the activities of the browser’s main thread, i.e. paint time before start render, or track down front-end CPU hogs. Tools such as Calibre, SpeedCurve and Bundlesize can help you keep your budgets in check, and can be integrated into your build process.
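As one example of wiring a budget into the build, a Bundlesize configuration in package.json might look roughly like this; the file globs and limits are placeholders for your own build output and targets:

```json
{
  "bundlesize": [
    { "path": "./dist/main.*.js",   "maxSize": "170 kB" },
    { "path": "./dist/vendor.*.js", "maxSize": "100 kB" }
  ]
}
```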
Also, a performance budget probably shouldn’t be a fixed value. Depending on the network connection, performance budgets should adapt, but payloads on a slower connection are much more “expensive”, regardless of how they’re used.
Defining The Environment
Choose and set up your build tools. Don’t pay too much attention to what’s supposedly cool these days. Stick to your environment for building, be it Grunt, Gulp, Webpack, Parcel, or a combination of tools. As long as you are getting the results you need and you have no issues maintaining your build process, you’re doing just fine.
Among the build tools, Webpack seems to be the most established one, with literally hundreds of plugins available to optimize the size of your builds. Getting started with Webpack can be tough though. So if you want to get started, there are some great resources out there:
Sean Larkin has a free course on Webpack: The Core Concepts and Jeffrey Way has released a fantastic free course on Webpack for everyone. Both of them are great introductions for diving into Webpack.
Webpack Fundamentals is a very comprehensive 4h course with Sean Larkin, released by FrontendMasters.
Webpack examples has hundreds of ready-to-use Webpack configurations, categorized by topic and purpose. Bonus: there is also a Webpack config configurator that generates a basic configuration file.
awesome-webpack is a curated list of useful Webpack resources, libraries and tools, including articles, videos, courses, books and examples for Angular, React and framework-agnostic projects.
Use progressive enhancement as a default. Keeping progressive enhancement as the guiding principle of your front-end architecture and deployment is a safe bet. Design and build the core experience first, and then enhance the experience with advanced features for capable browsers, creating resilient experiences. If your website runs fast on a slow machine with a poor screen in a poor browser on a sub-optimal network, then it will only run faster on a fast machine with a good browser on a decent network.
Choose a strong performance baseline. With so many unknowns impacting loading — the network, thermal throttling, cache eviction, third-party scripts, parser blocking patterns, disk I/O, IPC latency, installed extensions, antivirus software and firewalls, background CPU tasks, hardware and memory constraints, differences in L2/L3 caching, RTTs — JavaScript has the heaviest cost of the experience, next to web fonts blocking rendering by default and images often consuming too much memory. With the performance bottlenecks moving away from the server to the client, as developers, we have to consider all of these unknowns in much more detail.
As noted by Seb Markbåge, a good way to measure start-up costs for frameworks is to first render a view, then delete it and then render again as it can tell you how the framework scales. The first render tends to warm up a bunch of lazily compiled code, which a larger tree can benefit from when it scales. The second render is basically an emulation of how code reuse on a page affects the performance characteristics as the page grows in complexity.
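A rough sketch of that render-delete-render measurement, using 2019-era React APIs (App and the mount node are hypothetical; adapt the idea to whichever framework you’re evaluating):

```js
import React from 'react';
import ReactDOM from 'react-dom';
import App from './App'; // hypothetical root component

const root = document.getElementById('root');

const t0 = performance.now();
ReactDOM.render(React.createElement(App), root);   // cold render: warms up lazily compiled code
const t1 = performance.now();

ReactDOM.unmountComponentAtNode(root);

const t2 = performance.now();
ReactDOM.render(React.createElement(App), root);   // warm render: approximates code reuse as the page grows
const t3 = performance.now();

console.log(`cold: ${Math.round(t1 - t0)}ms, warm: ${Math.round(t3 - t2)}ms`);
```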
It might sound obvious but worth stating: some projects can also benefit from removing an existing framework altogether. Once a framework is chosen, you’ll be staying with it for at least a few years, so if you need to use one, make sure your choice is informed and well considered.
Inian Parameshwaran has measured the performance footprint of the top 50 frameworks (against First Contentful Paint — the time from navigation to the time when the browser renders the first bit of content from the DOM). Inian discovered that, out there in the wild, Vue and Preact are the fastest across the board — both on desktop and mobile, followed by React (slides). You could examine your framework candidates and the proposed architecture, and study how most solutions out there perform, e.g. with server-side rendering or client-side rendering, on average.
Baseline performance cost matters. According to a study by Ankur Sethi, “your React application will never load faster than about 1.1 seconds on an average phone in India, no matter how much you optimize it. Your Angular app will always take at least 2.7 seconds to boot up. The users of your Vue app will need to wait at least 1 second before they can start using it.” You might not be targeting India as your primary market anyway, but users accessing your site with suboptimal network conditions will have a comparable experience. In exchange, your team gains maintainability and developer efficiency, of course. But this consideration needs to be deliberate.
You could go as far as evaluating a framework (or any JavaScript library) on Sacha Greif’s 12-point scale scoring system by exploring features, accessibility, stability, performance, package ecosystem, community, learning curve, documentation, tooling, track record, team, compatibility, security for example. But on a tough schedule, it’s a good idea to consider at least the total cost on size + initial parse times before choosing an option; lightweight options such as Preact, Inferno, Vue, Svelte or Polymer can get the job done just fine. The size of your baseline will define the constraints for your application’s code.
A good starting point is to choose a good default stack for your application. Gatsby.js (React), Preact CLI, and PWA Starter Kit provide reasonable defaults for fast loading out of the box on average mobile hardware.
Consider using the PRPL pattern and app shell architecture. Different frameworks will have different effects on performance and will require different strategies of optimization, so you have to clearly understand all of the nuts and bolts of the framework you’ll be relying on. When building a web app, look into the PRPL pattern and application shell architecture. The idea is quite straightforward: push the minimal code needed to get interactive for the initial route to render quickly, then use a service worker for caching and pre-caching resources, and then lazy-load the routes you need, asynchronously.
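The “lazy-load routes asynchronously” part can be as simple as a dynamic import; a sketch with a hypothetical settings route:

```js
document.querySelector('[data-route="settings"]').addEventListener('click', async () => {
  // The chunk for this route is only fetched when the user actually navigates to it.
  const { renderSettings } = await import('./routes/settings.js'); // hypothetical module
  renderSettings(document.getElementById('app'));
});
```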
Have you optimized the performance of your APIs? APIs are communication channels for an application to expose data to internal and third-party applications via so-called endpoints. When designing and building an API, we need a reasonable protocol to enable the communication between the server and third-party requests. Representational State Transfer (REST) is a well-established, logical choice: it defines a set of constraints that developers follow to make content accessible in a performant, reliable and scalable fashion. Web services that conform to the REST constraints are called RESTful web services.
As with good ol’ HTTP requests, when data is retrieved from an API, any delay in server response will propagate to the end user, hence delaying rendering. When a resource wants to retrieve some data from an API, it will need to request the data from the corresponding endpoint. A component that renders data from several resources, such as an article with comments and author photos in each comment, may need several roundtrips to the server to fetch all the data before it can be rendered. Furthermore, the amount of data returned through REST is often more than what is needed to render that component.
If many resources require data from an API, the API might become a performance bottleneck. GraphQL provides a performant solution to these issues. Per se, GraphQL is a query language for your API, and a server-side runtime for executing queries by using a type system you define for your data. Unlike REST, GraphQL can retrieve all data in a single request, and the response will be exactly what is required, without the over- or under-fetching that typically happens with REST.
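To make the contrast concrete, here’s a hedged sketch of a single GraphQL request fetching exactly what the article-with-comments component from the earlier example needs; the schema, endpoint, and renderArticle helper are hypothetical:

```js
const query = `
  query ArticleWithComments($id: ID!) {
    article(id: $id) {
      title
      comments {
        body
        author { name avatarUrl }
      }
    }
  }
`;

fetch('/graphql', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ query, variables: { id: '42' } }),
})
  .then((res) => res.json())
  .then(({ data }) => renderArticle(data.article)); // renderArticle: your own rendering code
```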
Will you be using AMP or Instant Articles? Depending on the priorities and strategy of your organization, you might want to consider using Google’s AMP, Facebook’s Instant Articles or Apple News. You can achieve good performance without them, but AMP does provide a solid performance framework with a free content delivery network (CDN), while Instant Articles will boost your visibility and performance on Facebook.
The seemingly obvious benefit of these technologies for users is guaranteed performance, so at times they might even prefer AMP, Apple News or Instant Articles links over “regular” and potentially bloated pages. For content-heavy websites that are dealing with a lot of third-party content, these options could potentially help speed up render times dramatically.
Unless they don’t. According to Tim Kadlec, for example, “AMP documents tend to be faster than their counterparts, but they don’t necessarily mean a page is performant. AMP is not what makes the biggest difference from a performance perspective.”
A benefit for the website owner is obvious: discoverability of these formats on their respective platforms and increased visibility in search engines. You could build progressive web AMPs, too, by reusing AMPs as a data source for your PWA. Downside? Obviously, a presence in a walled garden places developers in a position to produce and maintain a separate version of their content, and in the case of Instant Articles and Apple News without actual URLs (thanks Addy, Jeremy!).
Choose your CDN wisely. Depending on how much dynamic data you have, you might be able to “outsource” some part of the content to a static site generator, pushing it to a CDN and serving a static version from it, thus avoiding database requests. You could even choose a static-hosting platform based on a CDN, enriching your pages with interactive components as enhancements (JAMStack). In fact, some of those generators (like Gatsby on top of React) are actually website compilers with many automated optimizations provided out of the box. As compilers add optimizations over time, the compiled output gets smaller and faster.
Note that CDNs can serve (and offload) dynamic content as well, so restricting your CDN to static assets is not necessary. Double-check whether your CDN performs compression and conversion (e.g. image optimization in terms of formats, compression and resizing at the edge), supports service workers and edge-side includes, which assemble static and dynamic parts of pages at the CDN’s edge (i.e. the server closest to the user), and handles other such tasks.
Note: based on research by Patrick Meenan and Andy Davies, HTTP/2 is effectively broken on many CDNs, so we shouldn’t be too optimistic about the performance boost there.
Assets Optimizations
Use Brotli or Zopfli for plain text compression. In 2015, Google introduced Brotli, a new open-source lossless data format, which is now supported in all modern browsers. In practice, Brotli appears to be much more effective than Gzip and Deflate. It might be (very) slow to compress, depending on the settings, but slower compression will ultimately lead to higher compression rates. Still, it decompresses fast. You can also estimate Brotli compression savings for your site.
What’s the catch? Browsers will accept Brotli only if the user is visiting a website over HTTPS, and it still doesn’t come preinstalled on some servers today; it’s not straightforward to set up without self-compiling Nginx. Still, it’s not that difficult, and support is coming, e.g. it’s available since Apache 2.4.26. Brotli is widely supported, many CDNs support it (Akamai, AWS, KeyCDN, Fastly, Cloudflare, CDN77), and you can enable Brotli even on CDNs that don’t support it yet (with a service worker).
At the highest level of compression, Brotli is so slow that any potential gains in file size could be nullified by the amount of time it takes for the server to begin sending the response as it waits to dynamically compress the asset. With static compression, however, higher compression settings are preferred.
Alternatively, you could look into using Zopfli’s compression algorithm, which encodes data to Deflate, Gzip and Zlib formats. Any regular Gzip-compressed resource would benefit from Zopfli’s improved Deflate encoding because the files will be 3 to 8% smaller than Zlib’s maximum compression. The catch is that files will take around 80 times longer to compress. That’s why it’s a good idea to use Zopfli on resources that don’t change much, files that are designed to be compressed once and downloaded many times.
If you can bypass the cost of dynamically compressing static assets, it’s worth the effort. Both Brotli and Zopfli can be used for any plaintext payload — HTML, CSS, SVG, JavaScript, and so on.
The strategy? Pre-compress static assets with Brotli+Gzip at the highest level and compress (dynamic) HTML on the fly with Brotli at level 1–4. Make sure that the server handles content negotiation for Brotli or gzip properly. If you can’t install/maintain Brotli on the server, use Zopfli.
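As one way to implement the pre-compression half of that strategy, here’s a small build-time sketch using Node’s built-in zlib module (Brotli support is available in Node 11.7+); the file paths are made up, and dynamic HTML would still be compressed on the fly by your server at a lower Brotli level.

```js
// precompress.js — build-time compression sketch (paths are hypothetical)
const fs = require('fs');
const zlib = require('zlib');

function precompress(file) {
  const source = fs.readFileSync(file);

  // Brotli at the highest quality (11): slow, but fine for assets compressed once.
  const br = zlib.brotliCompressSync(source, {
    params: { [zlib.constants.BROTLI_PARAM_QUALITY]: 11 },
  });
  fs.writeFileSync(`${file}.br`, br);

  // Gzip at the maximum level as a fallback for clients without Brotli support.
  const gz = zlib.gzipSync(source, { level: 9 });
  fs.writeFileSync(`${file}.gz`, gz);
}

['dist/app.js', 'dist/styles.css'].forEach(precompress);
```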
Use responsive images and WebP. As far as possible, use responsive images with srcset, sizes and the <picture> element. While you’re at it, you could also make use of the WebP format (supported in Chrome, Opera, Firefox 65, Edge 18) by serving WebP images with the <picture> element and a JPEG fallback (see Andreas Bovens’ code snippet) or by using content negotiation (using Accept headers). Ire Aderinokun has a very detailed tutorial on converting images to WebP, too.
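For the content-negotiation route, one common pattern is to check the Accept header inside a service worker and rewrite image requests to WebP variants. A minimal sketch, assuming a .webp version exists next to every JPEG (which is an assumption, not a given):

```js
// sw.js excerpt — serve WebP when the browser advertises support for it.
// Assumes every /images/foo.jpg also exists as /images/foo.webp.
self.addEventListener('fetch', (event) => {
  const accept = event.request.headers.get('accept') || '';
  const url = new URL(event.request.url);

  if (/\.jpe?g$/.test(url.pathname) && accept.includes('image/webp')) {
    url.pathname = url.pathname.replace(/\.jpe?g$/, '.webp');
    event.respondWith(fetch(url.href));
  }
  // Other requests fall through to the network as usual.
});
```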
It’s important to note that while WebP file sizes are smaller than equivalent Guetzli- and Zopfli-optimized JPEGs, the format doesn’t support progressive rendering like JPEG does, which is why users might see an actual image faster with a good ol’ JPEG, although WebP images might travel faster through the network. With JPEG, we can serve a “decent” user experience with half or even a quarter of the data and load the rest later, rather than have a half-empty image as in the case of WebP. Your decision will depend on what you are after: with WebP, you’ll reduce the payload, and with JPEG you’ll improve perceived performance.
On Smashing Magazine, we use the postfix -opt for image names — for example, brotli-compression-opt.png; whenever an image contains that postfix, everybody on the team knows that the image has already been optimized. And — shameless plug! — Jeremy Wagner even published a Smashing book on WebP.
Are images properly optimized? When you’re working on a landing page on which it’s critical that a particular image loads blazingly fast, make sure that JPEGs are progressive and compressed with mozJPEG (which improves the start rendering time by manipulating scan levels) or Guetzli, Google’s new open-source encoder focusing on perceptual performance, and utilizing learnings from Zopfli and WebP. The only downside: slow processing times (a minute of CPU per megapixel). For PNG, we can use Pingo, and for SVG, we can use SVGO or SVGOMG. And if you need to quickly preview and copy or download all the SVG assets from a website, svg-grabber can do that for you, too.
Every single image optimization article states it, but keeping vector assets clean and tight is always worth a reminder. Make sure to clean up unused assets, remove unnecessary metadata and reduce the number of path points in artwork (and thus SVG code). (Thanks, Jeremy!)
There are more advanced options though. You could:
Use Squoosh to compress, resize and manipulate images at the optimal compression levels (lossy or lossless),
Check the efficiency of your responsive markup with imaging-heap, a command line tool that measures the efficiency across viewport sizes and device pixel ratios.
Lazy load images and iframes with lazysizes, a library that detects any visibility changes triggered through user interaction (or IntersectionObserver which we’ll explore later).
Watch out for images that are loaded by default, but might never be displayed — e.g. in carousels, accordions and image galleries.
Consider Swapping Images with the Sizes Attribute by specifying different image display dimensions depending on media queries, e.g. to manipulate sizes to swap sources in a magnifier component.
If you feel adventurous, you could chop and rearrange HTTP/2 streams using Edge workers, basically a real-time filter living on the CDN, to send images faster through the network. Edge workers use JavaScript streams with chunks that you can control (basically, they are JavaScript that runs on the CDN edge and can modify streaming responses), so you can control the delivery of images. With a service worker it’s too late, as you can’t control what’s on the wire, but it does work with Edge workers. So you can use them on top of static JPEGs saved progressively for a particular landing page.
The future of responsive images might change dramatically with the adoption of client hints. Client hints are HTTP request header fields, e.g. DPR, Viewport-Width, Width, Save-Data, Accept (to specify image format preferences) and others. They are supposed to inform the server about the specifics of the user’s browser, screen, connection, etc. As a result, the server can decide how to fill in the layout with appropriately sized images, and serve only those images in the desired formats. With client hints, we move the resource selection from HTML markup into the request-response negotiation between the client and the server.
As Ilya Grigorik noted, client hints complete the picture — they aren’t an alternative to responsive images. “The picture element provides the necessary art-direction control in the HTML markup. Client hints provide annotations on resulting image requests that enable resource selection automation. Service Worker provides full request and response management capabilities on the client.” A service worker could, for example, append new client hints header values to the request, rewrite the URL and point the image request to a CDN, adapt the response based on connectivity and user preferences, etc. It holds true not only for image assets but for pretty much all other requests as well.
For clients that support client hints, one could measure 42% byte savings on images and 1MB+ fewer bytes for the 70th+ percentile. On Smashing Magazine, we could measure a 19–32% improvement, too. Unfortunately, client hints still have to gain some browser support; they are under consideration in Firefox and Edge. However, if you supply both the normal responsive images markup and the <meta> tag for Client Hints, then the browser will evaluate the responsive images markup and request the appropriate image source using the Client Hints HTTP headers.
Not good enough? Well, you can also improve perceived performance for images with the multiple background images technique. Keep in mind that playing with contrast and blurring out unnecessary details (or removing colors) can reduce file size as well. Ah, you need to enlarge a small photo without losing quality? Consider using Letsenhance.io.
These optimizations so far cover just the basics. Addy Osmani has published a very detailed guide on Essential Image Optimization that goes very deep into details of image compression and color management. For example, you could blur out unnecessary parts of the image (by applying a Gaussian blur filter to them) to reduce the file size, and eventually you might even start removing colors or turn the picture into black and white to reduce the size even further. For background images, exporting photos from Photoshop with 0 to 10% quality can be absolutely acceptable as well. Ah, and don’t use JPEG-XR on the web — “the processing of decoding JPEG-XRs software-side on the CPU nullifies and even outweighs the potentially positive impact of byte size savings, especially in the context of SPAs”.
Are videos properly optimized? We covered images so far, but we’ve avoided a conversation about good ol’ GIFs. Frankly, instead of loading heavy animated GIFs, which impact both rendering performance and bandwidth, it’s a good idea to switch either to animated WebP (with GIF serving as a fallback) or replace them with looping HTML5 videos altogether. Yes, browser performance is slow with <video>, and, unlike with images, browsers do not preload <video> content, but videos tend to be lighter and smaller than GIFs. Not an option? Well, at least we can add lossy compression to GIFs with Lossy GIF, gifsicle or giflossy.
In the land of good news though, video formats have been advancing massively over the years. For a long time, we had hoped that WebM would become the format to rule them all, and WebP (which is basically one still image inside of the WebM video container) would become a replacement for dated image formats. But despite WebP and WebM gaining support these days, the breakthrough didn’t happen.
In 2018, the Alliance for Open Media released a promising new video format called AV1. AV1 offers compression similar to the H.265 codec (the evolution of H.264), but unlike the latter, AV1 is free. The H.265 license pricing pushed browser vendors to adopt the comparably performant AV1 instead: AV1 (just like H.265) compresses twice as well as WebP.
In fact, Apple currently uses the HEIF format and HEVC (H.265), and all the photos and videos on the latest iOS are saved in these formats, not JPEG. While HEIF and HEVC (H.265) aren’t properly exposed to the web (yet?), AV1 is — and it’s gaining browser support. So adding the AV1 source in your <video> tag is reasonable, as all browser vendors seem to be on board.
For now, the most widely used and supported encoding is H.264, served by MP4 files, so before serving the file, make sure that your MP4s are processed with multipass encoding, blurred with the frei0r iirblur effect (if applicable), and that the moov atom metadata is moved to the head of the file, while your server accepts byte serving. Boris Schapira provides exact instructions for FFmpeg to optimize videos to the maximum. Of course, providing the WebM format as an alternative would help, too.
Video playback performance is a story on its own, and if you’d like to dive into it in detail, take a look at Doug Sillars’ series on The Current State of Video and Video Delivery Best Practices, which includes details on video delivery metrics, video preloading, compression and streaming.
Are web fonts optimized? The first question worth asking is whether you can get away with using UI system fonts in the first place. If that’s not the case, chances are high that the web fonts you are serving include glyphs and extra features and weights that aren’t being used. You can ask your type foundry to subset web fonts, or, if you are using open-source fonts, subset them on your own with Glyphhanger or Fontsquirrel. You can even automate your entire workflow with Peter Müller’s subfont, a command line tool that statically analyses your page in order to generate the most optimal web font subsets, and then inject them into your page.
WOFF2 support is great, and you can use WOFF as fallback for browsers that don’t support it — after all, legacy browsers would probably be served well enough with system fonts. There are many, many, many options for web font loading, and you can choose one of the strategies from Zach Leatherman’s “Comprehensive Guide to Font-Loading Strategies,” (code snippets also available as Web font loading recipes).
Probably the better options to consider today are Critical FOFT with preload and “The Compromise” method. Both of them use a two-stage render for delivering web fonts in steps — first a small supersubset required to render the page fast and accurately with the web font, and then load the rest of the family async. The difference is that “The Compromise” technique loads polyfill asynchronously only if font load events are not supported, so you don’t need to load the polyfill by default. Need a quick win? Zach Leatherman has a quick 23-min tutorial and case study to get your fonts in order.
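A minimal sketch of the two-stage idea with the Font Loading API: load a small subset first, apply it, and only then pull in the full family asynchronously. The family name, file URLs and CSS class hooks below are hypothetical.

```js
// Stage 1: a tiny subset (e.g. basic Latin only) to render the page quickly.
const subset = new FontFace('Mulish Subset', "url('/fonts/mulish-subset.woff2')");

subset.load().then((face) => {
  document.fonts.add(face);
  // A class hook lets CSS switch from the fallback to the subset in one repaint.
  document.documentElement.classList.add('fonts-stage-1');

  // Stage 2: load the full family (all weights and glyphs) asynchronously.
  const full = [
    new FontFace('Mulish', "url('/fonts/mulish-regular.woff2')", { weight: '400' }),
    new FontFace('Mulish', "url('/fonts/mulish-bold.woff2')", { weight: '700' }),
  ];
  return Promise.all(full.map((font) => font.load()));
}).then((faces) => {
  faces.forEach((face) => document.fonts.add(face));
  document.documentElement.classList.add('fonts-stage-2');
}).catch(() => {
  // If anything fails, the fallback font simply stays in place.
});
```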
In general, it’s a good idea to use the preload resource hint to preload fonts, but in your markup include the hints after the link to critical CSS and JavaScript. Otherwise, font loading will cost you in the first render time. Still, it might be a good idea to be selective and choose the files that matter most, e.g. the ones that are critical for rendering or that would help you avoid visible and disruptive text reflows. In general, Zach advises preloading one or two fonts of each family — it also makes sense to delay some font loading if it is less critical.
Nobody likes waiting for content to be displayed. With the font-display CSS descriptor, we can control the font loading behavior and enable content to be readable immediately (font-display: optional) or almost immediately (font-display: swap). However, if we want to avoid text reflows, we still need to use the Font Loading API, specifically to group repaints, or when we are using third-party hosts. Unless you can use Google Fonts with Cloudflare Workers, of course. Talking about Google Fonts: consider using google-webfonts-helper, a hassle-free way to self-host Google Fonts. Always self-host your fonts for maximum control if you can.
In general, if you use font-display: optional, it might not be a good idea to also use preload, as it will trigger that web font request early (causing network congestion if you have other critical-path resources that need to be fetched). Use preconnect for faster cross-origin font requests, but be cautious with preload, as preloading fonts from a different origin will incur network contention. All of these techniques are covered in Zach’s Web font loading recipes.
Also, it might be a good idea to opt out of web fonts (or at least second stage render) if the user has enabled Reduce Motion in accessibility preferences or has opted in for Data Saver Mode (see Save-Data header). Or when the user happens to have slow connectivity (via Network Information API).
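A hedged sketch of such an opt-out check before kicking off the second-stage font load; note that navigator.connection and its saveData/effectiveType properties are only exposed in Blink-based browsers, so the check degrades gracefully elsewhere.

```js
// Decide whether the full web fonts are worth loading for this user.
function shouldLoadFullFonts() {
  const connection = navigator.connection || {};

  const prefersReducedMotion =
    window.matchMedia('(prefers-reduced-motion: reduce)').matches;
  const saveData = connection.saveData === true;          // Data Saver enabled
  const slowNetwork = /(^|-)2g$/.test(connection.effectiveType || '');

  return !(prefersReducedMotion || saveData || slowNetwork);
}

if (shouldLoadFullFonts()) {
  // Kick off the second-stage font loading here (see the earlier sketch);
  // otherwise, stick with the system fallback or the small subset.
}
```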
To measure web font loading performance, consider the All Text Visible metric (the moment when all fonts have loaded and all content is displayed in web fonts), as well as Web Font Reflow Count after first render. Obviously, the lower both metrics are, the better the performance is. It’s important to note that variable fonts might require a significant performance consideration. They give designers a much broader design space for typographic choices, but it comes at the cost of a single serial request as opposed to a number of individual file requests. That single request might be slow, blocking the entire typographic appearance on the page. On the good side though, with a variable font in place, we’ll get exactly one reflow by default, so no JavaScript will be required to group repaints.
Now, what would make a bulletproof web font loading strategy? Subset fonts and prepare them for the 2-stage-render, declare them with a font-display descriptor, use Font Loading API to group repaints and store fonts in a persistent service worker’s cache. You could fall back to Bram Stein’s Font Face Observer if necessary. And if you’re interested in measuring the performance of font loading, Andreas Marschke explores performance tracking with Font API and UserTiming API.
Finally, don’t forget to include unicode-range to break down a large font into smaller language-specific fonts, and use Monica Dinculescu’s font-style-matcher to minimize a jarring shift in layout, due to sizing discrepancies between the fallback and the web fonts.
Build Optimizations
Set your priorities straight. It’s a good idea to know what you are dealing with first. Run an inventory of all of your assets (JavaScript, images, fonts, third-party scripts and “expensive” modules on the page, such as carousels, complex infographics and multimedia content), and break them down in groups.
Set up a spreadsheet. Define the basic core experience for legacy browsers (i.e. fully accessible core content), the enhanced experience for capable browsers (i.e. the enriched, full experience) and the extras (assets that aren’t absolutely required and can be lazy-loaded, such as web fonts, unnecessary styles, carousel scripts, video players, social media buttons, large images). A while back, we published an article on “Improving Smashing Magazine’s Performance,” which describes this approach in detail.
When optimizing for performance we need to reflect our priorities. Load the core experience immediately, then enhancements, and then the extras.
Revisit the good ol’ “cutting-the-mustard” technique. These days we can still use the cutting-the-mustard technique to send the core experience to legacy browsers and an enhanced experience to modern browsers. An updated variant of the technique would use ES2015+ <script type="module">. Modern browsers would interpret the script as a JavaScript module and run it as expected, while legacy browsers wouldn’t recognize the attribute and would ignore it because it’s unknown HTML syntax.
These days we need to keep in mind that feature detection alone isn’t enough to make an informed decision about the payload to ship to that browser. On its own, cutting-the-mustard deduces device capability from browser version, which is no longer something we can do today.
For example, cheap Android phones in developing countries mostly run Chrome and will cut the mustard despite their limited memory and CPU capabilities. Eventually, using the Device Memory Client Hints Header, we’ll be able to target low-end devices more reliably. At the moment of writing, the header is supported only in Blink (it goes for client hints in general). Since Device Memory also has a JavaScript API which is already available in Chrome, one option could be to feature detect based on the API, and fall back to “cutting the mustard” technique only if it’s not supported (thanks, Yoav!).
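Here’s a rough sketch of that detection order; the 4 GB threshold, file names and loadScript() helper are illustrative only, not a recommendation.

```js
// Decide whether to ship the "core" or the "enhanced" experience.
// The 4 GB cut-off, the file names and loadScript() are hypothetical.
function loadScript(src) {
  const script = document.createElement('script');
  script.src = src;
  script.defer = true;
  document.head.appendChild(script);
}

if ('deviceMemory' in navigator) {
  // Device Memory API: RAM in GB, rounded down to the nearest power of two.
  navigator.deviceMemory >= 4
    ? loadScript('/js/enhanced.js')
    : loadScript('/js/core.js');
} else if ('noModule' in HTMLScriptElement.prototype) {
  // Fall back to cutting the mustard: ES module support implies a modern browser.
  loadScript('/js/enhanced.js');
} else {
  loadScript('/js/core.js');
}
```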
Parsing JavaScript is expensive, so keep it small. When dealing with single-page applications, we need some time to initialize the app before we can render the page. Your setting will require its own custom solution, but you could watch out for modules and techniques to speed up the initial rendering time. For example, here’s how to debug React performance and eliminate common React performance issues, and here’s how to improve performance in Angular. In general, most performance issues come from the initial parsing time to bootstrap the app.
JavaScript has a cost, but it’s rarely the file size alone that drains performance. Parsing and executing times vary significantly depending on the hardware of a device. On an average phone (Moto G4), parsing alone for 1MB of (uncompressed) JavaScript takes around 1.3–1.4s, with 15–20% of all time on mobile spent on parsing. With compiling in play, just prep work on JavaScript takes 4s on average, with around 11s before First Meaningful Paint on mobile. Reason: parse and execution times can easily be 2–5 times higher on low-end mobile devices.
To guarantee high performance, as developers, we need to find ways to write and deploy less JavaScript. That’s why it pays off to examine every single JavaScript dependency in detail.
There are many tools to help you make an informed decision about the impact of your dependencies and find viable alternatives.
An interesting way of avoiding parsing costs is to use the binary templates that Ember introduced in 2017. With them, Ember replaces JavaScript parsing with JSON parsing, which is presumably faster. (Thanks, Leonardo, Yoav!)
Bottom line: while size matters, it isn’t everything. Parse and compiling times don’t necessarily increase linearly when the script size increases.
Are you using tree-shaking, scope hoisting and code-splitting? Tree-shaking is a way to clean up your build process by including only the code that is actually used in production and eliminating unused imports in Webpack. With Webpack and Rollup, we also have scope hoisting, which allows both tools to detect where import chaining can be flattened and converted into one inlined function without compromising the code. With Webpack, we can use JSON Tree Shaking as well.
Code-splitting is another Webpack feature that splits your code base into “chunks” that are loaded on demand. Not all of the JavaScript has to be downloaded, parsed and compiled right away. Once you define split points in your code, Webpack can take care of the dependencies and outputted files. It enables you to keep the initial download small and to request code on demand, as the application requires it. Alexander Kondrov has a fantastic introduction to code-splitting with Webpack and React.
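A minimal sketch of defining a split point with a dynamic import(): the chart module and element IDs below are hypothetical, and Webpack will emit the module as a separate chunk that’s only fetched, parsed and compiled once the user asks for it.

```js
// chart.js becomes its own chunk; it is downloaded, parsed and compiled
// only when the user actually requests the chart. Names are hypothetical.
document.querySelector('#show-chart').addEventListener('click', () => {
  import(/* webpackChunkName: "chart" */ './chart.js')
    .then(({ renderChart }) => renderChart('#chart-container'))
    .catch((err) => console.error('Chart chunk failed to load', err));
});
```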
Consider using preload-webpack-plugin, which takes the routes you code-split and then prompts the browser to preload them using <link rel="preload"> or <link rel="prefetch">. Webpack inline directives also give some control over preload/prefetch.
Where to define split points? By tracking which chunks of CSS/JavaScript are used, and which aren’t. Umar Hansa explains how you can use Code Coverage from DevTools to achieve it.
If you aren’t using Webpack, note that Rollup shows significantly better results than Browserify exports. While we’re at it, you might want to check out rollup-plugin-closure-compiler and Rollupify, which converts ECMAScript 2015 modules into one big CommonJS module — because small modules can have a surprisingly high performance cost depending on your choice of bundler and module system.
Can you offload JavaScript into a Web Worker? To reduce the negative impact on Time-to-Interactive, it might be a good idea to look into offloading heavy JavaScript into a Web Worker or caching via a Service Worker.
As the code base keeps growing, UI performance bottlenecks will show up, slowing down the user’s experience. That’s because DOM operations run alongside your JavaScript on the main thread. With web workers, we can move these expensive operations to a background process that runs on a different thread. Typical use cases for web workers are prefetching data and, in Progressive Web Apps, loading and storing some data in advance so that you can use it later when needed. And you could use Comlink to streamline the communication between the main page and the worker. Still some work to do, but we are getting there.
Workerize allows you to move a module into a Web Worker, automatically reflecting exported functions as asynchronous proxies. And if you’re using Webpack, you could use workerize-loader. Alternatively, you could use worker-plugin as well.
Note that Web Workers don’t have access to the DOM because the DOM is not “thread-safe”, and the code that they execute needs to be contained in a separate file.
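A minimal sketch of that split, with a hypothetical worker.js doing an expensive sort off the main thread and posting the result back:

```js
// main.js — hand the heavy lifting to a background thread.
const worker = new Worker('/js/worker.js');

worker.addEventListener('message', (event) => {
  // Back on the main thread: update the UI with the result.
  console.log('Sorted', event.data.length, 'items off the main thread');
});

const hugeArray = Array.from({ length: 1e6 }, () => Math.random());
worker.postMessage({ action: 'sort', payload: hugeArray });

// worker.js — runs on a separate thread, with no DOM access.
self.addEventListener('message', (event) => {
  if (event.data.action === 'sort') {
    const sorted = event.data.payload.slice().sort((a, b) => a - b);
    self.postMessage(sorted);
  }
});
```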
Will you be using WebAssembly? In real-world scenarios, JavaScript seems to perform better than WebAssembly on smaller array sizes, and WebAssembly performs better than JavaScript on larger array sizes. For most web apps, JavaScript is a better fit, and WebAssembly is best used for computationally intensive web apps, such as web games. However, it might be worth investigating whether a switch to WebAssembly would result in noticeable performance improvements.
If you’d like to learn more about WebAssembly:
Lin Clark has written a thorough series on WebAssembly, and Milica Mihajlija provides a general overview of how to run native code in the browser, why you would do that, and what it all means for JavaScript and the future of web development.
Google Codelabs provides an Introduction to WebAssembly, a 60-minute course in which you’ll learn how to take native code written in C, compile it to WebAssembly, and then call it directly from JavaScript.
Note that these days we can write module-based JavaScript that runs natively in the browser, without transpilers or bundlers. The <link rel="modulepreload"> header provides a way to initiate early (and high-priority) loading of module scripts. Basically, it’s a nifty way to help in maximizing bandwidth usage, by telling the browser about what it needs to fetch so that it’s not stuck without anything to do during those long roundtrips. Also, Jake Archibald has published a detailed article with gotchas and things to keep in mind with ES Modules that’s worth reading.
Are you using differential serving for JavaScript? We want to send just the necessary JavaScript through the network, yet that means being slightly more focused and granular about the delivery of those assets. A while back, Philip Walton introduced the idea of differential serving. The idea is to compile and serve two separate JavaScript bundles: the “regular” build, the one with Babel transforms and polyfills, served only to legacy browsers that actually need it, and another bundle (same functionality) that has no transforms or polyfills.
As a result, we help reduce blocking of the main thread by reducing the amount of scripts the browser needs to process. Jeremy Wagner has published a comprehensive article on differential serving and how to set it up in your build pipeline in 2019, from setting up Babel, to what tweaks you’ll need to make in Webpack, as well as the benefits of doing all this work.
Identify and rewrite legacy code with incremental decoupling. Long-living projects have a tendency to gather dust and dated code. Revisit your dependencies and assess how much time would be required to refactor or rewrite legacy code that has been causing trouble lately. Of course, it’s always a big undertaking, but once you know the impact of the legacy code, you could start with incremental decoupling.
First, set up metrics that track whether the ratio of legacy code calls stays constant or goes down, not up. Publicly discourage the team from using the library and make sure that your CI alerts developers if it’s used in pull requests. Polyfills could help transition from legacy code to a rewritten codebase that uses standard browser features.
Identify and remove unused CSS/JS. CSS and JavaScript code coverage in Chrome allows you to learn which code has been executed/applied and which hasn’t. You can start recording the coverage, perform actions on a page, and then explore the code coverage results. Once you’ve detected unused code, find those modules and lazy load them with import() (see the entire thread). Then repeat the coverage profile and validate that it’s now shipping less code on initial load.
Furthermore, purgecss, UnCSS and Helium can help you remove unused styles from CSS. And if you aren’t certain whether a suspicious piece of code is used somewhere, you can follow Harry Roberts’ advice: create a 1×1px transparent GIF for a particular class and drop it into a dead/ directory, e.g. /assets/img/dead/comments.gif. After that, you set that specific image as a background on the corresponding selector in your CSS, sit back and wait a few months to see whether the file appears in your logs. If there are no entries, nobody had that legacy component rendered on their screen: you can probably go ahead and delete it all.
For the I-feel-adventurous department, you could even automate gathering unused CSS across a set of pages by driving DevTools programmatically.
Trim the size of your JavaScript bundles. As Addy Osmani noted, there’s a high chance you’re shipping full JavaScript libraries when you only need a fraction, along with dated polyfills for browsers that don’t need them, or just duplicate code. To avoid the overhead, consider using webpack-libs-optimizations that removes unused methods and polyfills during the build process.
Add bundle auditing into your regular workflow as well. There might be some lightweight alternatives to heavy libraries you added years ago, e.g. Moment.js could be replaced with date-fns or Luxon. Benedikt Rötsch’s research showed that a switch from Moment.js to date-fns could shave around 300ms off First Paint on 3G and a low-end mobile phone.
Feeling adventurous? You could look into Prepack. It compiles JavaScript to equivalent JavaScript code: unlike Babel or Uglify, it lets you write normal JavaScript code, and it outputs equivalent JavaScript that runs faster.
As an alternative to shipping the entire framework, you could even trim your framework and compile it into a raw JavaScript bundle that does not require additional code. Svelte does it, and so does the Rawact Babel plugin, which transpiles React.js components to native DOM operations at build time. Why? Well, as the maintainers explain, “react-dom includes code for every possible component/HTMLElement that can be rendered, including code for incremental rendering, scheduling, event handling, etc. But there are applications which do not need all these features (at initial page load). For such applications, it might make sense to use native DOM operations to build the interactive user interface.”
Are you using predictive prefetching for JavaScript chunks? We could use heuristics to decide when to preload JavaScript chunks. Guess.js is a set of tools and libraries that use Google Analytics data to determine which page a user is most likely to visit next from a given page. Based on user navigation patterns collected from Google Analytics or other sources, Guess.js builds a machine-learning model to predict and prefetch the JavaScript that will be required on each subsequent page.
Hence, every interactive element receives a probability score for engagement, and based on that score, a client-side script decides to prefetch a resource ahead of time. You can integrate the technique into your Next.js application, Angular and React, and there is a Webpack plugin that automates the setup process as well.
Obviously, you might be prompting the browser to consume unneeded data and prefetch undesirable pages, so it’s a good idea to be quite conservative in the number of prefetched requests. A good use case would be prefetching validation scripts required in the checkout, or speculative prefetch when a critical call-to-action comes into the viewport.
Need something less sophisticated? Quicklink is a small library that automatically prefetches links in the viewport during idle time in an attempt to make next-page navigations load faster. However, it’s also data-considerate, so it doesn’t prefetch on 2G or if Data-Saver is on.
Take advantage of optimizations for your target JavaScript engine. Study which JavaScript engines dominate in your user base, then explore ways of optimizing for them. For example, when optimizing for V8, which is used in Blink browsers, the Node.js runtime and Electron, make use of script streaming for monolithic scripts. It allows async or defer scripts to be parsed on a separate background thread once downloading begins, hence in some cases improving page loading times by up to 10%. Practically, use <script defer> in the <head>, so that browsers can discover the resource early and then parse it on the background thread.
Caveat: Opera Mini doesn’t support script deferment, so if you are developing for India or Africa, defer will be ignored, resulting in blocking rendering until the script has been evaluated (thanks Jeremy!).
Client-side rendering or server-side rendering? In both scenarios, our goal should be to set up progressive booting: Use server-side rendering to get a quick first meaningful paint, but also include some minimal necessary JavaScript to keep the time-to-interactive close to the first meaningful paint. If JavaScript is coming too late after the First Meaningful Paint, the browser might lock up the main thread while parsing, compiling and executing late-discovered JavaScript, hence handcuffing the interactivity of site or application.
To avoid it, always break up the execution of functions into separate, asynchronous tasks, and where possible use requestIdleCallback. Consider lazy loading parts of the UI using WebPack’s dynamic import() support, avoiding the load, parse, and compile cost until the users really need them (thanks Addy!).
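A hedged sketch of that approach: non-essential tasks sit in a queue and are processed only while the main thread is idle, yielding whenever the frame’s budget runs out. The task bodies are stand-ins for real deferred work.

```js
// Process a queue of non-essential tasks only while the main thread is idle.
const taskQueue = [
  () => console.log('prefetch next article'),   // stand-ins for real deferred work
  () => console.log('hydrate comments widget'),
  () => console.log('init analytics'),
];

function processQueue(deadline) {
  // Run tasks until the idle budget for this period is used up.
  while (deadline.timeRemaining() > 0 && taskQueue.length > 0) {
    const task = taskQueue.shift();
    task();
  }
  // If anything is left, ask for another idle period (with a safety timeout).
  if (taskQueue.length > 0) {
    requestIdleCallback(processQueue, { timeout: 2000 });
  }
}

requestIdleCallback(processQueue, { timeout: 2000 });
```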
In its essence, Time to Interactive (TTI) tells us the time between navigation and interactivity. The metric is defined by looking at the first five-second window after the initial content is rendered, in which no JavaScript tasks take longer than 50ms. If a task over 50ms occurs, the search for a five-second window starts over. As a result, the browser will first assume that it reached Interactive, just to switch to Frozen, just to eventually switch back to Interactive.
Once we reached Interactive, we can then — either on demand or as time allows — boot non-essential parts of the app. Unfortunately, as Paul Lewis noticed, frameworks typically have no concept of priority that can be surfaced to developers, and hence progressive booting is difficult to implement with most libraries and frameworks. If you have the time and resources, use this strategy to ultimately boost performance.
Constrain the impact of third-party scripts. With all performance optimizations in place, often we can’t control third-party scripts coming from business requirements. Third-party scripts’ metrics aren’t influenced by the end-user experience, so too often one single script ends up calling a long tail of obnoxious third-party scripts, hence ruining a dedicated performance effort. To contain and mitigate the performance penalties that these scripts bring along, it’s not enough to just load them asynchronously (probably via defer) and accelerate them via resource hints such as dns-prefetch or preconnect.
As Yoav Weiss explained in his must-watch talk on third-party scripts, in many cases these scripts download resources that are dynamic. The resources change between page loads, so we don’t necessarily know which hosts the resources will be downloaded from and what resources they would be.
What options do we have then? Consider using service workers by racing the resource download with a timeout: if the resource hasn’t responded within a certain time, return an empty response to tell the browser to carry on with parsing of the page. You can also log or block third-party requests that aren’t successful or don’t fulfill certain criteria. If you can, load the third-party script from your own server rather than from the vendor’s server.
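A rough sketch of that racing idea in a service worker; the three-second budget and the third-party hostname are arbitrary examples.

```js
// sw.js excerpt — don't let a slow third party hold the page hostage.
// The 3-second budget and the hostname are arbitrary examples.
const THIRD_PARTY_TIMEOUT = 3000;

function emptyResponseAfter(ms) {
  return new Promise((resolve) =>
    setTimeout(() => resolve(new Response('')), ms)
  );
}

self.addEventListener('fetch', (event) => {
  const url = new URL(event.request.url);

  if (url.hostname === 'slow.thirdparty.example') {
    event.respondWith(
      // Whichever settles first wins: the real response or an empty one.
      Promise.race([
        fetch(event.request),
        emptyResponseAfter(THIRD_PARTY_TIMEOUT),
      ])
    );
  }
});
```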
Another option is to establish a Content Security Policy (CSP) to restrict the impact of third-party scripts, e.g. disallowing the download of audio or video. The best option is to embed scripts via an <iframe>, so that the scripts run in the context of the iframe and hence don’t have access to the DOM of the page, and can’t run arbitrary code on your domain. Iframes can be further constrained using the sandbox attribute, so you can disable any functionality that the iframe may do, e.g. prevent scripts from running, prevent alerts, form submission, plugins, access to the top navigation, and so on.
For example, it’s probably going to be necessary to allow scripts to run with allow-scripts. Each of the limitations can be lifted via various allow values on the sandbox attribute (supported almost everywhere), so constrain them to the bare minimum of what they should be allowed to do.
Consider using Intersection Observer; that would enable ads to be iframed while still dispatching events or getting the information that they need from the DOM (e.g. ad visibility). Watch out for new policies such as Feature policy, resource size limits and CPU/Bandwidth priority to limit harmful web features and scripts that would slow down the browser, e.g. synchronous scripts, synchronous XHR requests, document.write and outdated implementations.
Set HTTP cache headers properly. Double-check that expires, max-age, cache-control, and other HTTP cache headers have been set properly. In general, resources should be cacheable either for a very short time (if they are likely to change) or indefinitely (if they are static) — you can just change their version in the URL when needed. Disable the Last-Modified header, as any asset with it will result in a conditional request with an If-Modified-Since header even if the resource is in cache. Same with ETag.
Use Cache-control: immutable, designed for fingerprinted static resources, to avoid revalidation (as of December 2018, supported in Firefox, Edge and Safari; in Firefox only on https:// transactions). In fact “across all of the pages in the HTTP Archive, 2% of requests and 30% of sites appear to include at least 1 immutable response. Additionally, most of the sites that are using it have the directive set on assets that have a long freshness lifetime.”
Remember the stale-while-revalidate? As you probably know, we specify the caching time with the Cache-Control response header, e.g. Cache-Control: max-age=604800. After 604800 seconds have passed, the cache will re-fetch the requested content, causing the page to load slower. This slowdown can be avoided by using stale-while-revalidate; it basically defines an extra window of time during which a cache can use a stale asset as long as it revalidates it async in the background. Thus, it “hides” latency (both in the network and on the server) from clients.
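In practice, that’s just one extra directive on the Cache-Control header. A minimal Express-style sketch, with arbitrary durations and a hypothetical renderArticle() helper:

```js
const express = require('express'); // assuming an Express-style server
const app = express();

app.get('/articles/:slug', (req, res) => {
  // Fresh for a week; after that, a stale copy may be served for one more day
  // while the cache revalidates in the background.
  res.set(
    'Cache-Control',
    'public, max-age=604800, stale-while-revalidate=86400'
  );
  res.send(renderArticle(req.params.slug)); // renderArticle() is hypothetical
});

app.listen(3000);
```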
In October 2018, Chrome published an intent to ship handling of stale-while-revalidate in HTTP Cache-Control header, so as a result, it should improve subsequent page load latencies as stale assets are no longer in the critical path. Result: zero RTT for repeat views.
Also, double-check that you aren’t sending unnecessary headers (e.g. x-powered-by, pragma, x-ua-compatible, expires and others) and that you include useful security and performance headers (such as Content-Security-Policy, X-XSS-Protection, X-Content-Type-Options and others). Finally, keep in mind the performance cost of CORS requests in single-page applications.
Delivery Optimizations
Do you load all JavaScript libraries asynchronously? When the user requests a page, the browser fetches the HTML and constructs the DOM, then fetches the CSS and constructs the CSSOM, and then generates a rendering tree by matching the DOM and CSSOM. If any JavaScript needs to be resolved, the browser won’t start rendering the page until it’s resolved, thus delaying rendering. As developers, we have to explicitly tell the browser not to wait and to start rendering the page. The way to do this for scripts is with the defer and async attributes in HTML.
In practice, it turns out we should prefer defer to async (at a cost to users of Internet Explorer up to and including version 9, because you’re likely to break scripts for them). According to Steve Souders, once async scripts arrive, they are executed immediately. If that happens very fast, for example when the script is in cache already, it can actually block the HTML parser. With defer, the browser doesn’t execute scripts until the HTML has been parsed. So, unless you need JavaScript to execute before the start of rendering, it’s better to use defer.
Lazy load expensive components with IntersectionObserver. In general, it’s a good idea to lazy-load all expensive components, such as heavy JavaScript, videos, iframes, widgets, and potentially images. The most performant way to do so is by using the Intersection Observer API that provides a way to asynchronously observe changes in the intersection of a target element with an ancestor element or with a top-level document’s viewport. Basically, you need to create a new IntersectionObserver object, which receives a callback function and a set of options. Then we add a target to observe.
The callback function executes when the target becomes visible or invisible, so when it intersects the viewport, you can start taking some actions before the element becomes visible. In fact, we have granular control over when the observer’s callback should be invoked, with rootMargin (margin around the root) and threshold (a single number or an array of numbers which indicate at what percentage of the target’s visibility we are aiming).
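A minimal lazy-loading sketch with Intersection Observer, assuming a markup convention where the real source sits in a data-src attribute:

```js
// Assumes markup like <img class="lazy" data-src="/images/photo.jpg" alt="…">.
const lazyImages = document.querySelectorAll('img.lazy');

const observer = new IntersectionObserver(
  (entries, obs) => {
    entries.forEach((entry) => {
      if (!entry.isIntersecting) return;
      const img = entry.target;
      img.src = img.dataset.src;   // start the real download
      obs.unobserve(img);          // each image only needs to load once
    });
  },
  // Start loading slightly before the image scrolls into view.
  { rootMargin: '200px 0px', threshold: 0.01 }
);

lazyImages.forEach((img) => observer.observe(img));
```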
Also, watch out for the lazyload attribute that will allow us to specify which images and iframes should be lazy loaded, natively. Feature policy: LazyLoad will provide a mechanism that allows us to force opting in or out of LazyLoad functionality on a per-domain basis (similar to how Content Security Policies work). Bonus: once shipped, priority hints will allow us to specify importance on scripts and preloads in the header as well (currently in Chrome Canary).
Load images progressively. You could even take lazy loading to the next level by adding progressive image loading to your pages. Similarly to Facebook, Pinterest and Medium, you could load low quality or even blurry images first, and then as the page continues to load, replace them with the full quality versions by using the LQIP (Low Quality Image Placeholders) technique proposed by Guy Podjarny.
Opinions differ on whether these techniques improve user experience or not, but they definitely improve time to first meaningful paint. We can even automate the process by using SQIP, which creates a low quality version of an image as an SVG placeholder, or Gradient Image Placeholders with CSS linear gradients. These placeholders could be embedded within HTML as they naturally compress well with text compression methods. In his article, Dean Hume has described how this technique can be implemented using Intersection Observer.
Browser support? Decent, with Chrome, Firefox, Edge and Samsung Internet on board; the WebKit status is currently “Supported in Preview.” Fallback? If the browser doesn’t support Intersection Observer, we can still lazy load with a polyfill or load the images immediately. And there is even a library for it.
Want to go fancier? You could trace your images and use primitive shapes and edges to create a lightweight SVG placeholder, load it first, and then transition from the placeholder vector image to the (loaded) bitmap image.
Do you send critical CSS? To ensure that browsers start rendering your page as quickly as possible, it’s become a common practice to collect all of the CSS required to start rendering the first visible portion of the page (known as “critical CSS” or “above-the-fold CSS”) and add it inline in the <head> of the page, thus reducing roundtrips. Due to the limited size of packets exchanged during the slow-start phase, your budget for critical CSS is around 14 KB.
With HTTP/2, critical CSS could be stored in a separate CSS file and delivered via a server push without bloating the HTML. The catch is that server pushing is troublesome with many gotchas and race conditions across browsers. It isn’t supported consistently and has some caching issues (see slide 114 onwards of Hooman Beheshti’s presentation). The effect could, in fact, be negative and bloat the network buffers, preventing genuine frames in the document from being delivered. Also, it appears that server pushing is much more effective on warm connections due to the TCP slow start.
Even with HTTP/1, putting critical CSS in a separate file on the root domain has benefits, sometimes even more than inlining due to caching. Chrome speculatively opens a second HTTP connection to the root domain when requesting the page, which removes the need for a TCP connection to fetch this CSS (thanks, Philip!)
A few gotchas to keep in mind: unlike preload that can trigger preload from any domain, you can only push resources from your own domain or domains you are authoritative for. It can be initiated as soon as the server gets the very first request from the client. Server pushed resources land in the Push cache and are removed when the connection is terminated. However, since an HTTP/2 connection can be re-used across multiple tabs, pushed resources can be claimed by requests from other tabs as well (thanks, Inian!).
At the moment, there is no simple way for the server to know if pushed resources are already in one of the user’s caches, so resources will keep being pushed with every user’s visit. You may then need to create a cache-aware HTTP/2 server push mechanism. If fetched, you could try to get them from a cache based on the index of what’s already in the cache, avoiding secondary server pushes altogether.
Keep in mind, though, that the new cache-digest specification negates the need to manually build such “cache-aware” servers, basically declaring a new frame type in HTTP/2 to communicate what’s already in the cache for that hostname. As such, it could be particularly useful for CDNs as well.
For dynamic content, when a server needs some time to generate a response, the browser isn’t able to make any requests since it’s not aware of any sub-resources that the page might reference. For that case, we can warm up the connection and increase the TCP congestion window size, so that future requests can be completed faster. Also, all inlined assets are usually good candidates for server pushing. In fact, Inian Parameshwaran did remarkable research comparing HTTP/2 Push vs. HTTP Preload, and it’s a fantastic read with all the details you might need. Server Push or Not Server Push? Colin Bendell’s Should I Push? might point you in the right direction.
Bottom line: As Sam Saccone noted, preload is good for moving the start download time of an asset closer to the initial request, while Server Push is good for cutting out a full RTT (or more, depending on your server think time) — if you have a service worker to prevent unnecessary pushing, that is.
Experiment with regrouping your CSS rules. We’ve got used to critical CSS, but there are a few optimizations that could go beyond that. Harry Roberts conducted remarkable research with quite surprising results. For example, it might be a good idea to split the main CSS file out into its individual media queries. That way, the browser will retrieve critical CSS with high priority, and everything else with low priority — completely off the critical path.
Also, avoid placing <link rel="stylesheet"> before async snippets. If scripts don’t depend on stylesheets, consider placing blocking scripts above blocking styles. If they do, split that JavaScript in two and load it either side of your CSS.
Scott Jehl solved another interesting problem by caching an inlined CSS file with a service worker, a common problem familiar if you’re using critical CSS. Basically, we add an ID attribute onto the style element so that it’s easy to find it using JavaScript, then a small piece of JavaScript finds that CSS and uses the Cache API to store it in a local browser cache (with a content type of text/css) for use on subsequent pages. To avoid inlining on subsequent pages and instead reference the cached assets externally, we then set a cookie on the first visit to a site. Voilà!
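A rough sketch of that idea, with made-up IDs, paths and cookie name; a service worker would then answer requests for /css/site.css from this cache on subsequent pages.

```js
// Rough sketch of the approach described above (names and paths are made up).
// The page inlines critical CSS as <style id="critical-css">…</style> on first visit.
if ('caches' in window && !document.cookie.includes('css-cached=1')) {
  const inlined = document.getElementById('critical-css');

  if (inlined) {
    caches.open('site-css').then((cache) => {
      const response = new Response(inlined.textContent, {
        headers: { 'Content-Type': 'text/css' },
      });
      // Store it under the URL the external stylesheet will use later;
      // a service worker then serves /css/site.css straight from this cache.
      return cache.put('/css/site.css', response);
    }).then(() => {
      // The server sees this cookie on later visits and emits
      // <link rel="stylesheet" href="/css/site.css"> instead of inlining.
      document.cookie = 'css-cached=1; max-age=31536000; path=/';
    });
  }
}
```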
Do you stream responses? Often forgotten and neglected, streams provide an interface for reading or writing asynchronous chunks of data, only a subset of which might be available in memory at any given time. Basically, they allow the page that made the original request to start working with the response as soon as the first chunk of data is available, and use parsers that are optimized for streaming to progressively display the content.
We could create one stream from multiple sources. For example, instead of serving an empty UI shell and letting JavaScript populate it, you can let the service worker construct a stream where the shell comes from a cache, but the body comes from the network. As Jeff Posnick noted, if your web app is powered by a CMS that server-renders HTML by stitching together partial templates, that model translates directly into using streaming responses, with the templating logic replicated in the service worker instead of your server. Jake Archibald’s The Year of Web Streams article highlights how exactly you could build it. Performance boost is quite noticeable.
One important advantage of streaming the entire HTML response is that HTML rendered during the initial navigation request can take full advantage of the browser’s streaming HTML parser. Chunks of HTML that are inserted into a document after the page has loaded (as is common with content populated via JavaScript) can’t take advantage of this optimization.
Consider making your components connection-aware. Data can be expensive and with growing payload, we need to respect users who choose to opt into data savings while accessing our sites or apps. The Save-Data client hint request header allows us to customize the application and the payload to cost- and performance-constrained users. In fact, you could rewrite requests for high DPI images to low DPI images, remove web fonts, fancy parallax effects, preview thumbnails and infinite scroll, turn off video autoplay, server pushes, reduce the number of displayed items and downgrade image quality, or even change how you deliver markup. Tim Vereecke has published a very detailed article on data-s(h)aver strategies featuring many options for data saving.
The header is currently supported only in Chromium, on the Android version of Chrome or via the Data Saver extension on a desktop device. Finally, you can also use the Network Information API to deliver low/high resolution images and videos based on the network type. Network Information API and specifically navigator.connection.effectiveType (Chrome 62+) use RTT, downlink, effectiveType values (and a few others) to provide a representation of the connection and the data that users can handle.
In this context, Max Stoiber speaks of connection-aware components. For example, with React, we could write a component that renders different elements for different connection types (see the sketch after the list below). As Max suggested, a component in a news article might output:
Offline: a placeholder with alt text,
2G / save-data mode: a low-resolution image,
3G on non-Retina screen: a mid-resolution image,
3G on Retina screens: high-res Retina image,
4G: an HD video.
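A rough sketch of such a component in React; the component name, props and breakpoints are made up, and navigator.connection is only available in Blink-based browsers, hence the conservative fallback to the 4G branch.

```js
import React from 'react';

// A hedged sketch of a connection-aware media component (names are hypothetical).
function ArticleMedia({ lowResSrc, midResSrc, hiResSrc, videoSrc, alt }) {
  const connection = navigator.connection || {};
  const type = connection.effectiveType || '4g'; // fallback where the API is missing
  const saveData = connection.saveData === true;
  const retina = window.devicePixelRatio > 1;

  if (!navigator.onLine) {
    return <img alt={alt} />;                               // offline: placeholder + alt text
  }
  if (saveData || type === 'slow-2g' || type === '2g') {
    return <img src={lowResSrc} alt={alt} />;               // 2G / save-data: low resolution
  }
  if (type === '3g') {
    return <img src={retina ? hiResSrc : midResSrc} alt={alt} />; // 3G: mid or Retina resolution
  }
  return <video src={videoSrc} poster={midResSrc} controls />;    // 4G: HD video
}

export default ArticleMedia;
```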
Dean Hume provides a practical implementation of similar logic using a service worker. For a video, we could display a video poster by default, and then display the “Play” icon as well as the video player shell, metadata of the video etc. on better connections. As a fallback for non-supporting browsers, we could listen to the canplaythrough event and use Promise.race() to time out the source loading if the canplaythrough event doesn’t fire within 2 seconds.
Consider making your components device memory-aware. Network connection gives us only one perspective at the context of the user though. Going further, you could also dynamically adjust resources based on available device memory, with the Device Memory API (Chrome 63+). navigator.deviceMemory returns how much RAM the device has in gigabytes, rounded down to the nearest power of two. The API also features a Client Hints Header, Device-Memory, that reports the same value.
Warm up the connection to speed up delivery. Use resource hints to save time with dns-prefetch (which performs a DNS lookup in the background), preconnect (which asks the browser to start the connection handshake (DNS, TCP, TLS) in the background), prefetch (which asks the browser to request a resource) and preload (which prefetches resources without executing them, among other things).
Most of the time these days, we’ll be using at least preconnect and dns-prefetch, and we’ll be cautious with using prefetch and preload; the former should only be used if you are confident about what assets the user will need next (for example, in a purchasing funnel).
Note that even with preconnect and dns-prefetch, the browser has a limit on the number of hosts it will look up/connect to in parallel, so it’s a safe bet to order them based on priority (thanks Philip!).
In fact, using resource hints is probably the easiest way to boost performance, and it works well indeed. When to use what? As Addy Osmani has explained, we should preload resources that we have high-confidence will be used in the current page. Prefetch resources likely to be used for future navigations across multiple navigation boundaries, e.g. Webpack bundles needed for pages the user hasn’t visited yet.
Addy’s article on “Loading Priorities in Chrome” shows how exactly Chrome interprets resource hints, so once you’ve decided which assets are critical for rendering, you can assign high priority to them. To see how your requests are prioritized, you can enable a “priority” column in the Chrome DevTools network request table (as well as Safari Technology Preview).
A few gotchas to keep in mind: preload is good for moving the start download time of an asset closer to the initial request, but preloaded assets land in the memory cache which is tied to the page making the request. preload plays well with the HTTP cache: a network request is never sent if the item is already there in the HTTP cache.
Hence, it’s useful for late-discovered resources, a hero image loaded via background-image, inlining critical CSS (or JavaScript) and pre-loading the rest of the CSS (or JavaScript). Also, a preload tag can initiate a preload only after the browser has received the HTML from the server and the lookahead parser has found the preload tag.
Preloading via the HTTP header is a bit faster, since we don’t have to wait for the browser to parse the HTML to start the request. Early Hints will help even further, enabling preload to kick in even before the response headers for the HTML are sent, and Priority Hints (coming soon) will help us indicate loading priorities for scripts.
Use service workers for caching and network fallbacks. No performance optimization over a network can be faster than a locally stored cache on a user’s machine. If your website is running over HTTPS, use the “Pragmatist’s Guide to Service Workers” to cache static assets in a service worker cache and store offline fallbacks (or even offline pages) and retrieve them from the user’s machine, rather than going to the network. Also, check Jake’s Offline Cookbook and the free Udacity course “Offline Web Applications.”
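A minimal sketch of that idea, assuming a sw.js at the site root and placeholder asset paths (a library like Workbox, mentioned below, handles the many edge cases for you); register it from the page with navigator.serviceWorker.register('/sw.js'):

```js
// sw.js — cache a few static assets at install time, serve them cache-first,
// and fall back to an offline page when the network is unavailable.
const CACHE = 'static-v1';
const OFFLINE_URL = '/offline.html';

self.addEventListener('install', event => {
  event.waitUntil(
    caches.open(CACHE).then(cache =>
      cache.addAll([OFFLINE_URL, '/css/main.css', '/js/main.js'])
    )
  );
});

self.addEventListener('fetch', event => {
  event.respondWith(
    caches.match(event.request).then(cached =>
      cached ||
      fetch(event.request).catch(() =>
        event.request.mode === 'navigate'
          ? caches.match(OFFLINE_URL) // offline fallback for page navigations
          : Response.error()
      )
    )
  );
});
```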
Browser support? As stated above, it’s widely supported (Chrome, Firefox, Safari TP, Samsung Internet, Edge 17+) and the fallback is the network anyway. Does it help boost performance? Oh yes, it does. And it’s getting better, e.g. with Background Fetch allowing background uploads/downloads from a service worker. Shipped in Chrome 71.
There are a few gotchas to keep in mind, though. With a service worker in place, we need to beware of range requests in Safari (if you are using Workbox for a service worker, it has a range requests module). If you ever stumble upon a “DOMException: Quota exceeded” error in the browser console, look into Gerardo’s article When 7KB equals 7MB.
A good starting point for using service workers would be Workbox, a set of service worker libraries built specifically for building progressive web apps.
Are you using service workers on the CDN/Edge, e.g. for A/B testing? At this point, we are quite used to running service workers on the client, but with CDNs implementing them on the server, we could use them to tweak performance on the edge as well.
Optimize rendering performance. Isolate expensive components with CSS containment — for example, to limit the scope of the browser’s styles, of layout and paint work for off-canvas navigation, or of third-party widgets. Make sure that there is no lag when scrolling the page or when an element is animated, and that you’re consistently hitting 60 frames per second. If that’s not possible, then at least making the frames per second consistent is preferable to a mixed range of 60 to 15. Use CSS’ will-change to inform the browser of which elements and properties will change.
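For instance, a common pattern is to set will-change from JavaScript just before an element is likely to animate and to remove it afterwards, so the browser doesn’t hold on to extra layers and memory longer than needed. A minimal sketch, with placeholder selectors:

```js
// Hint the browser ahead of a likely animation, then clean up afterwards.
const menu = document.querySelector('.offcanvas-menu');
const toggle = document.querySelector('.menu-toggle');

// The user is hovering the toggle, so the menu is likely to slide in soon.
toggle.addEventListener('mouseenter', () => {
  menu.style.willChange = 'transform';
});

// Once the transition finishes, drop the hint so resources can be released.
menu.addEventListener('transitionend', () => {
  menu.style.willChange = 'auto';
});
```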
Have you optimized the rendering experience? While the sequence in which components appear on the page and the strategy for how we serve assets to the browser matter, we shouldn’t underestimate the role of perceived performance, too. The concept deals with the psychological aspects of waiting, basically keeping customers busy or engaged while something else is happening. That’s where perception management, preemptive start, early completion and tolerance management come into play.
What does it all mean? While loading assets, we can try to always be one step ahead of the customer, so the experience feels swift while there is quite a lot happening in the background. To keep the customer engaged, we can test skeleton screens (implementation demo) instead of loading indicators, add transitions/animations and basically cheat the UX when there is nothing more to optimize. Beware though: skeleton screens should be tested before deploying as some tests showed that skeleton screens can perform the worst by all metrics.
HTTP/2
Migrate to HTTPS, then turn on HTTP/2. With Google moving towards a more secure web and the eventual treatment of all HTTP pages in Chrome as “not secure,” a switch to an HTTP/2 environment is unavoidable. HTTP/2 is supported very well; it isn’t going anywhere; and, in most cases, you’re better off with it. Once you’re running on HTTPS, you can get a major performance boost with service workers and server push (at least in the long term).
The most time-consuming task will be to migrate to HTTPS, and depending on how large your HTTP/1.1 user base is (that is, users on legacy operating systems or with legacy browsers), you may have to ship a different build with different performance optimizations for those legacy browsers, which would require you to adopt a different build process. Beware: setting up both the migration and a new build process might be tricky and time-consuming. For the rest of this article, I’ll assume that you’re either switching to or have already switched to HTTP/2.
Properly deploy HTTP/2. Again, serving assets over HTTP/2 requires a partial overhaul of how you’ve been serving assets so far. You’ll need to find a fine balance between packaging modules and loading many small modules in parallel. At the end of the day, the best request is still no request; beyond that, the goal is to strike a balance between quick first delivery of assets and caching.
On the one hand, you might want to avoid concatenating assets altogether, instead breaking down your entire interface into many small modules, compressing them as a part of the build process, referencing them via the “scout” approach and loading them in parallel. A change in one file won’t require the entire style sheet or JavaScript to be re-downloaded. It also minimizes parsing time and keeps the payloads of individual pages low.
On the other hand, packaging still matters. First, compression will suffer: the compression of a large package will benefit from dictionary reuse, whereas small separate packages will not. There’s standards work to address that, but it’s far out for now. Second, browsers have not yet been optimized for such workflows. For example, Chrome will trigger inter-process communications (IPCs) linear to the number of resources, so including hundreds of resources will have browser runtime costs.
Still, you can try to load CSS progressively. In fact, since Chrome 69, in-body CSS no longer blocks rendering for Chrome. Obviously, by doing so, you are actively penalizing HTTP/1.1 users, so you might need to generate and serve different builds to different browsers as part of your deployment process, which is where things get slightly more complicated. You could get away with HTTP/2 connection coalescing, which allows you to use domain sharding while benefiting from HTTP/2, but achieving this in practice is difficult, and in general, it’s not considered to be good practice.
What to do? Well, if you’re running over HTTP/2, sending around 6–10 packages seems like a decent compromise (and isn’t too bad for legacy browsers). Experiment and measure to find the right balance for your website.
Do your servers and CDNs support HTTP/2? Different servers and CDNs are probably going to support HTTP/2 differently. Use Is TLS Fast Yet? to check your options, or quickly look up how your servers are performing and which features you can expect to be supported.
Is OCSP stapling enabled? By enabling OCSP stapling on your server, you can speed up your TLS handshakes. The Online Certificate Status Protocol (OCSP) was created as an alternative to the Certificate Revocation List (CRL) protocol. Both protocols are used to check whether an SSL certificate has been revoked. However, the OCSP protocol does not require the browser to spend time downloading and then searching a list for certificate information, hence reducing the time required for a handshake.
Have you adopted IPv6 yet? Because we’re running out of space with IPv4 and major mobile networks are adopting IPv6 rapidly (the US has reached a 50% IPv6 adoption threshold), it’s a good idea to update your DNS to IPv6 to stay bulletproof for the future. Just make sure that dual-stack support is provided across the network; it allows IPv6 and IPv4 to run simultaneously alongside each other. After all, IPv6 is not backwards-compatible. Also, studies show that IPv6 has made websites 10 to 15% faster thanks to neighbor discovery (NDP) and route optimization.
Is HPACK compression in use? If you’re using HTTP/2, double-check that your servers implement HPACK compression for HTTP response headers to reduce unnecessary overhead. Because HTTP/2 servers are relatively new, they may not fully support the specification, with HPACK being an example. H2spec is a great (if very technically detailed) tool to check that. HPACK’s compression algorithm is quite impressive, and it works.
Have you optimized your auditing workflow? It might not sound like a big deal, but having the right settings in place at your fingertips might save you quite a bit of time in testing. Consider using Tim Kadlec’s Alfred Workflow for WebPageTest for submitting a test to the public instance of WebPageTest.
And if you need to debug something quickly but your build process seems to be remarkably slow, keep in mind that “whitespace removal and symbol mangling accounts for 95% of the size reduction in minified code for most JavaScript, not elaborate code transforms. You can simply disable compression to speed up Uglify builds by 3 to 4 times.”
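If you’re minifying with uglify-js 3 directly, that’s a single option; a sketch, with placeholder file paths (adapt to however your bundler exposes its minifier settings):

```js
// Keep mangling (most of the size win) but skip the slow compress passes
// for quick debug builds.
const UglifyJS = require('uglify-js');
const fs = require('fs');

const code = fs.readFileSync('dist/app.js', 'utf8');
const result = UglifyJS.minify(code, {
  compress: false, // skip expensive code transforms
  mangle: true     // still shorten identifiers
});

if (result.error) throw result.error;
fs.writeFileSync('dist/app.min.js', result.code);
```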
Have you tested in proxy browsers and legacy browsers? Testing in Chrome and Firefox is not enough. Look into how your website works in proxy browsers and legacy browsers. UC Browser and Opera Mini, for instance, have a significant market share in Asia (up to 35%). Measure the average internet speed in your countries of interest to avoid big surprises down the road. Test with network throttling, and emulate a high-DPI device. BrowserStack is fantastic, but test on real devices as well.
Have you tested the accessibility performance? When the browser starts to load a page, it builds a DOM, and if there is an assistive technology like a screen reader running, it also creates an accessibility tree. The screen reader then has to query the accessibility tree to retrieve the information and make it available to the user — sometimes by default, and sometimes on demand. And sometimes it takes time.
When talking about fast Time to Interactive, we usually mean an indicator of how soon a user can interact with the page by clicking or tapping on links and buttons. The context is slightly different with screen readers. In that case, fast Time to Interactive means how much time passes until the screen reader can announce navigation on a given page and a screen reader user can actually hit the keyboard to interact.
Léonie Watson has given an eye-opening talk on accessibility performance and, specifically, the impact slow loading has on screen reader announcement delays. Screen reader users are used to fast-paced announcements and quick navigation, and therefore might potentially be even less patient than sighted users.
Large pages and DOM manipulations with JavaScript will cause delays in screen reader announcements. This is a rather unexplored area that could use some attention and testing, as screen readers are available on literally every platform (JAWS, NVDA, VoiceOver, Narrator, Orca).
Is continuous monitoring set up? Having a private instance of WebPageTest is always beneficial for quick and unlimited tests. However, a continuous monitoring tool, like Sitespeed, Calibre or SpeedCurve, with automatic alerts will give you a more detailed picture of your performance. Set your own user-timing marks to measure and monitor business-specific metrics. Also, consider adding automated performance regression alerts to monitor changes over time.
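A minimal sketch of a custom user-timing mark; the “hero image” metric and selector are hypothetical, and many RUM tools can pick up these entries automatically:

```js
// Measure a business-specific moment with the User Timing API.
performance.mark('hero-start');

const hero = document.querySelector('.hero img');
hero.addEventListener('load', () => {
  performance.mark('hero-loaded');
  performance.measure('hero-render', 'hero-start', 'hero-loaded');

  const [entry] = performance.getEntriesByName('hero-render');
  console.log(`Hero image ready after ${Math.round(entry.duration)}ms`);
  // Beacon `entry.duration` to your analytics/RUM endpoint here.
});
```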
Look into using RUM solutions to monitor changes in performance over time. For automated, unit-test-like load testing, you can use k6 with its scripting API. Also, look into SpeedTracker, Lighthouse and Calibre.
Quick Wins
This list is quite comprehensive, and completing all of the optimizations might take quite a while. So, if you had just 1 hour to get significant improvements, what would you do? Let’s boil it all down to 12 low-hanging fruits. Obviously, before you start and once you finish, measure results, including start rendering time and Speed Index on a 3G and cable connection.
Measure the real-world experience and set appropriate goals. Good goals to aim for are First Meaningful Paint < 1 s, a Speed Index value < 1250 and Time to Interactive < 5 s on slow 3G (< 2 s for repeat visits). Optimize for start rendering time and Time to Interactive.
Prepare critical CSS for your main templates, and include it in the <head> of the page. (Your budget is 14 KB.) For CSS/JS, operate within a critical file size budget of max. 170 KB gzipped (0.7 MB decompressed).
Trim, optimize, defer and lazy-load as many scripts as possible, check lightweight alternatives and limit the impact of third-party scripts.
Serve legacy code only to legacy browsers with <script type="module"> (modern browsers get the modern bundle; legacy browsers fall back to nomodule scripts).
Experiment with regrouping your CSS rules and test in-body CSS.
Add resource hints to speed up delivery with dns-prefetch, preconnect, prefetch and preload.
Subset web fonts and load them asynchronously, and utilize font-display in CSS for fast first rendering.
Optimize images, and consider using WebP for critical pages (such as landing pages).
Check that HTTP cache headers and security headers are set properly.
Enable Brotli or Zopfli compression on the server. (If that’s not possible, don’t forget to enable Gzip compression.)
If HTTP/2 is available, enable HPACK compression and start monitoring mixed-content warnings. Enable OCSP stapling.
Cache assets such as fonts, styles, JavaScript and images in a service worker cache.
Download The Checklist (PDF, Apple Pages)
With this checklist in mind, you should be prepared for any kind of front-end performance project. Feel free to download the print-ready PDF of the checklist as well as an editable Apple Pages document to customize the checklist for your needs:
Some of the optimizations might be beyond the scope of your work or budget or might just be overkill given the legacy code you have to deal with. That’s fine! Use this checklist as a general (and hopefully comprehensive) guide, and create your own list of issues that apply to your context. But most importantly, test and measure your own projects to identify issues before optimizing. Happy performance results in 2019, everyone!
A huge thanks to Guy Podjarny, Yoav Weiss, Addy Osmani, Artem Denysov, Denys Mishunov, Ilya Pukhalski, Jeremy Wagner, Colin Bendell, Mark Zeman, Patrick Meenan, Leonardo Losoviz, Andy Davies, Rachel Andrew, Anselm Hannemann, Patrick Hamann, Tim Kadlec, Rey Bango, Matthias Ott, Peter Bowyer, Phil Walton, Mariana Peralta, Philipp Tellis, Ryan Townsend, Ingrid Bergman, Mohamed Hussain S. H., Jacob Groß, Tim Swalling, Bob Visser, Kev Adamson, Adir Amsalem, Aleksey Kulikov and Rodney Rehm for reviewing this article, as well as our fantastic community which has shared techniques and lessons learned from its work in performance optimization for everybody to use. You are truly smashing!
While everyone is talking about big-picture trends such as designing for voice and virtual reality, there are more immediate design elements that you can see (and deploy) right now for a more on-trend website.
From websites without images above the scroll, to ecommerce that disguises itself as content, to bright blue everything, here’s a look at what’s trending this month.
1. No “Art” Above the Scroll
Have you noticed how many websites don’t have images or video above the scroll? This no “art” design style used to be reserved for coming soon or construction pages that didn’t have images, but it’s trending even for website designs with plenty of other imagery.
If you have a message or statement that is the most important thing for users to know right away, this can be an effective design technique. It works because there’s nothing else to see. (Unless the user refuses to read the words and abandons the design, which can be a risk with this style.)
Make the most of a no art design with beautiful typography and strong color choices.
These design elements can serve as art on their own and help add visual interest to the words on the screen.
Each of the three examples below does this in a slightly different way.
We Are Crowd uses a strong serif/sans-serif typography pairing on a brightly colored background. Users are enticed to delve into the design thanks to an animated scroller on the homepage. The no “art” design actually alternates between image and non-image panels, showing users there is something to look at.
Easys uses a simple serif in the center of the screen to draw the eye. Bright red text on a stark background draws you right to the lettering. There’s also a call to scroll, where the design fills with more color, images and interesting shapes.
David Pacheco’s portfolio takes a more minimal approach to the no art aesthetic. With simple, but oversized sans serif typography and a black background, there’s no way not to know what the website is about. The stark nature of the design makes the small, animated scroller at the bottom left more obvious, and it’s worth the effort with great images of his projects.
2. Bright Blue
It might not come as a surprise that a bright color is trending. But the hue might surprise you. Even though a bright coral was named color of the year by Pantone, bright blue is trending.
Bright blue backgrounds, text elements, overlays and more are everywhere. The color, which has roots in the Material Design palette, is rather cheerful and works with almost any content type. These are reasons why it might be so popular.
Blue is traditionally one of the most used colors in website design, mostly because of its harmonious, pleasing and trustworthy associations. This brighter blue adds to that with a somewhat lighter feel.
The best part of a website design trend that’s rooted in color is that it’s easy to use. Designers don’t have to overhaul a brand color palette to incorporate bright blue into the design. Sneak it in for accents or in images.
This is also a trend that you can add to a project quickly and without a lot of planning for a modern flair that can be deployed (and even removed) with minimal effort.
Like the blues you see in the examples below? Try these color mixes to replicate the hues:
Matt Downey uses a bright blue with an almost purple undertone; hex #263d83.
Output scrolls through plenty of bright blue options on the homepage, with more purple (hex #4d34d8) and brighter sky blue (hex #3a63d8) options.
Florent Biffi puts a new spin on navy with a brighter tone; hex #142877.
3. Ecommerce That Looks Like Content
Content is king…even when it comes to online sales.
Maybe as a design trend that’s a carryover from social media or maybe just to try something that’s a little more engaging, more ecommerce websites are deploying site architectures that look less like pages of items for sale and more like integrated content elements with a “buy it” button included.
This idea makes sales more about lifestyle and uses a product placement philosophy to sell items. This design concept can take a lot of time to plan out and design effectively, but it can be worthwhile if it resonates with a new customer or shopper base.
It likely takes a certain type of product as well. Ecommerce websites that use a lot of content, and that look more like content than product sales or information, tend to be lifestyle brands such as clothing or home textiles. This is because these items don’t need a lot of explanation (most shoppers know what size shirt they wear) and create a sense of urgency in the desire to buy. A customer sees the shirt and wants it because of the associated sense of style or connection to the brand.
It’s an interesting way to create a more authentic connection with users, a key marketing concept, particularly among millennial shoppers and audiences. It’s one of those design trends that has a lot of potential to grow, but only if designers and developers are willing to commit the time and resources to overhaul their ecommerce projects in this manner.
Conclusion
When it comes to using website design trends, the ones that come and go the quickest are the ones that are easiest to deploy. Some of those, such as color, are on this list. Others, like ecommerce that looks like content, take a lot more strategy to do well. Which type of trend are you most likely to try?
What trends are you loving (or hating) right now? I’d love to see some of the websites that you are fascinated with. Drop me a link on Twitter; I’d love to hear from you.
Every week users submit a lot of interesting stuff on our sister site Webdesigner News, highlighting great content from around the web that can be of interest to web designers.
The best way to keep track of all the great stories and news being posted is simply to check out the Webdesigner News site. However, in case you missed some, here’s a quick and useful compilation of the most popular designer news that we curated from the past week.
Note that this is only a very small selection of the links that were posted, so don’t miss out and subscribe to our newsletter and follow the site daily for all the news.
8 Undoubtably True Predictions for UX in 2019
Design Style Guides to Learn from in 2019
A Collection of Great UI Designs
Site Design: Coding is Fun!
8 Examples of How to Effectively Break Out of the Grid
The 15 Coolest Interfaces of the Year
4 Useless Things You Shouldn’t Have Put in your Design Portfolio
Meet Twill: An Open Source CMS Toolkit for Laravel
The Grumpy Designer’s Bold Predictions for 2019
This is not User Experience
Branding Design – What You Need to Know Before Creating a Brand Identity
The Year that Was: 2018 in Web Design
Flat Design Vs. Traditional Design: Comparative Experimental Study
Users Don’t Read
Writing Copy for Landing Pages
The Elements of UI Engineering
Motion Design Looks Hard, but it Doesn’t Have to Be
Merge by UXPin
Responsive Design, and the Role of Development in Design
Material Design Colors Listed
Designing a Great User Onboarding Experience
How to Name UI Components
Is Design Valuable?
40+ Best Bootstrap Admin Templates of 2019
UI Design: Look Back at 12 Top Interface Design Trends in 2018
Want more? No problem! Keep track of top design news from around the web with Webdesigner News.
Last year, the team here at CSS-Tricks compiled a list of our favorite posts, trends, topics, and resources from around the world of front-end development. We had a blast doing it and found it to be a nice recap of the industry as we saw it over the course of the year. Well, we’re doing it again this year!
With that, here’s everything that Sarah, Robin, Chris and I saw and enjoyed over the past year.
There are a few themes that cross languages, and one of them is good code review. Even though Nina Zakharenko gives talks and makes resources about Python, her talk about code review skills is especially notable because it applies across many disciplines. She’s got a great arc to this talk and I think her deck is an excellent resource, but you can take this a step even further and think critically about your own team, what works for it, and what practices might need to be reconsidered.
I also enjoyed this sarcastic tweet that brings up a good point:
When reviewing a PR, it’s essential that you leave a comment. Any comment. Even if the PR looks great and you have no substantial feedback, find something trivial to nitpick or question. This communicates intelligence and mastery, and is widely appreciated by your colleagues.
I’ve been guilty myself of commenting on a really clean pull request just to say something, and it’s healthy for us as a community to revisit why we do things like this.
Sophie Alpert, manager of the React core team, also wrote a great post along these lines right at the end of the year called Why Review Code. It’s a good resource to turn to when you’d like to explain the need for code reviews in the development process.
The year of (creative) code
So many wonderful creative coding resources were made this year. Creative coding projects might seem frivolous but you can actually learn a ton from making and playing with them. Matt DesLauriers recently taught a course called Creative Coding with Canvas & WebGL for Frontend Masters that serves as a good example.
CodePen is always one of my favorite places to check out creative work because it provides a way to reverse-engineer the work of other people and learn from their source code. CodePen has also started coding challenges, adding yet another way to motivate creative experiments and collective learning opportunities. Marie Mosley did a lot of work to make that happen, and her work on CodePen’s great newsletter is equally awesome.
You should also check out Monica Dinculescu’s projects, because she has been sharing some amazing work. There are not one, not two, but three (!) that use machine learning alone. Go see all of her Glitch projects. And, for what it’s worth, Glitch is a great place to explore creative code and remix your own as well.
GitHub Actions
I think hands-down one of the most game-changing developments this year is GitHub Actions. The fact that you can manage all of your testing, deployments, and project issues as containers chained in a unified workflow is quite amazing.
Containers are a great fit for actions because of their flexibility: you’re not limited to a single kind of compute, and so much is possible! I did a write-up about GitHub Actions covering the feature in full. And, if you’re digging into containers, you might find the dive repo helpful because it provides a way to explore a Docker image and its layer contents.
Actions are still in beta but you can request access — they’re slowly rolling out now.
UI property generators
I really like that we’re automating some of the code that we need to make beautiful front-end experiences these days. In terms of color there’s color by Adobe, coolors, and uiGradients. There are even generators for other things, like gradients, clip-path, font pairings, and box-shadow. I am very much all for this. These are the kind of tools that speed up development and allow us to use advanced effects, no matter the skill level.
Ire has been writing a near constant stream of wondrous articles about front-end development on her blog, Bits of Code, over the past year, and it’s been super exciting to keep up with her work. It seems like she’s posting something I find useful almost every day, from basic stuff like when hover, focus and active states apply to accessibility tips like the aria-live attribute.
“The All Powerful Front-end Developer”
Chris gave a talk this year about the ways the role of front-end development is changing… and for the better. It was perhaps the most inspiring talk I saw this year. Talks about front-end stuff are sometimes pretty dry, but Chris does something else here. He covers a host of new tools we can use today to do things that previously required a ton of back-end skills. Chris even made a website all about these new tools, which are often categorized as “serverless.”
Even if none of these tools excite you, I would recommend checking out the talk: Chris’s enthusiasm is electric and made me want to roll up my sleeves and get to work on something fun, weird and exciting.
Future Fonts
The Future Fonts marketplace turned out to be a great place to find new and experimental typefaces this year. The typeface Obviously is a good example of that. But the difference between Future Fonts and other marketplaces is that you can buy fonts that are in beta and still under development. If you get in on the ground floor and buy a font for $10, that shows the designer there’s interest in the typeface, which may spur more features for it, like new weights, widths or even OpenType features.
It’s a great way to support type designers while getting a ton of neat and experimental typefaces at the same time.
React Conf 2018
The talks from React Conf 2018 will get you up to speed with the latest React news. It’s interesting to see how React Hooks let you “use state and other React features without writing a class.”
It’s also worth calling out that a lot of folks really improved our Guide to React here on CSS-Tricks so that it now contains a ton of advice about how to get started and how to level up on both basic and advanced practices.
The Victorian Internet
This is a weird recommendation because The Victorian Internet is a book and it wasn’t published this year. But! It’s certainly the best book I’ve read this year, even if it’s only tangentially related to web stuff. It made me realize that the internet we’re building today is one that’s much older than I first expected. The book focuses on the laying of the Transatlantic submarine cables, the design of codes and the codebreakers, fraudsters that used the telegraph to find their marks, and those that used it to find the person they’d marry. I really can’t recommend this book enough.
Figma
The browser-based design tool Figma continued to release a wave of new features that makes building design systems and UI kits easier than ever before. I’ve been doing a ton of experiments with it to see how it helps designers communicate, as well as how to build more resilient components. It’s super impressive to see how much the tools have improved over the past year and I’m excited to see it improve in the new year, too.
It seems there was a lot of chatter this year about the impact of third party scripts. Whether it’s the growing ubiquity of all-things-JavaScript or whatever, this topic covers a wide and interesting ground, including performance, security and even hard costs, to name a few.
My personal favorite post about this was Paulo Mioni’s deep dive into the anatomy of a malicious script. Sure, the technical bits are a great learning opportunity, but what really makes this piece is the way it reads like a true crime novel.
Gutenberg, Gutenberg and more Gutenberg
There was so much noise leading up to the new WordPress editor that the release of WordPress 5.0 containing it felt anti-climactic. No one was hurt or injured amid plenty of concerns, though there is indeed room for improvement.
Lara Schneck and Andy Bell teamed up for a hefty seven-part series aimed at getting developers like us primed for the changes, and it’s incredible. No stone is left unturned, and it’s perfectly suitable for beginners and experts alike.
Solving real life issues with UX
I like to think that I care a lot about users in the work I do and that I do my best to empathize so that I can anticipate needs or feelings as they interact with the site or app. That said, my mind was blown away by a study Lucas Chae did on the search engine experience of people looking for a way to kill themselves. I mean, depression and suicide are topics that are near and dear to my heart, but I never thought about finding a practical solution for handling it in an online experience.
So, thanks for that, Lucas. It inspired me to piggyback on his recommendations with a few of my own. Hopefully, this is a conversation that goes well beyond 2018 and sparks meaningful change in this department.
The growing gig economy
Freelancing is one of my favorite things to talk about at great length with anyone and everyone who is willing to talk shop and that’s largely because I’ve learned a lot about it in the five years I’ve been in it.
But if you take my experience and quadruple it, then you get a treasure trove of wisdom like Adam Coti shared in his collection of freelancing lessons learned over 20 years of service.
Freelancing isn’t for everyone. Neither is remote work. Adam’s advice is what I wish I had going into this five years ago.
Browser ecology
I absolutely love the way Rachel Nabors likens web browsers to a biological ecosystem. It’s a stellar analogy and leads into the long and winding history of browser evolution.
Speaking of history, Jason Hoffman’s telling of the history about browsers and web standards is equally interesting and a good chunk of context to carry in your back pocket.
These posts were timely because this year saw a lot of movement in the browser landscape. Microsoft is dropping EdgeHTML for Blink and Google ramped up its AMP product. 2018 felt like a dizzying year of significant changes for industry giants!
“Don’t tell me how to build a front end!” we, front-end developers, cry out. We are very powerful now. We like to bring our own front-end stack, then use your back-end data and APIs. As this is happening, we’re seeing healthy things happen like content management systems evolving to headless frameworks and focus on what they are best at: content management. We’re seeing performance and security improvements through the power of static and CDN-backed hosting. We’re seeing hosting and server usage cost reductions.
But we’re also seeing unhealthy things we need to work through, like front-end developers being spread too thin. We have JavaScript-focused engineers failing to write clean, extensible, performant, accessible markup and styles, and, on the flip side, we have UX-focused engineers feeling left out, left behind, or asked to do development work suddenly quite far away from their current expertise.
GraphQL
Speaking of powerful front-end developers, giving us front-end developers a well-oiled GraphQL setup is extremely empowering. No longer do we need to be roadblocked by waiting for an API to be finished or data to be massaged into some needed format. All the data you want is available at your fingertips, so go get and use it as you will. This makes building and iterating on the front end faster, easier, and more fun, which will lead us to building better products. Apollo GraphQL is the thing to look at here.
While front-end is having a massive love affair with JavaScript, there are plenty of front-end developers happily focused elsewhere
This is what I was getting at in my first section. There is a divide happening. It’s always been there, but with JavaScript being absolutely enormous right now and showing no signs of slowing down, people are starting to fall through the schism. Can I still be a front-end developer if I’m not deep into JavaScript? Of course. I’m not going to tell you that you shouldn’t learn JavaScript, because it’s pretty cool and powerful and you just might love it, but if you’re focused on UX, UI, animation, accessibility, semantics, layout, architecture, design patterns, illustration, copywriting, and any combination of that and whatever else, you’re still awesome and useful and always will be. Hugs.
Just look at the book Refactoring UI or the course Learn UI Design as proof there is lots to know about UI design and being great at it requires a lot of training, practice, and skill just like any other aspect of front-end development.
Shamelessly using grid and custom properties everywhere
I remember when I first learned flexbox, it was all I reached for to make layouts. I still love flexbox, but now that we have grid and the browser support is nearly just as good, I find myself reaching for grid even more. Not that it’s a competition; they are different tools useful in different situations. But admittedly, there were things I would have used flexbox for a year ago that I use grid for now and grid feels more intuitive and more like the right tool.
Massive discussions around CSS-in-JS and approaches, like Tailwind
These discussions can get quite heated, but there is no ignoring the fact that the landscape of CSS-in-JS is huge, has a lot of fans, and seems to be hitting the right notes for a lot of folks. But it’s far from settled down. Libraries like Vue and Angular have their own framework-prescribed way of handling it, whereas React has literally dozens of options and a fast-moving landscape with libraries popping up and popular ones spinning down in favor of others. It does seem like the feature set is starting to settle down a little, so this next year will be interesting to watch.
Then there is the concept of atomic CSS on the other side of the spectrum, which is interesting in that it doesn’t seem to have slowed down at all either. Tailwind CSS is perhaps the hottest framework out there, gaining enough traction that Adam is going full time on it.
What could really shake this up is if the web platform itself decides to get into solving some of the problems that gave rise to these solutions. The shadow DOM already exists in Web Components Land, so perhaps there are answers there? Maybe the return of <style scoped>? Maybe new best practices will evolve that employ a single stylesheet per component? Who knows.
I’ve heard of multiple agencies where design systems are literally what they make for their clients. Not websites, design systems. I get it. If you give a team a really powerful and flexible toolbox to build their own site with, they will do just that. Giving them some finished pages, as polished as they might be, leaves them needing to dissect those themselves and figure out how to extend and build upon them when that need inevitably arrives. I think it makes sense for agencies, or special teams, to focus on extensible component-driven libraries that are used to build sites.
Machine Learning
Stuff like this blows me away:
I made a music sequencer! In JavaScript! It even uses Machine Learning to try to match drums to a synth melody you create!
Having open source libraries that help with machine learning and that are actually accessible for regular ol’ developers to use is a big deal.
Stuff like this will have real world-bettering implications:
I think I used machine learning to be nice to people! In this proof of concept, I’m creating dynamic alt text for screenreaders with Azure’s Computer Vision API.
You gotta check out the Unicode Pattern work (more) that Yuan Chuan does. He even shared some of his work and how he does it right here on CSS-Tricks. And follow that name link to CodePen for even more. This thing they have created is fantastic.
We’ve released the Most Hearted Pens, Posts, and Collections on CodePen for 2018! Just absolutely incredible work on here — it’s well worth exploring.
Remember CodePen has a three-tiered hearting system, so while the number next to the heart reflects the number of users who hearted the item, each of those could be worth 1, 2, or 3 hearts total. This list is a great place to find awesome people to follow on CodePen as well, and we’re working on ways to make following people a lot more interesting in 2019.