Archive

Archive for August, 2023

5 Best AI Text To Image Generators

August 3rd, 2023 No comments

As a web designer, you constantly need images that match ideas, fit with themes, or conform to your client’s whims. Trawling stock sites for images that might not exist is tedious and unproductive. Wouldn’t it be awesome if you could conjure up graphics just by describing what you need?

Categories: Designing, Others Tags:

How We Optimized Performance To Serve A Global Audience

August 3rd, 2023 No comments

I work for Bookaway, a digital travel brand. As an online booking platform, we connect travelers with transport providers worldwide, offering bus, ferry, train, and car transfers in over 30 countries. We aim to eliminate the complexity and hassle associated with travel planning by providing a one-stop solution for all transportation needs.

A cornerstone of our business model lies in the development of effective landing pages. These pages serve as a pivotal tool in our digital marketing strategy, not only providing valuable information about our services but also being designed to be easily discoverable through search engines. Although landing pages are a common practice in online marketing, we wanted to make the most of them.

SEO is key to our success. It increases our visibility and enables us to draw a steady stream of organic (or “free”) traffic to our site. While paid marketing strategies like Google Ads play a part in our approach as well, enhancing our organic traffic remains a major priority. The higher our organic traffic, the more profitable we become as a company.

We’ve known for a long time that fast page performance influences search engine rankings. It was only in 2020, though, that Google shared its concept of Core Web Vitals and how it impacts SEO efforts. Our team at Bookaway recently underwent a project to improve Web Vitals, and I want to give you a look at the work it took to get our existing site in full compliance with Google’s standards and how it impacted our search presence.

SEO And Web Vitals

In the realm of search engine optimization, performance plays a critical role. As the world’s leading search engine, Google is committed to delivering the best possible search results to its users. This commitment involves prioritizing websites that offer not only relevant content but also an excellent user experience.

Google's Core Web Vitals is a set of performance metrics that site owners can use to evaluate performance and diagnose issues. Each metric offers a different perspective on the user experience:

  • Largest Contentful Paint (LCP)
    Measures the time it takes for the main content on a webpage to load.
  • First Input Delay (FID)
    Assesses the time it takes for a page to become interactive.
    Note: Google plans to replace this metric with another one called Interaction to Next Paint (INP) beginning in 2024.
  • Cumulative Layout Shift (CLS)
    Calculates the visual stability of a page.
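
As a point of reference, here is a minimal sketch of how these three metrics can be captured in the browser with Google's open-source web-vitals library. The article itself doesn't show this code; the handler and the console logging are purely illustrative.

import { onCLS, onFID, onLCP } from 'web-vitals';

// Each handler fires once the browser has a value for its metric
// on the current page load.
function logMetric(metric: { name: string; value: number; id: string }) {
  // LCP and FID are reported in milliseconds; CLS is a unitless score.
  console.log(metric.name, metric.value, metric.id);
}

onLCP(logMetric); // Largest Contentful Paint
onFID(logMetric); // First Input Delay
onCLS(logMetric); // Cumulative Layout Shift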

While optimizing for FID and CLS was relatively straightforward, LCP posed a greater challenge due to the multiple factors involved. LCP is particularly vital for landing pages, which are predominantly content-driven and often the first touchpoint a visitor has with a website. A low LCP ensures that visitors can view the main content of your page sooner, which is critical for maintaining user engagement and reducing bounce rates.

Largest Contentful Paint (LCP)

LCP measures the perceived load speed of a webpage from a user’s perspective. It pinpoints the moment during a page’s loading phase when the primary — or “largest” — content has been fully rendered on the screen. This could be an image, a block of text, or even an embedded video. LCP is an essential metric because it gives a real-world indication of the user experience, especially for content-heavy sites.

However, achieving a good LCP score is often a multi-faceted process that involves optimizing several stages of loading and rendering. Each stage has its unique challenges and potential pitfalls, as other case studies show.

Here’s a breakdown of the moving pieces.

Time To First Byte (TTFB)

This is the time it takes for the first byte of information from the server to reach the user's browser. Beware that slow server response times can significantly increase TTFB, often due to server overload, network issues, or unoptimized logic on the server side.

Download Time of HTML

This is the time it takes to download the page’s HTML file. You need to beware of large HTML files or slow network connections because they can lead to longer download times.

HTML Processing

Once a web page's HTML file has been downloaded, the browser begins to process the contents line by line, translating code into the visual website that users interact with. If, during this process, the browser encounters a <script> tag that lacks either an async or defer attribute, the rendering of the webpage comes to a halt.

The browser must then pause to fetch and parse the corresponding files. These files can be complex and potentially take a significant amount of time to download and interpret, leading to a noticeable delay in the loading and rendering of the webpage. This is why the async and defer attributes are crucial, as they ensure an efficient, seamless web browsing experience.

Fetching And Decoding Images

This is the time taken to fetch, download, and decode images, particularly the largest contentful image. You need to look out for large image file sizes or improperly optimized images that can delay the fetching and decoding process.

First Contentful Paint (FCP)

This is the time it takes for the browser to render the first bit of content from the DOM. Beware of slow server response times, render-blocking JavaScript or CSS, and slow network connections, all of which can negatively affect FCP.

Rendering the Largest Contentful Element

This is the time taken until the largest contentful element (like a hero image or heading text) is fully rendered on the page. Watch out for complex design elements, large media files, and slow browser rendering, all of which can delay the time it takes for the largest contentful element to render.

Understanding and optimizing each of these stages can significantly improve a website’s LCP, thereby enhancing the user experience and SEO rankings.

I know that is a lot of information to unpack in a single sitting, and it definitely took our team time to wrap our minds around what it takes to achieve a low LCP score. But once we had a good understanding, we knew exactly what to look for and began analyzing the analytics of our user data to identify areas that could be improved.

Analyzing User Data

To effectively monitor and respond to our website’s performance, we need a robust process for collecting and analyzing this data.

Here’s how we do it at Bookaway.

Next.js For Performance Monitoring

Many of you reading this may already be familiar with Next.js, the popular open-source JavaScript framework. Among other things, it allows us to monitor our website's performance in real time.

One of the key Next.js features we leverage is the reportWebVitals function, a hook that allows us to capture the Web Vitals metrics for each page load. We can then forward this data to a custom analytics service. Most importantly, the function provides us with in-depth insights into our user experiences in real-time, helping us identify any performance issues as soon as they arise.
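
Here is a minimal sketch of what that hook can look like in a Next.js pages/_app file. The /api/web-vitals endpoint and the payload fields are assumptions made for illustration, not Bookaway's actual implementation.

// pages/_app.tsx
import type { AppProps, NextWebVitalsMetric } from 'next/app';

// Next.js calls this once per metric, per page load.
export function reportWebVitals(metric: NextWebVitalsMetric) {
  const body = JSON.stringify({
    name: metric.name,    // e.g. "LCP", "FID", "CLS", "TTFB"
    value: metric.value,  // milliseconds (CLS is a unitless score)
    id: metric.id,        // unique per page load
    route: window.location.pathname,
    timestamp: Date.now(),
  });

  // sendBeacon keeps working while the page unloads; fetch is the fallback.
  if (navigator.sendBeacon) {
    navigator.sendBeacon('/api/web-vitals', body);
  } else {
    fetch('/api/web-vitals', { method: 'POST', body, keepalive: true });
  }
}

export default function App({ Component, pageProps }: AppProps) {
  return <Component {...pageProps} />;
}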

Storing Data In BigQuery For Comprehensive Analysis

Once we capture the Web Vitals metrics, we store this data in BigQuery, Google Cloud’s fully-managed, serverless data warehouse. Alongside the Web Vitals data, we also record a variety of other important details, such as the date of the page load, the route, whether the user was on a mobile or desktop device, and the language settings. This comprehensive dataset allows us to examine our website’s performance from multiple angles and gain deeper insights into the user experience.
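
On the server, the ingestion can be as simple as streaming rows into a table with the official @google-cloud/bigquery Node client. The dataset, table, and column names below are hypothetical; they only mirror the fields described above.

import { BigQuery } from '@google-cloud/bigquery';

const bigquery = new BigQuery();

// Stream a batch of Web Vitals samples into the warehouse.
export async function storeWebVitals(rows: Array<Record<string, unknown>>) {
  await bigquery
    .dataset('web_vitals')   // hypothetical dataset name
    .table('page_metrics')   // hypothetical table name
    .insert(rows);
}

// Example row shape:
// { metric: 'LCP', value_ms: 1740, date: '2023-07-30',
//   route: '/route-landing', device: 'mobile', language: 'en' }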

The screenshot features an SQL query against our data table, focusing on the LCP Web Vital. It shows the retrieval of LCP values (in milliseconds) for specific visits across three unique page URLs that, in turn, represent three different landing pages we serve:

These values indicate how quickly major content items on these pages become fully visible to users.
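
To give a sense of the kind of query the screenshot describes, here is a sketch run through the same client. Once again, the table and column names are assumptions, not our actual schema.

import { BigQuery } from '@google-cloud/bigquery';

const bigquery = new BigQuery();

// Fetch recent LCP samples (in milliseconds) for a set of landing pages.
export async function lcpForLandingPages(routes: string[]) {
  const query = `
    SELECT route, value_ms AS lcp_ms, device, loaded_at
    FROM \`web_vitals.page_metrics\`
    WHERE metric = 'LCP'
      AND route IN UNNEST(@routes)
    ORDER BY loaded_at DESC
    LIMIT 100
  `;
  // Named parameters (@routes) avoid interpolating values into the SQL string.
  const [rows] = await bigquery.query({ query, params: { routes } });
  return rows;
}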

Visualizing Data with Looker Studio

We visualize performance data using Google’s Looker Studio (formerly called Data Studio). By transforming our raw data into interactive dashboards and reports, we can easily identify trends, pinpoint issues, and monitor improvements over time. These visualizations empower us to make data-driven decisions that enhance our website’s performance and, ultimately, improve our users’ experience.

Looker Studio offers a few key advantages:

  • Easy-to-use interface
    Looker Studio is intuitive and user-friendly, making it easy for anyone on our team to create and customize reports.
  • Real-time data
    Looker Studio can connect directly to BigQuery, enabling us to create reports using real-time data.
  • Flexible and customizable
    Looker Studio enables us to create customized reports and dashboards that perfectly suit our needs.

Here are some examples:

This screenshot shows a crucial functionality we’ve designed within Looker Studio: the capability to filter data by specific groups of pages. This custom feature proves to be invaluable in our context, where we need granular insights about different sections of our website. As the image shows, we’re honing in on our “Route Landing Page” group. This subset of pages has experienced over one million visits in the last week alone, highlighting the significant traffic these pages attract. This demonstration exemplifies how our customizations in Looker Studio help us dissect and understand our site’s performance at a granular level.

The graph presents the LCP values at the 75th percentile for users visiting the Route Landing Page group. This percentile captures the experience of the majority of our users while excluding outliers who may have exceptionally good or poor conditions.

A key advantage of using Looker Studio is its ability to segment data based on different variables. In the following screenshot, you can see that we have differentiated between mobile and desktop traffic.

Understanding The Challenges

In our journey, the key performance data we gathered acted as a compass, pointing us toward specific challenges that lay ahead. Influenced by factors such as global audience diversity, seasonality, and the intricate balance between static and dynamic content, these challenges surfaced as crucial areas of focus. It is within these complexities that we found our opportunity to refine and optimize web performance on a global scale.

Seasonality And A Worldwide Audience

As an international platform, Bookaway serves a diverse audience from various geographic locations. One of the key challenges that come with serving a worldwide audience is the variation in network conditions and device capabilities across different regions.

Adding to this complexity is the effect of seasonality. Much like physical tourism businesses, our digital platform also experiences seasonal trends. For instance, during winter months, our traffic increases from countries in warmer climates, such as Thailand and Vietnam, where it’s peak travel season. Conversely, in the summer, we see more traffic from European countries where it’s the high season for tourism.

The variation in our performance metrics, correlated with geographic shifts in our user base, points to a clear area of opportunity. We realized that we needed to consider a more global and scalable solution to better serve our global audience.

This understanding prompted us to revisit our approach to content delivery, which we’ll get to in a moment.

Layout Shifts From Dynamic And Static Content

We have been using dynamic content serving, where each request reaches our back-end server and triggers processes like database retrievals and page renderings. This server interaction is reflected in the TTFB metric, which measures the duration from the client making an HTTP request to the first byte being received by the client’s browser. The shorter the TTFB, the better the perceived speed of the site from the user’s perspective.

While dynamic serving provides simplicity in implementation, it imposes significant time costs due to the computational resources required to generate the pages and the latency involved in serving these pages to users at distant locations.

We recognize the potential benefits of serving static content, which involves delivering pre-generated HTML files like you would see in a Jamstack architecture. This could significantly improve the speed of our content delivery as it eliminates the need for on-the-fly page generation, thereby reducing TTFB. It also opens up the possibility for more effective use of caching strategies, potentially enhancing load times further.

As we envisage a shift from dynamic to static content serving, we anticipate it to be a crucial step toward improving our LCP metrics and providing a more consistent user experience across all regions and seasons.

In the following sections, we’ll explore the potential challenges and solutions we could encounter as we consider this shift. We’ll also discuss our thoughts on implementing a Content Delivery Network (CDN), which could allow us to fully leverage the advantages of static content serving.

Leveraging A CDN For Content Delivery

I imagine many of you already understand what a CDN is, but it is essentially a network of servers, often referred to as “edges.” These edge servers are distributed in data centers across the globe. Their primary role is to store (or “cache”) copies of web content — like HTML pages, JavaScript files, and multimedia content — and deliver it to users based on their geographic location.

When a user makes a request to access a website, the DNS routes the request to the edge server that’s geographically closest to the user. This proximity significantly reduces the time it takes for the data to travel from the server to the user, thus reducing latency and improving load times.

A key benefit of this mechanism is that it effectively transforms dynamic content delivery into static content delivery. When the CDN caches a pre-rendered HTML page, no additional server-side computations are required to serve that page to the user. This not only reduces load times but also reduces the load on our origin servers, enhancing our capacity to serve high volumes of traffic.

If the requested content is cached on the edge server and the cache is still fresh, the CDN can immediately deliver it to the user. If the cache has expired or the content isn’t cached, the CDN will retrieve the content from the origin server, deliver it to the user, and update its cache for future requests.

This caching mechanism also improves the website’s resilience to distributed denial-of-service (DDoS) attacks. By serving content from edge servers and reducing the load on the origin server, the CDN provides an additional layer of security. This protection helps ensure the website remains accessible even under high-traffic conditions.

CDN Implementation

Recognizing the potential benefits of a CDN, we decided to implement one for our landing pages. As our entire infrastructure is already hosted by Amazon Web Services (AWS), Amazon CloudFront was an immediate and obvious choice for our CDN solution. Its robust infrastructure, scalability, and wide network of edge locations around the world made it a strong candidate.

During the implementation process, we configured a key setting known as max-age. This determines how long a page remains “fresh.” We set this property to three days, and for those three days, any visitor who requests a page is quickly served with the cached version from the nearest edge location. After the three-day period, the page would no longer be considered “fresh.” The next visitor requesting that page wouldn’t receive the cached version from the edge location but would have to wait for the CDN to reach our origin servers and generate a fresh page.
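
As a rough sketch, a freshness window like this is typically communicated to CloudFront through the Cache-Control header that the origin sends (it can also be set via the distribution's TTL settings). The use of getServerSideProps and the exact directive values below are illustrative assumptions; 259,200 seconds equals three days.

import type { GetServerSideProps } from 'next';

export const getServerSideProps: GetServerSideProps = async ({ res }) => {
  // s-maxage targets shared caches such as the CDN edge,
  // while max-age=0 keeps individual browsers revalidating.
  res.setHeader('Cache-Control', 'public, max-age=0, s-maxage=259200');
  return { props: {} };
};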

This approach offered an exciting opportunity for us to enhance our web performance. However, transitioning to a CDN system also posed new challenges, particularly with the multitude of pages that were rarely visited. The following sections will discuss how we navigated these hurdles.

Addressing Many Pages With Rare Visits

Adopting the AWS CloudFront CDN significantly improved our website’s performance. However, it also introduced a unique problem: our “long tail” of rarely visited pages. With over 100,000 landing pages, each available in seven different languages, we managed a total of around 700,000 individual pages.

Many of these pages were rarely visited. Individually, each accounted for a small percentage of our total traffic. Collectively, however, they made up a substantial portion of our web content.

The infrequency of visits meant that our CDN’s max-age setting of three days would often expire without a page being accessed in that timeframe. This resulted in these pages falling out of the CDN’s cache. Consequently, the next visitor requesting that page would not receive the cached version. Instead, they would have to wait for the CDN to reach our origin server and fetch a fresh page.

To address this, we adopted a strategy known as stale-while-revalidate. This approach allows the CDN to serve a stale (or expired) page to the visitor, while simultaneously validating the freshness of the page with the origin server. If the server’s page is newer, it is updated in the cache.
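
In header terms, the strategy boils down to adding the stale-while-revalidate directive to the same Cache-Control response, assuming the CDN honors the directive. The one-day revalidation window below is an illustrative assumption, not our production value.

import type { GetServerSideProps } from 'next';

export const getServerSideProps: GetServerSideProps = async ({ res }) => {
  // Cache at the edge for 3 days; after that, the edge may serve the stale
  // copy for up to 1 more day while it refetches the page from the origin.
  res.setHeader(
    'Cache-Control',
    'public, max-age=0, s-maxage=259200, stale-while-revalidate=86400'
  );
  return { props: {} };
};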

This strategy had an immediate impact. We observed a marked and continuous enhancement in the performance of our long-tail pages. It allowed us to ensure a consistently speedy experience across our extensive range of landing pages, regardless of their frequency of visits. This was a significant achievement in maintaining our website’s performance while serving a global audience.

I am sure you are interested in the results. We will examine them in the next section.

Performance Optimization Results

Our primary objective in these optimization efforts was to reduce the LCP metric, a crucial aspect of our landing pages. The implementation of our CDN solution had an immediate positive impact, reducing LCP from 3.5 seconds to 2 seconds. Further applying the stale-while-revalidate strategy resulted in an additional decrease in LCP, bringing it down to 1.7 seconds.

A key component in the sequence of events leading to LCP is the TTFB, which measures the time from the user’s request to the receipt of the first byte of data by the user’s browser. The introduction of our CDN solution prompted a dramatic decrease in TTFB, from 2 seconds to 1.24 seconds.

Stale-While-Revalidate Improvement

This substantial reduction in TTFB was primarily achieved by transitioning to static content delivery, eliminating the need for back-end server processing for each request, and by capitalizing on CloudFront’s global network of edge locations to minimize network latency. This allowed users to fetch assets from a geographically closer source, substantially reducing processing time.

Therefore, it's crucial to highlight that the significant improvement in TTFB was one of the key factors that contributed to the reduction in our LCP time. This demonstrates the interdependent nature of web performance metrics and how enhancements in one area can positively impact others.

The overall LCP improvement — thanks to stale-while-revalidate — was around 15% for the 75th percentile.

User Experience Results

The “Page Experience” section in Google Search Console evaluates your website’s user experience through metrics like load times, interactivity, and content stability. It also reports on mobile usability, security, and best practices such as HTTPS. The screenshot below illustrates the substantial improvement in our site’s performance due to our implementation of the stale-while-revalidate strategy.

Conclusion

I hope that documenting the work we did at Bookaway gives you a good idea of the effort that it takes to tackle improvements for Core Web Vitals. Even though there is plenty of documentation and tutorials about them, I know it helps to know what it looks like in a real-life project.

And since everything I have covered in this article is based on a real-life project, it’s entirely possible that the insights we discovered at Bookaway will differ from yours. Where LCP was the primary focus for us, you may very well find that another Web Vital metric is more pertinent to your scenario.

That said, here are the key lessons I took away from my experience:

  • Optimize Website Loading and Rendering.
    Pay close attention to the stages of your website’s loading and rendering process. Each stage — from TTFB, download time of HTML, and FCP, to fetching and decoding of images, parsing of JavaScript and CSS, and rendering of the largest contentful element — needs to be optimized. Understand potential pitfalls at each stage and make necessary adjustments to improve your site’s overall user experience.
  • Implement Performance Monitoring Tools.
    Utilize tools such as Next.js for real-time performance monitoring and BigQuery for storing and analyzing data. Visualizing your performance data with tools like Looker Studio can help provide valuable insights into your website’s performance, enabling you to make informed, data-driven decisions.
  • Consider Static Content Delivery and CDN.
    Transitioning from dynamic to static content delivery can greatly reduce the TTFB and improve site loading speed. Implementing a CDN can further optimize performance by serving pre-rendered HTML pages from edge servers close to the user’s location, reducing latency and improving load times.

Categories: Others Tags:

Amac’s Creative Phone Repair Advertisement Leaves the Design Community in Awe

August 2nd, 2023 No comments

A Dutch Redditor posted a phone repair company’s creative advertisement last week, and designers were blown away by its ingenious double meaning.

Categories: Designing, Others Tags:

Apple is Running One-Day Coding Labs to Allow Developers to Test their Apps on Apple Vision Pro

August 2nd, 2023 No comments

Apple began accepting applications for one-day Vision Pro developer labs earlier this week. The sessions will allow developers to test and optimize their apps for the Apple Vision Pro.

Categories: Designing, Others Tags:

Colors of Positivity: Uplifting Your Website Design

August 2nd, 2023 No comments

Color plays a fundamental role in our lives. It sets the tone, evokes emotions, and influences our mood. Consequently, the colors you use for a website can have a tremendous impact on how a brand is perceived. Today, we will delve into the world of positive colors, their effect on web design, and how they can infuse your design with a sense of optimism.

Categories: Designing, Others Tags:

Reddit Updates its Design to Improve the Logged-out User Experience

August 1st, 2023 No comments

Reddit is taking critical steps to improve the user experience for logged-out visitors. Updates include a streamlined UI, more helpful search results, and improved performance.

Categories: Designing, Others Tags:

Stack Overflow Unveils Overflow AI – Its New Generative AI Program for Developers

August 1st, 2023 No comments

Stack Overflow just unveiled a roadmap detailing its plan for a new AI tool aimed at developers. OverflowAI promises semantic search functions, personalized answers, and seamless integration with developer utilities.

Categories: Designing, Others Tags:

Smashing Podcast Episode 64 With Alvin Bryan: What Is A Headless CMS?

August 1st, 2023 No comments

We’re talking about headless content management systems. What are they, and how do they differ from more traditional systems? Drew McLellan talks to Alvin Bryan to find out.

Show Notes

Weekly Update

Transcript

Drew: He's a developer advocate with the content management platform company, Contentful. Before that, he used to be a lead engineer for Dow Jones and The Wall Street Journal and has had various front-end roles. He's very UX driven and happiest when collaborating with designers and pushing boundaries as a team. And these days, he's learning a lot about DevRel and loving it. So we know he's an experienced developer, but did you know he once taught Catherine Zeta-Jones to do a cartwheel? My Smashing friends, please welcome Alvin Bryan. Hi Alvin, how are you?

Alvin: I’m smashing, thank you so much for having me here. It’s an honor.

Drew: Thanks for joining us. I wanted to talk to you today about one of the key technologies that's really at the center of so many projects, but perhaps these days doesn't get the spotlight shone on it so often because maybe it's not so glamorous as front-end frameworks or any of these other things. It's content management systems. We're all using them, but I think sometimes the discussion isn't there about it when it's so important. I just — before we start — want to address the elephant in the room, which is that you're a developer advocate for Contentful, and I know we have a really savvy audience here at Smashing, and they'd see right through anything that was a thinly veiled ad for your employer. So I just wanted to reassure the audience at this point that this is not that, rather it's the fact that your work leads you to have some really great up-to-date knowledge of the space and that's why you're the perfect guest for this episode. That's right, isn't it?

Alvin: Oh yeah. I think that's the difference between a developer advocate and a salesman. I'm not here to sell you anything, I'm here to help developers, whatever that looks like. At least this is how we approach DevRel at Contentful. It varies, and this could be a podcast episode on its own.

Drew: It could be, couldn’t it? What is developer relations? Is it a function of sales? Is it a function of marketing? Is it support? What is it?

Alvin: Yeah.

Drew: So yes, that’s a whole can of worms. Just to give a bit of background on me in this context, I’ve got a lot of history with the content management space from years of building bespoke systems for clients and then distilling all that experience down into a CMS product, which I founded in 2009 and then sold in 2021. All the CMS solutions that I’ve developed have followed this traditional model of the CMS being the entire platform that delivered your website. So it’d be taking content and taking templates and merging all that together to create HTML pages essentially. Is that approach to content management still a valid thing in 2023, do you think?

Alvin: I think it’s valid. Well, it’s valid depending on what you’re trying to build. Squarespace is, I’m pretty sure they’re doing great. I’ve not looked at anything, any numbers, but they’ve been doing great for years and I’m sure they’ll continue. So yeah, it’s definitely a valid thing, but I think for the sort of place that would employ a developer, that may not be anymore.

Drew: It’s almost that market from a development point of view, it’s almost like a solved problem, isn’t it? There are so many good CMSs for rolling out, for example, small websites. I don’t know what the latest stats on how much of the web is powered by WordPress, but it’s approaching half, isn’t it?

Alvin: Yeah. I believe it kept increasing as well, right?

Drew: Right.

Alvin: Yeah, so definitely, it’s a thing for sure.

Drew: I think we're here today to talk about headless CMS, which of course is a different approach to the same problem. I think most of us will have heard of Contentful in some capacity over the last few years as one of the rising stars in the headless CMS space. And you really can't talk about content management. You can't have a content management discussion these days without headless being a factor in that discussion. We mentioned WordPress, but even WordPress has a headless mode. Drupal have what they call a decoupled mode, which I think is just the same thing. So getting down to brass tacks, what do we mean when we say a headless CMS? What sort of problem is it solving for us?

Alvin: The problem it's solving is, it's making a distinction between what the CMS manages and what you get out of it. With the traditional CMS, you're tied to a website or a page, where you made a page on WordPress and it ended up being a page on your website. And it's the approach that, as we said before, this is what Squarespace does, this is what they all do. With headless, you manage your content and you retrieve that content with an API call, so the way that looks on your website is completely decoupled. And this solves a lot of problems, especially with bigger companies. So you can imagine, with Contentful, one of our biggest clients is Ikea, and you can imagine that they don't just have content on their website, they have physical catalogs, they have ads on the side of the road, so all of that. You really have to break away from this old, one page in the CMS equals one page on the website.

Drew: So you end up more with a multipurpose repository of content with an API that you can then access it. So if you’re Ikea, you can pull the same product description into your mobile app and on your website and into, what, any number. So yeah, I guess it is decoupling, isn’t it? It’s, rather than saying, this content is being produced on this HTML page, it’s saying, this is a system for managing content and here is an API for getting at that content and using it however you want. So it sounds like it makes a whole bunch of problems, especially around the reusing content space, it makes that a lot easier. Are there any things, do you think, that using this approach make more difficult?

Alvin: Well, it's the time to iteration, because depending on how well your system is set up, you can get around this. But the beauty of it, what we developers love about it is, we have control. What other people in the organization tend to hate is that we have control. And as a result, if you want a brand new section on your website, you need a designer to design it and a developer to make it work. You can work around it with templates and other things that we used to do. But in general, this can be a thing where people can be… I can just spin up a completely new section from scratch. Again, it might be, but there would need to be something that's been set up previously.

Drew: Right. So you can put the content into the system, but you need something then to consume it in a targeted way to make use of it. Yeah, so that, as you say, the iteration speed could be slower. One thing that I sometimes see online is, people say, "Oh, if you use a headless CMS, it's terrible for SEO." But with my software engineering hat on, that sounds like a symptom of one possible implementation of using a headless CMS, and it's not inherent to the overall solution, is it? You could be merging this content into a static website offline and then publishing it, or when people take a purely client-side single-page app approach to using that content, that might have SEO implications, and that maybe is the sort of naive initial implementation that someone might go with.

Drew: But yes, it's funny how often that crops up, almost sort of one of these myths that drifts around, and people, without maybe fully understanding the implications, just repeat it. One thing that Contentful talks about in a lot of their materials is composable content. What does that mean? What are we talking about with composable content?

Alvin: It's the fancy new 2023 thing, isn't it? Yeah, just to come back on the SEO bit, I think, yeah, as you said, it's just, anything that Google consumes is, at the end of the day, an HTML tag. It's no different to the P tag, which you'll use to display whatever. So it's also up to you, the developer, to make sure that you create the OG tags and that your content is there so the engines can crawl it. So it's nothing… The headless provider will just give you an API, you can do whatever you want with it. To go back to the composable thing now, yeah, so a lot more people have started to move to it. You hear us talking about it, you hear some of our competitors talk about it, and Netlify is also doubling down with the acquisition of Gatsby, for example.

Alvin: The idea is to go headless CMS plus, right? So with headless you say, “Oh, I have this one API that takes care of all my content.” But now, what if you could plug other things to this API? What if you could say, “Oh, I want…” I’m just making stuff up here, but what if I want to connect my slack to it? What if we have a weather app or something like that? Any other types of dynamic data that we need to combine with our content to have this one API that gives us everything. And it’s the idea of, again, you’re composing what you need with your headless CMS. And that for us, that looks like an ecosystem of apps, meaning you can extend Contentful with different apps, which could be translation, it’s 2023, so it could be GPT, could be anything else. So that’s the idea. Your headless CMS also integrates with other data providers.

Drew: So it becomes like an aggregator of other content. So as well as having maybe a content editing team creating content, you might also be pulling stuff in from your Instagram feed.

Alvin: Yeah, exactly.

Drew: Or say, a dynamic feed from a third party provider and then making that all available under one API to all your different consumers. Okay, well that kind of makes sense. I think that makes sense.

Alvin: Especially with Instagram, we’ve all seen the horrible Instagram embeds or the Twitter ones. Twitter API is a thing of the past now but anyway, just to give you an example of, you have these horrible embeds, and what if you could get that data from the headless CMS as well and then render it statically? That makes a lot more sense.

Drew: Okay. So yes, it’s just an aggregation function on top of the standard as well as being the source of truth for your own content.

Alvin: Exactly.

Drew: Also then brings in other pieces of content. Now, I've personally always been interested in owning my own data where I can, and I'd usually pick a self-hosted solution for something rather than a service, given the choice. Although I have mellowed over the years. The trade-offs I make now are very different from what I would have made in the past. But with a headless CMS being API driven, it seems like you've got a bit more flexibility there as to where it's hosted. So you don't necessarily need the CMS and the website to live on the same server or in the same environment. You could separate those out. So, is there added complexity there or is that an opportunity for simplification? Have you any thoughts?

Alvin: Yeah, for sure. It depends, because everything is from that one API. Depending on your needs, that might get a lot of traffic, which will make self-hosting a problem. As you said, you've mellowed somewhat on self-hosting because it's become easier to just set up something in the cloud, whereas managing servers, has that necessarily got easier? Tech has changed, but it's still annoying.

Drew: It’s just got complicated in different directions.

Alvin: Right. Yeah. So there are self-hosted headless CMSs. We're not one of them because, again, we tend to target bigger clients that have, again, these needs for these APIs, and our CDN takes in billions of requests per month. So we're pretty well set up for high-traffic stuff. But yeah, there are solutions you can install, Strapi is one of them that is self-hosted. You can install it on your own server, and this will give you, as you said, a headless CMS. You'll own your content, you'll get the API. But the drawback with that is, obviously, if you get a ton of traffic, then it'll go to your server that you might not scale or you might not want to pay for it to scale yet. That's the one trade-off, but it's possible for sure.

Drew: And I guess you've got to manage it then if it's on your end, you've got to keep it updated, keep it running, keep it backed up. I guess the decisions you'd make for a small community website would be different from the ones you'd make if you were Ikea. Ikea probably isn't going to be running Strapi on a VPS. That's probably not a good solution for them in a lot of ways. So what are the things you should weigh up when picking a headless CMS solution? What are the things to be looking out for that are perhaps different from what we're used to evaluating for a traditional CMS?

Alvin: I think it depends… Well, as a developer, you'll know what you'll be coding so you can look at the API and what it looks like, whether you like it or not. How easy is it, does it support GraphQL? Everyone does these days, but stuff like that. Then I think it depends on the people on the team who are going to spend a lot of time in the CMS. As much as it's great for me if I like the API, if the people who are going to write in the CMS hate it, then it's probably not the right choice. So I definitely think you probably want to involve these people in the decision, right? Because they're going to be the ones spending time. For us, we want to make sure that we can retrieve everything we want from the API as developers, but you definitely want your blog editor to give you the green light and make sure it has everything they need.

Drew: Yeah, you mentioned the API becomes really important, and I've seen headless CMSs with REST APIs and with GraphQL, and then various solutions have SDKs that you can import into your project that give you a language-native way of interacting. Is there anything, from a development point of view, that would be useful to look out for when evaluating availability of SDKs or types of APIs?

Alvin: Yeah, for sure, if it's something you like. At the end of the day, the beauty of it is, it's still an API call. So any language under the sun will support that, hopefully. Yeah, so it's also up to you, right? Do you want a Python-native SDK where it's just like, okay, I'm typing three lines of code and I do client, get entry, get this ID, whatever, or get these entries that are of this content type, if you prefer that, great. But if you're the kind of person who's like, "Nope, I'm going to have complete control," it also goes back to what you said about owning your data. The problem with relying on an SDK is what happens when there's a security problem, versus if it's you and you're just using the bare-bones HTTP client in your language, there's less risk. So it also depends on the kind of project you're working on.

Drew: You talked about the user interface aspect, and it's got to be one of the big factors, isn't it? When you've got people entering content, creating content in a system and managing it, the user interface that's provided is a big factor there and how the data is… There are all sorts of different approaches, aren't there, in content management to how you manage data, whether it's just one big WYSIWYG block of junk, or whether things are broken down to a granular level for structured content. Presuming that most headless solutions still have some sort of user interface for editing content to get you started, does that reflect what you've seen in the marketplace?

Alvin: Yeah, everyone has a WYSIWYG. Some have varying degrees of support for Markdown, and again, the ecosystem of apps that I was talking about. So more and more players are having their own, and this helps also to extend it so it can help with this discussion of, if you're talking to a blog editor, it's like, "Ah, it's kind of there, but I really wish there was a field that could do X," and you could either extend it yourself or just look for another solution. But yeah, different teams have varying needs. And it could be small things, like for example, scheduling. Like, oh, I want to be able to have this campaign going, I want to make sure that from now, I can make sure a blog post goes on the Monday, another one goes on a Tuesday and another one goes on the Thursday. And if the interface for this is a nightmare versus something else that you might not need. It depends, right? It's always… But yeah, it's crucial. Absolutely.

Drew: And I guess, if you've got very specific needs, in theory, you could use a headless CMS purely as a content engine and have your own mechanisms for getting data in, and basically write your own interface for writing into that system as well. Is a write interface something that everyone supports? Or is that a feature to look out for when evaluating?

Alvin: I wouldn’t say… I can’t remember if there’s one in particular who doesn’t, but I know we definitely do because-

Drew: It’s a very broad question. With your extreme knowledge of a hundred percent of the marketplace, does every single one…

Alvin: Right. Yeah, I know we support it because, it's also, I think, with DevRel, you end up spending a lot more time in your product versus the others, you tend to have a good… And this is where it's different to sales, what we were saying earlier. When someone comes up to me and says, "Oh, what is the one feature that is different?" I always say, "Well, it depends what you need." What is the one reason I could choose Contentful? And this is where we very much differ from sales, which will have a list right there at the ready. And then we'll be like, "Oh, for sure you should choose us because A, B, C, D." And for us as developers, it's more like, "What do you need? What scale do you have? What do you like working on? How big is your team?" It's a different question. But as far as writing, having an API that works both ways, we definitely support it, and I would be surprised if others do not.

Drew: It becomes quite crucial, doesn’t it? Because very few projects comparatively start with nothing. Most people have got some sort of system in place before, and taking the data that you’ve already got and migrating it into a new system can be a major project, and a deal breaker for a lot of bigger use cases. You’ve got to be able to get data in. So having a CMS that has an API that you can write code and interface with whatever the previous system is, get that code into a decent shape, and then inject it into the headless system, that’s a major advantage, isn’t it?

Alvin: And you can think in terms of reproducibility as well, because migration is great, but it’s even better if you can say, “Oh, this is the exact script that I ran for migration as opposed to just this collection of random commands that I did on my machine.” And it’s like, oh, beta’s over now. It’s much better to say, “Oh, we have this very defined way of transforming this data from this shape to that shape.” And having an API helps you with this for sure.

Drew: Hopefully gone are the days when you have two browser windows open and copy and paste content from one form to another. I've certainly been there in the distant past. But yeah, phew, those days are behind us. Is there anything that has particularly caught your attention?

Alvin: I think the composable thing is, it's been going on for a few months now, but it is definitely a shift. Everyone is starting to think, oh, maybe there's more than just being the headless CMS solution. And obviously AI, which every market is talking about now. But it's also content, and especially written content, that is, right now at least, among the very first industries to be impacted by it. So how do you integrate it? How do you make sure… How do you account for things like attribution? These are discussions that are happening in the content space for sure, definitely. At least right now, it's the first industry that it's really attacking.

Drew: Yes. Written content and things like images, there's Photoshop, a new version of Photoshop has come out in the last couple of weeks that has Generative Fill in it, which is amazing. So it's a brave new world, isn't it? From a content point of view in particular, it's a brave new world. And you mentioned attribution, and that's a minefield as well, isn't it? Figuring out how all that works when content has been generated from a model trained on-

Alvin: We don’t know what.

Drew: Yeah, who knows what. So how can you attribute stuff? It’s going to be a very interesting time, going forward, figuring out how we do that. Is there anything else that, say I’m planning a project, I’m going to use a headless CMS. I’ve decided maybe I’m going to use Contentful, and I’m planning out this project. What should I be considering? What lies ahead of me? What should I be worrying about? What is there that I should be thinking about? On embarking on this?

Alvin: I think it's your content model. So it's the first thing to get right. We see sometimes people really over-complicating things and having content models for, if you think of an index page, having a separate content type for a carousel, and then there may be a river-type content. Do you know what I mean? In terms of where you have an image on the left and text on the right, and then it's followed by an image on the right and text on the left. So you can really over-complicate things when it comes to content models. You could think, "Oh, a river is very different to a carousel," for example. But then you can think, "Oh wait, no, is it just an image with text with it?" And then at the end of the day, oh yeah it is. So it's stuff like that where it's easy to over-engineer things, and then having content models that are homepage carousel one, and then about page carousel two, and it's like, well, no, these aren't the same thing.

Alvin: So it’s trying to think in the abstract way, even if it might be more code originally because you’re building more flexibility into each of the content models, the content types as well. But in the long run, that could save you.

Drew: So it’s about, I guess, thinking of what content you’ve got and what different types it falls into?

Alvin: And what you might have in the future, which is complicated for sure because you can never know. But trying to build in flexibility, in terms of trying to think outside the box: for example, what if there was a caption in this river? What if one of the images was actually a video? Little tweaks like this will save you a lot of time in the long run, because it will prevent you from having to rethink everything later.

Drew: From a development point of view, from a developer point of view, one thing that always gives me reassurance in my work is having a good test suite that you can run to make sure that things aren’t broken before you deploy.

Alvin: Yeah.

Drew: Is there anything in terms of testing around content that we could make use of?

Alvin: For sure. It's an API too. So you can think of using something like Storybook, where you have your different components, you could say, right, so, as we said, this is a generic component. What happens if there's three instances of it? What happens if there's five? What happens again, if one of them is a video? It's this whole meme, the QA engineer walks into a bar, orders zero beers, orders a thousand beers, stuff like that.

Drew: What is an elephant?

Alvin: Right. Yeah, exactly. And that type of thing. You can build into test suite and see what happens.

Drew: Yes, that’s quite interesting. I guess, again, it’s the decoupling of things that makes that really easy. And I guess then, if you’re running that Storybook, you can then run it through some visual regression testing and spot breakages or what have you. Yeah, that’s really fascinating.

Alvin: You can think of human errors too, right? Sometimes when you set up everything, you're like, "Oh yeah, of course they're going to put a date on the article," but then sometimes you forget or sometimes there's something else and you realize, oh, the date you published is different than the date you had in the article, whatever. Or one of the author's Twitter links doesn't work. Stuff like that. Writing your test suite is a good time to think about these edge cases, right? Sorry, what was your question?

Drew: Yes, no, I was just jabbering on. In fact, I remember years ago writing a sort of system for blogging, and one of the absolutely required fields was a title for the blog post. When I created it, I never even questioned, would a title be optional? A title is fundamental. Every blog post has a title.

Alvin: H1.

Drew: Right. And then you use that to generate the nice slug for the URL and all these sorts of things. And in listings, it’s the title that appears. And then Tumblr came out, and you could create all sorts of posts on that, and you didn’t even need a date or a title.

Alvin: Oh yeah.

Drew: What madness is this? But it’s like you’re saying, it’s thinking outside of the box and thinking about the different types of content that you might have. And it turns out that fundamental assumption that I made early on in that system, that we absolutely a hundred percent always had a title, became a limitation of what we could do with the system, because then when content came along that didn’t have a title, I was stuffed.

Alvin: Yeah, Instagram is another example. Or as we said, the great thing about headless is that it can go anywhere, but if you’re also planning stuff that is written in Contentful, but that will go on your website, again, your catalog, but also on Instagram, we have this great promotion this week for half price of whatever. On Instagram, that might just be the image and nothing else, and the description or something else. Yeah. Or you want to make sure, definitely don’t pull the hashtag into the blog, stuff like that.

Drew: Yes. Yeah, and I guess just thinking about Instagram and using content in that way, having this API with your content opens up all sorts of possibilities for generating images. You could pull the content and render text onto an image and post it to Instagram and do all that sort of things, that imagining doing that with a traditional CMS would just be… It’d be a flight of fancy, it’d be difficult. You’d be fighting against the system rather than working with it.

Alvin: Yeah. And this is where other features, as I said, scheduling for example, can be very important, because you can say, I want to make sure that whenever this campaign launches, we also have the Instagram stuff going out, which are, again, these images generated from the new publish in the CMS, and this is where the CMS itself can have a scheduler, or you can use Zapier to capture it and run a script that will then generate the image. You can do all of this in a cron job somewhere. It depends, but this is where these features become important.

Drew: Your content then just sits as one piece in a big chain of loosely joined elements that are delivering your various digital products or what have you onto your customers.

Alvin: And the composable stuff is about this being less loose, is to make sure you have some kind of control that's defined, right? That's not like, oh, there's this cron job here and this app here, and it's all duct tape.

Drew: Yes. So yeah, it gives you a level of control and potentially then, quality control or moderation or any of those steps that you might want to put in between rather than just… Because it would be possible to federate content in a front-end JavaScript app, you could do that, but then you’re missing that potential gate-keeping or any of those steps that you might want to put in, that having a platform that does it for you or enables that as a feature. Sounds super useful. So we’ve been learning all about headless CMSs today. What have you been learning about lately, Alvin?

Alvin: I was on Web Rush last week, so I did a lot of work with Astro, the different web framework. It's been out for a bit, but since 2.0, I feel like they've really stepped on the gas and started releasing so many things. So I've really been looking into it, and it's great. I have a blog post coming out. But yeah, I've been looking at the docs and learning a lot about all the new features, references, which are amazing. If you've had to deal with Markdown before and the whole type safety that they've added to Markdown, it's really interesting. And the fact that they support all the frameworks is just even better. So yeah, I've been learning a lot about Astro recently.

Drew: That's great. I think we did an episode on Astro, maybe a couple of years ago now, so perhaps it's time that the Smashing Podcast revisited it.

Alvin: Yeah, there's a lot of new stuff that came out.

Drew: That’s amazing. If you, dear Listener, would like to hear more from Alvin, you can find his personal website with links to his various projects and social profiles at alvin.codes. Thanks for joining us today, Alvin. Do you have any parting words?

Alvin: No, thank you for having me. I’ve been a reader of Smashing Mag for a long time. My first article, I think, came out last year, which was also a great honor. And yeah, thank you so much for having me.

Categories: Others Tags:

11 Best Video Editing Apps in 2023

August 1st, 2023 No comments

Ever feel overwhelmed by the sheer number of video editing options available to you? You’re not alone: beginners demand simplicity, professionals crave advanced features, and businesses need budget-friendly apps.

Categories: Designing, Others Tags:

CSS And Accessibility: Inclusion Through User Choice

August 1st, 2023 No comments

We make a series of choices every day. Get up early to work out or hit the snooze button? Double foam mocha latte or decaf green tea? Tabs or spaces? Our choices, even the seemingly insignificant ones, shape our identities and influence our perspectives on the world. In today’s modern landscape, we have come to expect a broad range of choices, regardless of the products or services we seek. However, this has not always been the case.

For example, there was a time when the world had only one font family. The first known typeface, a variant of Blackletter, graced Johannes Gutenberg’s pioneering printing press in 1440. The first set of publicly-available GUI colors shipped with the 10th version of the X Window System consisted of 69 primary shades and 138 entries to account for various color variations (e.g., “dark red”). In September 1995, a Netscape programmer, Brendan Eich, introduced “Mocha,” a scripting language that would later be renamed LiveScript and eventually JavaScript.

Fast forward to the present day, and we now have access to over 650,000 web fonts, a hexadecimal system capable of representing 16,777,216 colors, and over 100 public-facing JavaScript frameworks and libraries to choose from. While this is great news for professionals designing and building user interfaces, what choices are we giving actual users? Shouldn’t they have a say in their experience?

CSS Media Features

While designers and developers may have some insights into user needs, it is very challenging to understand the actual user preferences of 7.8 billion people at any given time. Supporting the needs of individuals with disabilities and assistive technology adds a layer of complexity to an already complex situation. Nonetheless, designers and developers are responsible for addressing these user needs as best we can by providing accessible choices. One promising solution is user-focused CSS media features that allow us to customize the user experience and cater to individual preferences.

Media Features For Color

Let’s first focus on CSS media features for color. Color plays a vital role in design, impacting how we perceive brands. Studies suggest that color alone can influence up to 90% of snap judgments about products. Considering the large number of people worldwide with visual deficiencies such as color blindness and low vision, developers and designers have a significant opportunity to improve accessibility with this element alone.

@prefers-color-scheme

The @prefers-color-scheme CSS media feature helps identify whether users prefer light or dark color themes. Users can indicate their preferences through settings in the operating system or user agent.

There are two values for this CSS media feature: light and dark. Typically, the default theme presented to users is the light version, even if the user expresses no preference. However, the opposite can also be true, and websites or apps default to a dark theme and switch to a light theme using the @media (prefers-color-scheme: light) media feature and corresponding code.

When users opt for dark mode, they are signaling a preference for a dark-themed page. Inside @media (prefers-color-scheme: dark), theme elements such as text, links, and buttons can be adjusted so they stand out against darker background colors.

In the past, there was also a no-preference value to indicate when users had no theme preference. However, user agents now treat light themes as the default, rendering the no-preference value obsolete.

@media (prefers-color-scheme: dark) {
  /* Dark theme: darken the page background and lighten the character pseudo-elements */
  body {
    background-color: #282828;
  }

  .without [data-word="without"] .char:before,
  .without [data-word="without"] .char:after {
    color: #fff;
  }
}
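
If your site takes the opposite approach and treats dark as the default, the same pattern works in reverse. The following is a minimal sketch, not part of the demo above, that lightens the page only when the user has expressed a preference for a light theme; the selectors simply mirror the ones used in the demo:

@media (prefers-color-scheme: light) {
  body {
    background-color: #f5f5f5;
  }

  .without [data-word="without"] .char:before,
  .without [data-word="without"] .char:after {
    color: #282828;
  }
}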

The @prefers-color-scheme media feature is one of the most widely used CSS media features today, with 94% browser support. It is so popular that additional values may be introduced in the future to express more specific preferences or color schemes, such as sepia or grayscale.

Switching from the default light mode to dark mode is relatively straightforward. Consult the user setting guides for Mac and Windows operating systems to learn more (select the relevant hardware and operating system version), then navigate to a browser that supports this CSS media feature.

Pro-tip: A more sophisticated way to demo user preference settings is Chrome’s Rendering tab coupled with its CSS media feature emulator, which lets you switch between light and dark modes and experience @prefers-color-scheme as users do. This is convenient for live demos where you need to show user preference changes quickly or emulate media features not fully supported by your OS or browser.

@forced-colors

The @forced-colors CSS media feature detects whether the user agent has enabled a forced colors mode, which imposes a limited, user-chosen color palette on the page. This newer media feature provides an approach that is not tied to Windows devices, and it is expected to replace the proprietary detection used for Windows High Contrast Mode in the future.

There are two values for the forced-colors media feature: none and active. The @media (forced-colors: none) value indicates that the forced colors mode is inactive and uses the default color scheme, while the @media (forced-colors: active) value means that the forced colors mode is active and the user agent enforces the user-selected limited color palette.

It’s worth noting that enabling @forced-colors mode does not necessarily imply a preference for higher contrast. The color adjustments align with the user’s choice, which may not strictly fit into the low or high-contrast categories.

Note: There are some properties affected by the forced-color mode that you need to be aware of when designing and testing your forced-colors theme. Check out Eric Bailey’s article “Windows High Contrast Mode, Forced Colors Mode And CSS Custom Properties” for more information about this media feature and its integration with CSS custom properties.

@media (forced-colors: active) {
  body {
    background-color: #fcba03;
  }

  .without [data-word="without"] .char:before,
  .without [data-word="without"] .char:after {
    color: #ac1663;
  }

  .without {
    color: #004a72;
  }
}
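
Rather than hard-coding hex values, you can also lean on CSS system color keywords inside the query so the page adopts whatever palette the user has chosen instead of fighting it. Here is a minimal sketch built around a hypothetical .card component, not part of the demo above:

@media (forced-colors: active) {
  .card {
    /* System color keywords resolve to the user's forced palette */
    background-color: Canvas;
    color: CanvasText;
    border: 2px solid ButtonText;
  }

  .card a {
    color: LinkText;
  }
}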

The @forced-colors CSS media feature is currently supported by 31% of the most popular browsers, including desktop versions of Chrome, Edge, and Firefox. Although the browser support for this feature is increasing, not all operating systems currently offer a setting to activate the forced colors mode. The Windows operating system is the only exception, as it provides the necessary functionality for users to create customized themes that override the default ones by utilizing the Windows High Contrast mode.

If you are using a non-Windows machine, you can emulate the behavior of this media feature by following the steps mentioned earlier in the @prefers-color-scheme section using Chrome’s Rendering tab and emulator, but with a focus on emulating @forced-colors instead.

@inverted-colors

The @inverted-colors CSS media feature determines whether to show the content in its standard colors or if it reverses the colors.

Two values are available for the @inverted-colors media feature: none and inverted. The @media (inverted-colors: none) value indicates that colors are displayed normally and the default color scheme is used, while the @media (inverted-colors: inverted) value indicates that all pixels within the displayed area have been inverted, so you can serve a theme designed with that inversion in mind.

When writing code for the @inverted-colors CSS media feature, one option is to author colors as the inverse of what you want users to see, so that they render as intended once the user’s inversion setting is applied.

For example, say you want an element’s background to be #e87b2d, a tangerine orange. In the theme code, you would write its inverse, #1784d2, a medium blue. Because the code carries the inverse value, users who enable the inverted colors setting see the intended tangerine orange rather than its opposite.
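
As a tiny illustration of that technique, here is a small sketch using a hypothetical .callout selector rather than the demo’s markup:

@media (inverted-colors: inverted) {
  .callout {
    /* Authored as #1784d2 so the OS inversion displays the intended #e87b2d */
    background-color: #1784d2;
  }
}

The demo theme below can be read the same way: each authored value is the inverse of the color that ends up on screen.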

@media (inverted-colors: inverted) {
  body {
    background-color: #99cc66; /* renders as #663399 once inverted */
  }

  .without [data-word="without"] .char:before,
  .without [data-word="without"] .char:after {
    color: #ee1166; /* renders as #11ee99 once inverted */
  }

  .without {
    color: #111111; /* renders as #eeeeee once inverted */
  }
}

Current browser support for @inverted-colors sits at 20%, limited to Safari on desktop and iOS. While Chrome’s Rendering tab and emulator do not work for this particular media feature, you can emulate @inverted-colors using Firefox (version 114 or newer):

  1. Open a new tab in Firefox and type or paste about:config in the address bar, and press Enter/Return. Click the button acknowledging that you will be careful.
  2. In the search box, enter layout.css.inverted-colors and wait for the list to be filtered.
  3. Use the toggle button to switch the preference from false to true.
  4. Enable the inverted colors setting in your operating system and navigate to a webpage or code example with the @inverted-colors theme to observe the inverted effect.

The setting for the @inverted-colors media feature is available on Mac and Windows operating systems.

Media Features For Contrast

Next, let’s talk about CSS media features related to contrast. Contrast plays a crucial role in conveying visual information to users, working hand in hand with color. When proper levels of color contrast are not implemented, it becomes difficult to distinguish essential elements such as text, icons, and important graphics. As a result, the design can become inaccessible not only to the 46 million people worldwide with low vision but also to older adults, individuals using monochrome displays, or those in specific situations like low lighting in a room.

@prefers-contrast

The @prefers-contrast CSS media feature detects whether the user prefers higher or lower contrast on a page. You can then use that information to make appropriate adjustments, such as modifying the contrast ratio between adjacent colors or altering the visual prominence of elements, for instance, by adjusting their borders, to better suit the user’s contrast requirements.

There are four values for this CSS media feature: no-preference, less, more, and custom. The @media (prefers-contrast: no-preference) value indicates that the user has no preference (or did not choose one since it is the default setting), and the @media (prefers-contrast: less) value indicates a user’s preference for less contrast. Conversely, the @media (prefers-contrast: more) value indicates a user’s preference for stronger contrast.

The @media (prefers-contrast: custom) value is a bit more complex as it allows users to use a custom set of colors — which could be specific to contrast — or choose a palette. For example, a user could select a theme composed entirely of shades of blue, primary colors, or even a rainbow theme — anything they choose.

Note: When a user selects the custom contrast setting, it aligns with the color palette they defined for forced-colors: active, so be sure to account for that in your code.

@media (prefers-contrast: more) {
  .title2 {
    color: var(--clr-6);
  }

  .aurora2__item:nth-of-type(1),
  .aurora2__item:nth-of-type(2),
  .aurora2__item:nth-of-type(3),
  .aurora2__item:nth-of-type(4) {
    background-color: var(--clr-6);
  }
}

@media (prefers-contrast: less) {
  .title {
    color: var(--clr-5);
  }

  .aurora__item:nth-of-type(1),
  .aurora__item:nth-of-type(2),
  .aurora__item:nth-of-type(3),
  .aurora__item:nth-of-type(4) {
    background-color: var(--clr-5);
  }
}

@media (prefers-contrast: custom) {
  .aurora2__item:nth-of-type(1) {
    background-color: var(--clr-1);
  }
  .aurora2__item:nth-of-type(2) {
    background-color: var(--clr-2);
  }
  .aurora2__item:nth-of-type(3) {
    background-color: var(--clr-3);
  }
  .aurora2__item:nth-of-type(4) {
    background-color: var(--clr-4);
  }
}

Currently, 91% of the most widely used browsers offer support for the @prefers-contrast media feature. However, the majority of this support is focused on enhancing contrast rather than reducing it or allowing for personalized contrast themes.

To effectively demo and test all the different contrast options for this CSS media feature, use the Chrome Rendering tab and emulator as described earlier, but with a specific emphasis on emulating the @prefers-contrast media feature this time.

@prefers-reduced-transparency

The @prefers-reduced-transparency CSS media feature determines if the user has requested the system to use fewer transparent or translucent layer effects.

It takes one of two possible values: no-preference and reduce. The @media (prefers-reduced-transparency: no-preference) value indicates that the user has not specified any preference for the system (this is also the default setting). On the other hand, the @media (prefers-reduced-transparency: reduce) value indicates that the user has informed the system about their preference for an interface that minimizes the application of transparent or translucent layer effects.

@media (prefers-reduced-transparency: reduce) {
  .title,
  .title2 {
    opacity: 0.7;
  }
}
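
Another common use case for this media feature is replacing a translucent, frosted-glass surface with a solid one. The following is a minimal sketch built around a hypothetical .panel component rather than the demo’s markup:

.panel {
  background-color: rgb(255 255 255 / 0.6);
  backdrop-filter: blur(8px);
}

@media (prefers-reduced-transparency: reduce) {
  .panel {
    /* Remove the translucency and blur for users who prefer reduced transparency */
    background-color: rgb(255 255 255 / 1);
    backdrop-filter: none;
  }
}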

The current browser support for @prefers-reduced-transparency stands at 0%. This CSS media feature is highly experimental and should not be utilized in production code at the time I’m writing this article.

However, if you wish to emulate the @prefers-reduced-transparency media feature behavior, you can follow these steps using Firefox (version 113 or newer).

  1. Open a new tab in Firefox and type or paste about:config in the address bar, and press Enter/Return. Click the button acknowledging that you will be careful.
  2. In the search box, type or paste layout.css.prefers-reduced-transparency and wait for the list to be filtered.
  3. Use the toggle button to switch the preference from the default state of false to true.
  4. Adjust your operating system’s transparency settings and navigate to a webpage or code example with the @prefers-reduced-transparency theme to observe the effect of reduced transparency.

Media Features For Motion

Lastly, let’s turn our focus to motion. Whether it involves videos, GIFs, or SVGs, movement can enrich our online experiences. However, this media type can also adversely affect many individuals. People with vestibular disabilities, seizure disorders, and migraine disorders can benefit from accessible media. CSS media features for motion allow us to incorporate both dynamic movement and static states for elements, enabling us to have the best of both worlds.

@prefers-reduced-motion

Using the @prefers-reduced-motion CSS media feature helps determine whether the user has requested the system to minimize the usage of non-essential motion.

This CSS media feature accepts one of two values: no-preference and reduce. The @media (prefers-reduced-motion: no-preference) value indicates that the user has not specified any preference for the system (this is also the default setting). Conversely, the @media (prefers-reduced-motion: reduce) value indicates that the user has informed the system about their preference for an interface that eliminates or substitutes motion-based animations that may cause discomfort or serve as distractions for them.

@media (prefers-reduced-motion: reduce) {
  /* Disable the rainbow background animation and swap in an alternative, reduced-motion animation sequence for the text */
  .bg-rainbow {
    animation: none;
  }

  .perfection {
    .word {
      .char {
        animation: slide-down 5s cubic-bezier(0.75, 0, 0.25, 1) both;
        animation-delay: calc(#{$delay} + (0.5s * var(--word-index)));
      }
    }

    [data-word="perfection"] {
      animation: slide-over 4.5s cubic-bezier(0.5, 0, 0.25, 1) both;
      animation-delay: $delay;

      .char {
        animation: none;
        visibility: hidden;
      }

      .char:before,
      .char:after {
        animation: split-in 4.5s cubic-bezier(0.75, 0, 0.25, 1) both alternate;
        animation-delay: calc(
          3s + -0.2s * (var(--char-total) - var(--char-index))
        );
      }
    }
  }
}

You can compare the difference between the default and reduced-motion experiences by viewing the live demo.
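
If you would rather apply a blanket rule than hand-tune individual animations as the demo does, one widely used pattern is to neutralize all animations and transitions in a single block. A rough sketch:

@media (prefers-reduced-motion: reduce) {
  *,
  *::before,
  *::after {
    /* Near-zero durations end animations almost instantly without breaking
       scripts that listen for animation or transition events */
    animation-duration: 0.01ms !important;
    animation-iteration-count: 1 !important;
    transition-duration: 0.01ms !important;
    scroll-behavior: auto !important;
  }
}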

@prefers-reduced-data

Last but certainly not least, let’s examine the @prefers-reduced-data CSS media feature. This media feature determines whether the user prefers to receive alternate content that consumes less data when rendering the page.

This CSS media feature has two possible values: no-preference and reduce. The @media (prefers-reduced-data: no-preference) value indicates that the user has not specified any preference for the system (which is also the default setting). On the other hand, the @media (prefers-reduced-data: reduce) value indicates that the user has expressed a preference for lightweight alternate content.

Unlike other CSS media features, a user’s preference for the @prefers-reduced-data media feature could vary. It may be a system-wide setting exposed by the operating system or settings controlled by the user agent. In the case of the user agent, they may determine this value based on the same user or system preference used for setting the Save-Data HTTP request header.

Note that the Save-Data network client request header is still considered experimental technology, yet it already enjoys roughly 72% browser support; the notable exceptions are Safari and Firefox on both desktop and mobile.

@media (prefers-reduced-data: reduce) {
  .bg-rainbow {
    animation: none;
  }

  .perfection {
    .word {
      .char {
        animation: none;
      }
    }

    [data-word="perfection"] {
      animation: none;

      .char {
        animation: none;
        visibility: hidden;
      }

      .char:before,
      .char:after {
        animation: none;
      }
    }
  }
}
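
Beyond pausing animations, the same query can gate heavier assets such as decorative images or web fonts. The following is a minimal sketch with hypothetical selectors and file names:

/* Lightweight default: no decorative hero image, system fonts only */
.hero {
  background-image: none;
}

@media (prefers-reduced-data: no-preference) {
  .hero {
    background-image: url("hero-large.webp");
  }

  @font-face {
    font-family: "Fancy Display";
    src: url("fancy-display.woff2") format("woff2");
    font-display: swap;
  }
}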

Similar to @prefers-reduced-transparency, the @prefers-reduced-data CSS media feature is highly experimental and should not be utilized in production code at this time as the current browser support for it stands at 0%.

However, if you wish to emulate the @prefers-reduced-data behavior, you can follow these steps using Chrome (version 85 or newer).

  1. Open a new tab in Chrome and type or paste chrome://flags in the address bar and press Enter/Return.
  2. In the search box, type or paste experimental-web-platform-features and wait for the list to be filtered.
  3. Use the dropdown option to switch the preference from the default state of disabled to enabled.
  4. Use the Chrome Rendering tab and choose the appropriate CSS media feature to emulate.

Note that you can also enable the @prefers-reduced-data feature in Edge, Opera, and Chrome Android (all behind the same experimental-web-platform-features flag), but it is less clear how you would emulate the media feature without the rendering tab and emulator found in the desktop version of Chrome.

Amplifying Inclusion Through User Choice

In the tech world, accessibility is often criticized as being at odds with aesthetics and advanced features. That perception can be changed: by leveraging user-focused CSS media features that address color, contrast, and motion, it is possible to deliver stunning design and innovative functionality while still prioritizing accessibility.

Today, by incorporating every option of the CSS media features that browsers already support broadly (above 90%), you can offer users 16 combinations of options (for example, two color-scheme values × four contrast values × two motion values). Once browsers and operating systems implement and support the more experimental media features as well, that space expands to a staggering 256 combinations. That many possible options truly amplifies the potential impact designers and developers can have on user experiences.

As professionals within the technology industry, our goal should be to ensure that digital products are accessible to all individuals. By offering users the ability to personalize their experience, we can include an array of remarkable features in a responsible manner. Our job is to provide options and let people choose their own adventure.

Categories: Others Tags: