In the midst of a competitive digital industry, marketers strive to come up with progressive marketing strategies: strategies that can bring more customers to a site, increase conversion rates, and push the site toward the top of the rankings.
What can be better than an animated video when you have big goals to achieve? An animated video can help you bring about the change you want and generate the performance you desire. It can let you take a leap toward success for a modest investment. All that is needed is a bit of determination to get a high-quality video created by an animated video company.
Engaging
An animated video is a package full of creativity and appeal. It can deliver gems of information through simple techniques. An animated video sketches out a story that holds the attention of viewers. If you want to succeed in the online world, you have to make sure that your video is captivating enough to entertain viewers and sustain their attention.
Spread Brand Awareness
An animated video is a perfect asset for increasing your brand awareness. The video can incorporate rich brand messages along with your core values. With the appropriate use of colors and aesthetics reflecting your level of professionalism, you can accelerate your brand’s reach. The content in your video will be able to trigger emotions and create a strong bond with your customers.
Versatile
One of the biggest advantages of incorporating animated videos into your marketing campaign is approaching customers with a touch of versatility. You do not have to be a slave to any template; you can create a video from scratch and amuse your viewers. You can add your own colors, features, and graphics to help capture attention.
Boost Online Visibility
Because animated videos deliver information quickly and effectively, they entertain viewers expertly. As viewers become interested in knowing more about a site, they show a greater likelihood of converting, and even if they do not, they find the site helpful. As a result, search engines rank such sites among the leading ones. So, just by having a simple animated video, you can gain enhanced online visibility and reach customers you never thought you would be dealing with.
Mobile Friendly
As a huge proportion of online users prefer streaming and browsing on their smartphones, the need for mobile-responsive sites grows ever more urgent. You have to have a highly responsive site, and responsive branding assets, if you want to maximize your customer reach. According to research, around 65% of customers view Facebook posts on their phones, and similarly, around 50% of YouTube video streaming happens on phones.
Interactive
Animated videos have the power to interact with viewers. That is something that is hard to find elsewhere. Interaction is important if you want to amuse your customers and connect them with your brand. To build a strong connection and develop trust, it is important to present yourself as human. Appealing visuals combined with interactive animation become a powerful draw for potential customers, who cannot help falling in love with your company and services.
It Humanizes
You can humanize your brand by using an animated video. The video gives a face and voice to your firm and helps it build a stable clientele. You can efficiently draw substantial traffic to your site and address the common issues that your customers encounter.
Accelerates Sales
Why does a customer refrain from trusting a new business or brand? The prime reason is unfamiliarity: the customer does not know what the brand does well or the qualities of its services. As a result, it can take months to spread brand awareness and convince customers to place their trust in a company. With an animated video, however, what normally takes months can be achieved within days. You can educate your viewers and tell them in detail what your company is about and what services you offer.
In a Nutshell
When making an animated video, the first thing to consider is staying focused on customers’ needs and demands. So, stay conscious of them, and follow the inspiration and guidelines offered by experts to make your creation dramatically more effective.
I admit I’m quite intrigued by frameworks that allow you to write apps with web technologies and then do magic to turn them into native apps for you. There are loads of players here. You’ve got NativeScript, Cordova, PhoneGap, Tabris, React Native, and Flutter. For desktop apps, we’ve got Electron.
What’s interesting now is to see what’s important to these frameworks by homing in on their focus. Hummingbird is Flutter for the web. (There is a fun series on Flutter over on the Bendworks blog in addition to a post we published earlier this year.) The idea is that you get super high performance, thanks to the framework, and you’ve theoretically built one app that runs both on the web and natively. I don’t know of any real success stories I can point to, but it does seem like an awesome possibility.
Working with data in React is relatively easy because React is designed to handle data as state. The hassle begins when the amount of data you need to consume becomes massive. For example, say you have to handle a dataset of between 500 and 1,000 records. This can result in massive loads and lead to performance problems. Well, we’re going to look at how we can make use of virtualized lists in React to seamlessly render a long list of data in your application.
We’re going to use the React Virtualized component to get what we need. It will allow us to take large sets of data, process them on the fly, and render them with little-to-no jank.
The setup
React Virtualized already has a detailed set of instructions to get it up and running, so please check out the repo to get started.
We’re going to want data to work with, so we will set up a function which uses faker to create a large data set.
import faker from 'faker';

function createRecord(count) {
  let records = [];
  for (let i = 0; i < count; i++) {
    records.push({
      username: faker.internet.userName(),
      email: faker.internet.email()
    });
  }
  return records;
}
Next, we will pass it the number of data records we want to create, like so:
const records = createRecord(1000);
Alright, now we have what we need to work on rendering a list of those records!
Creating a virtualized list
Here’s the list we want to create, sans styling. We could make use of the few presentational styles that the library includes by importing the included CSS file, but we’re going to leave that out in this post.
You might wonder what the heck React Virtualized is doing behind the scenes to make that happen. Turns out it’s a bunch of crazy and cool sizing, positioning, transforms and transitions that allow the records to scroll in and out of view. The data is already there and rendered. React Virtualized creates a window frame that allows records to slide in and out of view as the user scrolls through it.
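To make that window idea concrete, here is a minimal, framework-free sketch of the arithmetic a virtualized list performs on each scroll event. The function and parameter names here are illustrative, not React Virtualized’s actual internals:

```javascript
// Given a fixed row height and the scroll position, compute which rows
// need DOM nodes. Everything outside the range stays unrendered.
function getVisibleRange(scrollTop, viewportHeight, rowHeight, rowCount, overscan = 3) {
  const first = Math.max(0, Math.floor(scrollTop / rowHeight) - overscan);
  const visible = Math.ceil(viewportHeight / rowHeight);
  const last = Math.min(rowCount - 1, first + visible + 2 * overscan);
  return { first, last };
}

// 1,000 rows of 50px each in a 300px viewport scrolled to 10,000px:
// only a dozen or so rows around index 200 are actually rendered.
console.log(getVisibleRange(10000, 300, 50, 1000)); // { first: 197, last: 209 }
```

Only the rows in the returned range get DOM nodes; absolutely positioning them inside one tall container produces the illusion of a fully rendered list.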
To render a virtualized list in React Virtualized, we make use of its List component, which uses a Grid component internally to render the list.
First, we start by setting up rowRenderer, which is responsible for displaying a single row and sets up an index that assigns an ID to each record.
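A rowRenderer for our records might look like the following sketch. The div markup is ours; the destructured parameter names come from React Virtualized’s List API:

```jsx
const rowRenderer = ({ index, key, style }) => (
  // `style` must be applied so React Virtualized can position the row.
  <div key={key} style={style}>
    <div>{records[index].username}</div>
    <div>{records[index].email}</div>
  </div>
);
```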
As you can see, this returns a single div node that contains two additional divs: one for the username and another for the email. You know, a common list pattern to display users.
rowRenderer accepts several parameters. Here’s what they are and what each one does:
index: The numeric ID of a record.
isScrolling: Indicates if the scrolling is occurring in the List component.
isVisible: Determines if a row is visible or out of view.
key: The record’s position in the array.
parent: Defines whether the list is a parent or a child of another list.
style: A style object to position the row.
Now that we know more about the rowRenderer function, let’s put it to use in the List component:
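A minimal sketch of that usage follows; the width, height, and rowHeight values are arbitrary examples:

```jsx
import { List } from 'react-virtualized';

<List
  width={500}
  height={300}
  rowCount={records.length}
  rowHeight={40}
  rowRenderer={rowRenderer}
  overscanRowCount={5}
/>
```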
You may have noticed a few new parameters. Here’s what they are:
rowCount: The number of rows in the list, used to calculate the list’s length.
width: The width of the list.
height: The height of the list.
rowHeight: This can be a number or a function that returns a row height given its index.
rowRenderer: This is responsible for rendering each row. The data is not passed to the list directly; instead, we pass the rowRenderer function that we created earlier.
overscanRowCount: This is used to render additional rows in the direction the user scrolls. It reduces the chances of the user scrolling faster than the virtualized content is rendered.
At the end, your code should look something like this:
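Putting the pieces together, a complete component might look like the following sketch. The dimensions are arbitrary, and faker generates the sample data as before:

```jsx
import React from 'react';
import { List } from 'react-virtualized';
import faker from 'faker';

function createRecord(count) {
  let records = [];
  for (let i = 0; i < count; i++) {
    records.push({
      username: faker.internet.userName(),
      email: faker.internet.email()
    });
  }
  return records;
}

const records = createRecord(1000);

const rowRenderer = ({ index, key, style }) => (
  <div key={key} style={style}>
    <div>{records[index].username}</div>
    <div>{records[index].email}</div>
  </div>
);

const App = () => (
  <List
    width={500}
    height={300}
    rowCount={records.length}
    rowHeight={40}
    rowRenderer={rowRenderer}
    overscanRowCount={5}
  />
);

export default App;
```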
According to the documentation, a cell measurer is a higher-order component that is used to temporarily render a list. It’s not yet visible to the user at this point, but the data is held and ready to display.
Why should you care about this? The popular use case is a situation where the value of your rowHeight is dynamic. React Virtualized can measure the height of a row at render time and then cache that height, so it no longer needs to be calculated as data scrolls in and out of view: the row is always the right height, no matter the content it contains!
First, we create our cache, which can be done in our component’s constructor using CellMeasurerCache:
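A sketch of that constructor follows. fixedWidth and defaultHeight are real CellMeasurerCache options; the defaultHeight value here is just an estimate used before a row has been measured:

```jsx
import { CellMeasurerCache } from 'react-virtualized';

constructor(props) {
  super(props);
  this.cache = new CellMeasurerCache({
    fixedWidth: true,   // only the row heights vary
    defaultHeight: 40   // estimate used before a row has been measured
  });
}
```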
The value passed to deferredMeasurementCache will be used to temporarily render the data; then, as the calculated value for rowHeight comes in, additional rows will flow in as if they were always there.
Next, though, we will make use of React Virtualized’s CellMeasurer component inside our rowRenderer function instead of the div we initially set up as a placeholder:
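A sketch of the updated rowRenderer follows. cache, columnIndex, parent, and rowIndex are the props CellMeasurer expects; remember to also pass deferredMeasurementCache={this.cache} and rowHeight={this.cache.rowHeight} to the List:

```jsx
import { CellMeasurer } from 'react-virtualized';

rowRenderer = ({ index, key, parent, style }) => (
  <CellMeasurer
    cache={this.cache}
    columnIndex={0}
    key={key}
    parent={parent}
    rowIndex={index}
  >
    <div style={style}>
      <div>{records[index].username}</div>
      <div>{records[index].email}</div>
    </div>
  </CellMeasurer>
);
```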
Now the data is fetched, cached and ready to display in the virtual window at will!
Virtualized table
Yeah, so the main point of this post is to cover lists, but what if we actually want to render data to a table instead? React Virtualized has you covered on that front, too. In this case, we will make use of Table and Column components that come baked into React Virtualized.
Here’s how we would put those components to use in our primary App component:
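A sketch of that App component follows; the dimensions and column widths are arbitrary examples:

```jsx
import { Table, Column } from 'react-virtualized';

const App = () => (
  <Table
    width={500}
    height={300}
    headerHeight={30}
    rowHeight={40}
    rowCount={records.length}
    rowGetter={({ index }) => records[index]}
  >
    <Column label="Username" dataKey="username" width={250} />
    <Column label="Email" dataKey="email" width={250} />
  </Table>
);
```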
The Table component accepts the following parameters:
width: The width of the table.
height: The height of the table.
headerHeight: The table header height.
rowHeight: The height of a row given its index.
rowCount: This is the initial number of rows we want in the table. It’s the same as the way we defined the number of records we wanted to start with in the List component example.
rowGetter: This returns the data of a specific row by its index.
If you take a look at the Column component, you will notice the dataKey parameter. It tells each column which field of the row data to display, acting as a unique identifier for that data. Remember that in the function where we create our random data, we use two keys: username and email. This is why one column’s dataKey is set to username and the other’s to email.
In conclusion
Hopefully, this walkthrough gives you a good idea of what React Virtualized is capable of doing, how it can make rendering large data sets into lists and tables super fast, and how to put it to use in a project.
Plus, the package is highly maintained. In fact, you can join the Slack group to keep up with the project, contribute to it, and generally get to connect with other folks.
It’s also worth noting that React Virtualized has its own tag on Stack Overflow, which can be a good resource for finding questions other people have asked about it, or even posting your own.
Oh, and if you’ve put React Virtualized to use on a project, we’d love to know it! Share it with us in the comments with some notes on how you approached it or what you learned from it.
Visuals have played a critical role in the marketing and advertising industry since their inception. For years, marketers have relied on images, videos, and infographics to better sell products and services. The importance of visual media has increased further with the rise of the Internet and consequently, of social media.
Lately, gifographics (animated infographics) have also joined the list of popular visual media formats. If you are a marketer, a designer, or even a consumer, you must have come across them. What you may not know, however, is how to make gifographics, and why you should try to add them to your marketing mix. This practical tutorial should give you answers to both questions.
In this tutorial, we’ll be taking a closer look at how a static infographic can be animated using Adobe Photoshop, so some Photoshop knowledge (at least the basics) is required.
What Is A Gifographic?
Some History
The word gifographic is a combination of two words: GIF and infographic. The term gifographic was popularized by marketing experts (and among them, by Neil Patel) around 2014. Let’s dive a little bit into history.
CompuServe introduced the GIF (Graphics Interchange Format) on June 15, 1987, and the format became a hit almost instantly. Initially, use of the format remained somewhat restricted owing to patent disputes in the early years (related to LZW, the compression algorithm used in GIF files), but later, when most GIF patents expired, and owing to their wide support and portability, GIFs gained so much popularity that “GIF” was even named Word of the Year in 2012. Even today, GIFs remain very popular on the web and on social media (*).
The GIF is a bitmap image format. It supports up to 8 bits per pixel, so a single GIF can use a limited palette of up to 256 different colors (including, optionally, one transparent color). Lempel–Ziv–Welch (LZW) is a lossless data compression technique used to compress GIF images, which in turn reduces file size without affecting visual quality. What’s more interesting, though, is that the format also supports animations and allows a separate palette of up to 256 colors for each animation frame.
Tracing back in history as to when the first infographic was created is much more difficult, but the definition is easy — the word “infographic” comes from “information” and “graphics,” and, as the name implies, an infographic serves the main purpose of presenting information (data, knowledge, etc.) quickly and clearly, in a graphical way.
In his 1983 book The Visual Display of Quantitative Information, Edward Tufte gives a very detailed definition for “graphical displays” which many consider today to be one of the first definitions of what infographics are, and what they do: to condense large amounts of information into a form where it will be more easily absorbed by the reader.
A Note On GIFs Posted On The Web (*)
Animated GIF images posted to Twitter, Imgur, and other services most often end up as H.264 encoded video files (HTML5 video), and are technically not GIFs anymore when viewed online. The reason behind this is pretty obvious — animated GIFs are perhaps the worst possible format to store video, even for very short clips, as unlike actual video files, GIF cannot use any of the modern video compression techniques. (Also, you can check this article: “Improve Animated GIF Performance With HTML5 Video” which explains how with HTML5 video you can reduce the size of GIF content by up to 98% while still retaining the unique qualities of the GIF format.)
On the other hand, it’s worth noting that gifographics most often remain in their original format (as animated GIF files), and are not encoded to video. While this leads to not-so-optimal file sizes (as an example, a single animated GIF in this “How engines work?” popular infographic page is between ~ 500 KB and 5 MB in size), on the plus side, the gifographics remain very easy to share and embed, which is their primary purpose.
Why Use Animated Infographics In Your Digital Marketing Mix?
Infographics are visually compelling media. A well-designed infographic not only can help you present a complex subject in a simple and enticing way, but it can also be a very effective means of increasing your brand awareness as part of your digital marketing campaign.
Remember the popular saying, “A picture is worth a thousand words”? There is a lot of evidence that animated pictures can be even more effective, and motion infographics have recently seen an increase in popularity owing to the element of animation.
From Boring To Beautiful
Gifographics can breathe life into sheets of boring facts and mundane numbers with the help of animated charts and graphics. Motion infographics are also the right means to illustrate complex processes or systems with moving parts, making them more palatable and meaningful. Thus, you can easily turn boring topics into visually engaging treats. For example, we created the gifographic “The Most Important Google Search Algorithm Updates Of 2015” elaborating the changes Google made to its search algorithm in 2015.
Cost-Effective
Gifographics are perhaps the most cost-effective alternative to video content. You don’t need expensive cameras, video editing, sound mixing software, and a shooting crew to create animated infographics. All it takes is a designer who knows how to make animations by using Photoshop or similar graphic design tools.
Works For Just About Anything
You can use a gifographic to illustrate just about anything in bite-sized sequential chunks. From product explainer videos to numbers and stats, you can share anything through a GIF infographic. Animated infographics can also be interactive. For example, you can adjust a variable to see how it affects the data in an animated chart.
As a marketer, you are probably aware that infographics can provide a substantial boost to your SEO. People love visual media. As a result, they are more likely to share a gifographic if they liked it. The more your animated infographics are shared, the higher will be the boost in site traffic. Thus, gifographics can indirectly help improve your SEO and, therefore, your search engine rankings.
How To Create A Gifographic From An Infographic In Photoshop
Now that you know the importance of motion in infographics, let’s get practical and see how you can create your first gifographic in Photoshop. And if you already know how to make infographics in Photoshop, it will be even easier for you to convert your existing static infographic into an animated one.
Step 1: Select (Or Prepare) An Infographic
The first thing you need to do is choose the static infographic that you would like to transform into a gifographic. For learning purposes you can animate any infographic, but I recommend picking an image with elements that are suitable for animation. Explainers, tutorials, and process overviews are easy to convert into motion infographics.
If you are going to start from scratch, make sure you have first finished the static infographic to the last detail before proceeding to the animation stage as this will save you a lot of time and resources — if the original infographic keeps changing you will also need to rework your gifographic.
Next, once you have finalized the infographic, the next step is to decide which parts you are going to animate.
Step 2: Decide What The Animation Story Will Be
You can include some — or all — parts of the infographic in the animation. However, as there are different ways to create animations, you must first decide on the elements you intend to animate, and how. In my opinion, sketching (outlining) various animation case scenarios on paper is the best way to pick your storyline. It will save you a lot of time and confusion down the road.
Start by deciding which “frames” you would like to include in the animation. At this stage, frames will be nothing else but rough sketches made on sheets of paper. The higher the number of frames, the better the quality of your gifographic will be.
You may need to divide the animated infographic into different sections. If so, be sure to use an equal number of frames for all parts; otherwise, the gifographic will look uneven, with each section moving at a different speed.
Step 3: Create The Frames In Photoshop
Open Adobe Photoshop to create the different frames for each section of the gifographic. You will need to cut, rotate, and move the images painstakingly, and keep track of the final change you made to the last frame. You can use the Photoshop ruler to help with this.
You will need to build your animation from Layers in Photoshop. But, in this case, you will be copying all Photoshop layers together and editing each layer individually.
You can check the frames one by one by hiding/showing different layers. Once you have finished creating all the frames, check them for possible errors.
You can also create a short Frame Animation using just the first and the last frame. You need to select both frames by holding the Ctrl/Cmd key (Windows/Mac). Now click on “Tween.” Select the number of frames you want to add in between. Select First frame if you want to add the new frames between the first and the last frames. Selecting “Previous Frame” option will add frames between your current selection and the one before it. Check the “All Layers” option to add all the layers from your selections.
Step 4: Save PNG (Or JPG) Files Into A New Folder
The next step is to export each animation frame individually into PNG or JPG format. (Note: JPG is a lossy format, so PNG is usually the better choice.)
You should save these PNG files in a separate folder for the sake of convenience. I always number the saved images as per their sequence in the animation. It’s easy for me to remember that “Image-1” will be the first image in the sequence followed by “Image-2,” “Image-3,” and so on. Of course, you can save them in a way suitable for you.
Step 5: “Load Files Into Stack”
Next comes loading the saved PNG files to Photoshop.
Go to the Photoshop window and open File > Scripts > Load files into Stack…
A new dialog box will open. Click on the “Browse” button and open the folder where you saved the PNG files. You can select all files at once and click “OK.”
Note: You can check the “Attempt to Automatically Align Source Images” option to avoid alignment issues. However, if your source images are all the same size, this step is not needed. Furthermore, automatic alignment can also cause issues in some cases as Photoshop will move the layers around in an attempt to try to align them. So, use this option based on the specific situation — there is no “one size fits them all” recipe.
It may take a while to load the files, depending on their size and number. While Photoshop is busy loading these files, maybe you can grab a cup of coffee!
Step 6: Set The Frames
Once the loading is complete, go to Window > Layers (or you can press F7) and you will see all the layers in the Layers panel. The number of Layers should match the number of frames loaded into Photoshop.
Once you have verified this, go to Window > Timeline. You will see the Timeline Panel at the bottom (the default display option for this panel). Choose “Create Frame Animation” option from the panel. Your first PNG file will appear on the Timeline.
Now, select “Make Frames from Layers” from the right-side menu (Palette Option) of the Animation Panel.
Note: Sometimes the PNG files get loaded in reverse, making your “Image-1” appear at the end and vice versa. If this happens, select “Reverse Layers” from Animation Panel Menu (Palette Option) to get the desired image sequence.
Step 7: Set The Animation Speed
The default display time for each image is 0.00 seconds. Adjusting this time determines the speed of your animation (or GIF file). If you select all the images, you can set the same display time for all of them. Alternatively, you can set a different display time for each image or frame.
I recommend going with the former option though as using the same animation time is relatively easy. Also, setting up different display times for each frame may lead to a not-so-smooth animation.
You can also set custom display time if you don’t want to choose from among the available choices. Click the “Other” option to set a customized animation speed.
You can also make the animation play in reverse: copy the frames from the Timeline palette and choose the “Reverse Layers” option. You can drag frames with the Ctrl key (on Windows) or the Cmd key (on Mac).
You can set the number of times the animation should loop. The default option is “Once.” However, you can set a custom loop value using the “Other” option. Use the “Forever” option to keep your animation going in a non-stop loop.
To preview your GIF animation, press the Enter key or the “Play” button at the bottom of the Timeline Panel.
Step 8: Ready To Save/Export
If everything goes according to plan, the only thing left is to save (export) your GIF infographic.
To Export the animation as a GIF: Go to File > Export > Save for Web (Legacy)
Select “GIF 128 Dithered” from the “Preset” menu.
Select “256” from the “Colors” menu.
If you will be using the GIF online or want to limit the file size of the animation, change Width and Height fields in the “Image Size” options accordingly.
Select “Forever” from the “Looping Options” menu.
Click the “Preview” button in the lower left corner of the Export window to preview your GIF in a web browser. If you are happy with it, click “Save” and select a destination for your animated GIF file.
Note: There are lots of options that control the quality and file size of GIFs (number of colors, amount of dithering, etc.). Feel free to experiment until you achieve the optimal GIF size and animation quality.
Your animated infographic is ready!
Step 9 (Optional): Optimization
Gifsicle (a free command-line program for creating, editing, and optimizing animated GIFs), and other similar GIF post-processing tools can help reduce the exported GIF file size beyond Photoshop’s abilities.
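For example, a typical invocation might look like this (the file names are placeholders; -O3 is Gifsicle’s most aggressive optimization level, and --colors shrinks the palette):

```shell
# Recompress the exported GIF and reduce its palette to 128 colors
gifsicle -O3 --colors 128 infographic.gif -o infographic-optimized.gif
```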
ImageOptim is also worth mentioning — dragging files to ImageOptim will directly run Gifsicle on them. (Note: ImageOptim is Mac-only but there are quite a few alternative apps available as well.)
Troubleshooting Tips
You are likely to run into trouble at two crucial stages.
Adding New Layers
Open the “Timeline Toolbar” drop-down menu and select the “New Layers Visible in all Frames” option. It will help tune your animation without any hiccups.
Layers Positioning
Sometimes, you may end up putting layers in the wrong frames. To fix this, you can select the same layer in a fresh frame and select “Match Layer Across Frames” option.
Gifographic Examples
Before wrapping this up, I would like to share a few good examples of gifographics. Hopefully, they will inspire you just as they did me.
Google’s Biggest Search Algorithm Updates Of 2016
This one is my personal favorite. Incorporating Google algorithm updates in a gifographic is difficult owing to its complexity. But, with the use of the right animations and some to-the-point text, you can turn a seemingly complicated subject into an engaging piece of content.
Virtual Reality: A Fresh Perspective For Marketers
This one turns a seemingly descriptive topic into a smashing gifographic. The gifographic breaks up the Virtual Reality topic into easy-to-understand numbers, graphs, and short paragraphs with perfect use of animation.
How Google Works
I enjoy reading blog posts by Neil Patel. Just like his posts, this gifographic is also comprehensive. The only difference is that Neil conveys the essential message through accurately placed GIFs instead of short paragraphs. He uses only the colors that make up Google’s logo.
The Author Rank Building Machine
This one lists different tips to help you become an authoritative writer. The animation is simple with a motion backdrop of content creation factory. Everything else is broken down into static graphs, images, and short text paragraphs. But, the simple design works, resulting in a lucid gifographic.
How Car Engines Work
Beautifully illustrated examples of how car engines work (petrol internal combustion engines and hybrid gas/electric engines). By the way, it’s worth noting that in some articles Wikipedia also uses animated GIFs for very similar purposes.
Wrapping Things Up
As you can see, turning your static infographic into an animated one is not very complicated. Armed with Adobe Photoshop and some creative ideas, you can create engaging and entertaining animations, even from scratch.
Of course, your gifographic can have multiple animated parts and you’ll need to work on them individually, which, in turn, will require more planning ahead and more time. (Again, a good example of a rather complex gifographic would be the one shown in “How Car Engines Work?” where different parts of the engine are explained in a series of connected animated images.) But if you plan well, sketch, create, and test, you will succeed and you will be able to make your own cool gifographics.
If you have any questions, ask me in the comments and I’ll be happy to help.
(This article is kindly sponsored by Adobe.) When you design the information architecture, the navigation bars of an application, or the overall layout and visual design of a product, you are focusing on macro design. When you design one part of a page, one form, or one single task and interaction, you are focusing on micro-moment design.
In my experience, designers often spend a lot of time on macro design issues, and sometimes less so on critical micro-moment design issues. That might be a mistake.
Here’s an example of how critical micro-moment design can be.
I read a lot of books. We are talking over a hundred books a year. I don’t even know for sure how many books I read, and because I read so many books, I am a committed library patron. Mainly for reading fiction for fun (and even sometimes for reading non-fiction), I rely on my library to keep my Kindle full of interesting things to read.
Luckily for me, the library system in my county and in my state is pretty good in terms of having books available for my Kindle. Unluckily, this statewide library website and app need serious UX improvements.
I was thrilled when my library announced that instead of using a (poorly designed) website (that did not have a mobile responsive design), the library was rolling out a brand new mobile app, designed specifically to optimize the experience on a mobile phone. “Yay!” I thought. “This will be great!”
Perhaps I spoke too soon.
Let me walk you through the experience of signing into the app. First, I downloaded the app and then went to log in:
I didn’t have my library card with me (I was traveling), and I wasn’t sure what “Sign in with OverDrive” was about, but I figured I could select my library from the list, so I pressed on the down arrow.
“Great,” I thought. Now I can just scroll to get to my library. I know that my library is in Marathon County here in Wisconsin. In fact, I know from using the website that they call my library: “Marathon County, Edgar Branch” or something similar, since I live in a village called Edgar, so I figured that would be what I should look for especially since I could see that the list went from B (Brown County) to F (Fond du Lac Public Library) with no E for Edgar showing. So I proceeded to scroll.
I scrolled for a while, looking for M (in hope of finding Marathon).
Hmmm. I see Lone Rock, and then the next one on the list is McCoy. I know that I am in Marathon County, and that in fact, there are several Marathon County libraries. Yet, we seem to have skipped Marathon in the list.
I keep scrolling.
Uh oh. We got to the end of the list (to the W‘s), but now we seem to be starting with A again. Well, then, perhaps Marathon will now appear if I keep scrolling.
Do you know how many libraries there are in Wisconsin, and how many are on this list? I know, because as I started to document this user experience, I decided to count the number of entries on the list (only a crazy UX professional would take the time to do this, I think).
There are 458 libraries on this list, and the list kept getting to the end of the alphabet and then for some reason starting over. I never did figure out why.
Finally, though, I did get to Marathon!
And then I discovered I was really in trouble since several libraries start with “Marathon County Public Library”. Since the app only shows the first 27 or so characters, I don’t know which one is mine.
You know what I did at this point?
I decided to give up. And right after I decided that, I got this screen (as “icing on the cake” so to speak):
Did you catch the “ID” that I’m supposed to reference if I contact support? Seriously?
This is a classic case of micro-moment design problems.
I can guess that by now some of you are thinking, “Well, that wouldn’t happen to (me, my team, an experienced UX person).” And you might be right. Especially this particular type of micro-moment design fail.
However, I can tell you that I see micro-moment design failures in all kinds of apps, software, digital products, websites, and from all kinds of companies and teams. I’ve seen micro-moment design failures from organizations with and without experienced UX teams, tech-savvy organizations, customer-centric organizations, large established companies and teams, and new start-ups.
Let’s pause for a moment and contrast micro-moment design with macro design.
Let’s say that you are hired to evaluate the user experience of a product. You gather data about the app, the users, the context, and then you start walking through the app. You notice a lot of issues that you want to raise with the team — some large, some small:
There are some inconsistencies from page-to-page/screen-to-screen in the app. You would like to see whether they have laid out pages on a grid and if that can be improved;
You have questions about whether the color scheme meets branding guidelines;
You suspect there are some information architecture issues. The organization of items in menus and the use of icons seems not quite intuitive;
One of the forms that users are supposed to fill out and submit is confusing, and you think people may not be able to complete the form and submit the information because it isn’t clear what the user is supposed to enter.
There are many ways to categorize user experience design factors, issues, and/or problems. Ask any UX professional and you will probably get a similar, but slightly different list. For example, UX people might think about the conceptual model, visual design, information architecture, navigation, content, typography, context of use, and more. Sometimes, though, it might be useful to think about UX factors, issues, and design in terms of just two main categories: macro design and micro-moment design.
In the example above, most of the factors on the list were macro design issues: inconsistencies in layout, color schemes, and information architecture. Some people talk about macro design issues as “high-level design” or “conceptual model design”. These are UX design elements that cross different screens and pages. These are UX design elements that give hints and cues about what the user can do with the app, and where to go next.
Macro design is critical if you want to design a product that people want to use. If the product doesn’t match the user’s mental model, if the product is not “intuitive” — these are often (not always, but often) macro design issues.
Which means, of course, that macro design is very important.
It’s not just micro-moment design problems that cause trouble. Macro design issues can result in massive UX problems, too. But macro design issues are more easily spotted by an experienced UX professional because they can be more obvious, and macro design usually gets time devoted to it relatively early in the design process.
If you want to make sure you don’t have macro design problems then do the following:
Do the UX research upfront that you need to do in order to have a good idea of the users’ mental models. What does the user expect to do with this product? What do they expect things to be called? Where do they expect to find information?
For each task that the user is going to do, make sure you have picked one or two “objects” and made them obvious. For instance, when the user opens an app for looking for apartments to rent the objects should be apartments, and the views of the objects should be what they expect: List, detail, photo, and map. If the user opens an app for paying an insurance bill, then the objects should be policy, bill, clinic visit, while the views should be a list, detail, history, and so on.
The reason you do all the UX-research-related things UXers do (such as personas, scenarios, task analyses, and so on) is so that you can design an effective, intuitive macro design experience.
It’s been my experience, however, that teams can get caught up in designing, evaluating, or fixing macro design problems, and not spend enough time on micro-moment design.
In the example earlier, the last issue is a micro-moment design issue:
One of the forms that users are supposed to fill out and submit is confusing, and you think people may not be able to complete the form and submit the information because it isn’t clear what the user is supposed to enter.
And the library example at the start of the article is also an example of micro-moment design gone awry.
Micro-moment design refers to problems with one very specific page/form/task that someone is trying to accomplish. It’s that “make-or-break” moment that decides not just whether someone wants to use the app, but whether they can even use the app at all, or whether they give up and abandon, or end up committing errors that are difficult to correct. Not being able to choose my library is a micro-moment design flaw. It means I can’t continue. I can’t use the app anymore. It’s a make-or-break moment for the app.
When we are designing a new product, we often focus on the macro design. We focus on the overall layout, information architecture, conceptual model, navigation model, and so on. That’s because we haven’t yet designed any micro-moments.
The danger is that we will forget to pay close attention to micro-moment design.
So, going back to our library example and your possible disbelief that such a micro-moment design fail could happen on your watch: it can. Micro-moment design failures can happen for many reasons.
Here are a few common ones I’ve seen:
A technical change (for example, how many characters can be displayed in a field) is made after a prototype has been reviewed and tested. So the prototype worked well and did not have a UX problem, but the technical change occurred later, thereby causing a UX problem without anyone noticing.
Patterns and standards that worked well in one form or app are re-used in a different context/form/app, and something about the particular field or form in the new context means there is a UX issue.
Features are added later by a different person or team who does not realize the impact that a particular feature, field, or form has on another micro-moment earlier or later in the process.
User testing is not done, or it’s done on only a small part of the app, or it’s done early and not re-done later when changes are made.
If you want to make sure you don’t have micro-moment design problems then do the following:
Decide what are the critical make-or-break moments in the interface.
At each of these moments, decide exactly what it is that the user wants to do.
At each of these moments, decide exactly what it is that the product owner wants users to do.
Figure out exactly what you can do with design to make sure both of the above can be satisfied.
Make that something the highest priority of the interface.
Takeaways
Both macro and micro-moment design are critical to the user experience success of a product. Make sure you have a process for designing both, and that you are giving equal time and resources to both.
Identify the critical make-or-break micro-moments as soon as they are designed, and do user testing on those as early as you can. Re-test when changes are made.
Try talking about micro-moment design and macro design with your team. You may find that this categorization of design issues makes sense to them, perhaps more than whichever categorization scheme you’ve been using.
“People will forget what you said, people will forget what you did, but people will never forget how you made them feel.” (Maya Angelou)
Today, when we think about what product we want to use, we have a lot of options to choose from. But there’s one thing that plays a crucial role in this decision-making process: emotions. Humans are an emotion-driven species. We evaluate products not only based on utility but also on the feelings they evoke. We prefer products that create a sense of excitement over dull products that only solve our problems.
A lot of articles have been written about how to design emotional interfaces. Most of them describe how to create such interfaces using fine microcopy, illustrations, animations, and visual effects. This article is different: here we’ll see how designers can follow different approaches to create genuinely innovative interfaces. The tips listed below will help you design the interfaces of the future.
Emotional Interfaces of the Future
People create bonds with the products they use. The emotions users feel (both positive and negative) stay with them even after they stop using a product. The peak–end rule states that people judge an experience largely based on how they felt at its peak, and the effect occurs regardless of whether the experience was pleasant or unpleasant.
It’s evident that positive emotional stimuli can build better engagement with your users—people will forgive a product’s shortcomings if you reward them with positive emotions.
In order to influence emotions, designers should have a solid understanding of the general factors that impact users, such as:
Human cognition – the way people consume information, learn or make decisions;
Human psychology – the factors that affect emotions, including colors, sounds, music, etc;
Cultural references;
Contextual factors – how the user feels at the time of using a particular product. For example, when a user wants to purchase a ticket in a ticket machine, they want to spend the least possible amount of time on this activity. The UI of this machine needs to reflect users’ desire for speed.
By focusing on those areas, it’s possible to create an experience that can change the way people perceive the world. Let’s see how this works and how we can design beyond the screen.
Designing a Voice Interface That Feels Real
I probably don’t need to prove that voice is the future. Gartner research predicts that by the end of 2018, 30% of our interactions with technology will be through “conversations.” Even today, many of us use Amazon Echo and Apple Siri for everyday activities such as setting an alarm clock or making an appointment. But the majority of voice interaction systems have a natural limitation (narrow AI). When we interact with products like Google Now or Apple Siri, we have a strong sense of communicating with a machine, not a real human being. That happens mainly because the system responds predictably and the responses are too scripted. We can’t have a meaningful dialogue with such a system.
But there are some completely different systems available on the market today. One of them is Xiaoice, an AI-driven system developed by Microsoft. The system is based on an emotional computing framework. When users interact with Xiaoice, they have a strong sense of chatting with a real human being. Some Xiaoice users even say that they consider the system a friend.
The limitation of Xiaoice is that it’s a text-based chat, but it’s clear that a much stronger effect is possible with voice-based interaction, since voice can convey a powerful spectrum of emotions. Remember the film Her, in which the main character, played by Joaquin Phoenix, falls in love with Samantha (a sophisticated OS). The most interesting thing about the film is that Theodore (the main character) never had a visual image of Samantha; he only had her voice.
It’s possible to bake a broad spectrum of emotions into voice and tone. And our daily routine tasks suddenly become less dull and more entertaining when we use voice and visual input together.
Voice interfaces for Brain.ai
Evolution of AR Experience—From Mobile Screen to Glass
Augmented Reality (AR) is defined as a digital overlay on top of the real world. The beauty of AR is that it provides an extra layer of information over the existing objects in our environment. It transforms the objects around us into interactive digital experiences—the environment becomes more intelligent. The fact that users have an illusion of ‘tangible object’ on the tips of their fingers creates a deeper connection between a user and a product/content.
Early work in AR in the ’90s was focused mainly on the technology. Even when designers used content in their products, the goal was to demonstrate what AR technology is capable of. But the situation has changed. AR is no longer technology for the sake of technology. The success of Pokémon Go has proven that AR can create a whole new level of engagement and that people are happy to adopt it. That leads designers to focus on content and the human experience.
AR can be used not just for entertainment; it can be a powerful tool for problem-solving. Here are just a few things it can help you with:
Improve Online Shopping Experience
According to Retail Perceptions, a report that analyzes the influence of AR on the retail sector, even today 61% of shoppers prefer to shop at stores that offer AR. The survey says that the most popular items to shop for with augmented reality are furniture, clothing, groceries, and shoes. People love AR because it allows them to see product properties in detail and makes the shopping experience fun.
Users can decide whether they like an item or not before buying. This is especially important for clothing or furniture. As a result, AR can reduce product return rates, saving money on returns.
Create a New Level of Experience
AR helps us see an enhanced view of the world. For example, it might be a new level of in-flight experience.
AR in flight experience for Airbus A380
Or it can provide rich contextual hints for your current location. The technology known as SLAM (simultaneous localization and mapping) can be used for that. SLAM allows real-time mapping of an environment. This technology is used in Google’s self-driving cars, but it can also be applied to an AR experience to place multimedia content in the environment.
Provide Additional Information in Context
Last but not least, you can completely reimagine existing concepts and use a new dimension (AR) to provide additional information.
The concept of interactive walls – a digital overlay on top of the real world
Today it is becoming much easier to build an AR experience. Frameworks like ARKit and ARCore have made sophisticated computer vision algorithms available for anyone to use.
When it comes to technology, the vast majority of AR experiences are phone-based. The primary reason phone-based AR has become so powerful is obvious: a huge share of the global population owns a smartphone. At the same time, AR on mobile devices has two natural limitations:
Field of view (Augmented Reality is still restricted to a physical space of your mobile screen)
Input (we still have to use touch and gestures on the screen to interact with the app)
It’s possible to create a much more engaging experience using glass technology. Just imagine that you won’t need to take your phone out of your pocket to have an AR experience.
When you hear the words “AR glass,” you most probably think of Google Glass. It’s been nearly five years since the release of Google Glass, a promising concept for a standalone AR headset. Unfortunately, the first version of the product never reached retail stores. The fact that Google Glass failed on the market led to countless discussions about whether or not it’s a stillborn concept.
Many people believe that Glass is a dumb idea. As for me, I strongly believe that everything visionary looks stupid at first.
The way technology changes doesn’t look like a linear process; it looks more like waves. Each new wave can completely change the way we think about technology.
The key to innovation is building something that nobody has built before. We need to experiment to find the winning formula, the one that will help us create a product people love. I remember when people said touchscreen phones were stupid because of Palm’s and Microsoft’s lousy implementations. Then along came Apple, and now most of us use touchscreens. Once a product is done right, the technology will make people change their point of view.
One of the promising concepts for AR glasses is Rokid Glass. Rokid is envisioning its smart glasses as a kind of next-generation Google Glass. It’s a standalone headset (meaning it won’t require you to plug into a smartphone or a desktop); it will run on batteries and incorporate an internal processor for handling computing on its own. Rokid is just one part of a broader movement towards consumer AR glasses that brings concepts we saw in science-fiction movies to life.
However, by the time AR glass technology is widely accepted, we might face a few problems of augmented hyper-reality. Augmented reality may recontextualize the functions of consumerism and change the way in which we operate within it. The fear is that it might change it for the worse, making the environment overwhelming for users. As a result, a technology that was intended to bring only positive emotions might switch to the entirely negative end of the spectrum.
Moving From Augmented Reality Towards Virtual Reality to Create an Immersive Experience
AR has a natural limitation: as users, we have a clear line between us and the content, and this line separates one world (AR) from another (the real world). The line creates a sense that the AR world is not real.
You can probably guess the answer to this AR design limitation: VR. Thanks to VR, we can have a truly immersive experience. This experience removes the barrier between worlds; the real world and the VR world can blur together.
Recent technologies like Oculus Quest offer an all-in-one device: location tracking, controllers, and processing in a standalone unit, with no separate PC required. This makes a VR device a self-contained unit, not just an extra feature for your mobile phone or desktop computer.
VR can work great for entertainment: just imagine how you could experience movies in 360-degree VR, or use natural gestural interactions in games. And it can work in the office, too: imagine video calls in your favorite messenger evolving into VR calls where you’ll be able to actually interact with another person. This will help establish a much deeper emotional connection between people.
VR will invite designers to think about important questions such as:
Rethinking the process for creating digital products. It’s clear that the modern state of VR is way too skeuomorphic. We are still in the process of defining the way we want our users to interact in a virtual space, but that’s an excellent task for designers.
The ethics of removing the line between content and UI (“Where should the line between content and UI start and end?”).
The rise of VR addiction. Today we have the problem of smartphone zombies: people who are glued to their smartphones and don’t see the world around them. With VR, we might face even more addictive behavior. People will hunt for new levels of experience and the powerful emotions VR technology delivers. As a result, they might go too deep into the VR experience and skip reality.
Conclusion
When we think about the modern state of product design, it becomes evident that we are only at the tip of the iceberg. We’re witnessing a fundamental shift in Human-Computer Interaction (HCI)—rethinking the whole concept of digital experience.
In the next decade, designers will break the glass (the era of mobile devices as we know them today will be over) and move on to the interfaces of the future.
There is a sentiment that leaving math calculations in your CSS is a good idea, and I agree with it. This is for math that you could calculate at authoring time but specifically chose not to. For instance, if you needed a 7-column float-based grid (don’t ask), it’s cleaner and more intuitive:
.col {
/* groan */
width: 14.2857142857%;
/* oh, I get it */
width: calc(100% / 7);
}
You could probably prove that the calc() takes the computer 0.0000001% longer, so explicitly defining the width is technically faster for performance reasons, but that is about the equivalent of not using punctuation in sentences because it saves HTML weight.
That math can get a little more complicated as you continue. For example, as in our use cases for calc() article, what about columns in that 7-column grid that span?
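A sketch of what that spanning math might look like (the class names here are my own, not from the linked article):

```css
.col {
  width: calc(100% / 7);
}

/* a column that spans 3 of the 7 tracks */
.col-span-3 {
  width: calc(100% / 7 * 3);
}
```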
The readability of the math can be enhanced by comments if it gets too complicated. Say you are trying to account for a margin-based gutter with padding inside of an element:
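A sketch of what such gutter-and-padding math might look like (the 10px gutter and 5px padding values are my own, purely illustrative):

```css
.col {
  /* 7 columns, minus a 10px margin gutter,
     minus 5px of padding on each side */
  width: calc((100% / 7) - 10px - (5px * 2));
  margin-right: 10px;
  padding: 0 5px;
}
```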
Again, I’d say that’s pretty readable, but it’s also a good amount of repetition. This might call for using variables. We’ll do it with CSS custom properties for fun. You have to pick what is worthy of a variable and what isn’t. You might need fewer comments as the code becomes somewhat self-documenting:
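A sketch of that variable-based version using CSS custom properties (the variable names and values are my own):

```css
:root {
  --columns: 7;
  --gutter: 10px;
  --pad: 5px;
}

.col {
  /* each column: an equal share of the width, minus the gutter,
     minus padding on both sides */
  width: calc((100% / var(--columns)) - var(--gutter) - (var(--pad) * 2));
  margin-right: var(--gutter);
  padding: 0 var(--pad);
}
```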
Every single number has been given a variable in there. Too far? Maybe. It certainly makes those width declarations pretty hard to wrap your head around quickly. Ana Tudor does some serious stuff with calc(), as proof that everyone’s comfort level with this stuff is different.
One of the things that made me think of all this is a recent article from James Nash — “Hardcore CSS calc()” — where he builds this:
While the solution took a heavily math-y road to get there, it ends up being only a sort of medium-level calculation on the ol’ complexity meter. And note that not everything gets a variable; only the most re-used bits do:
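This is not James Nash’s actual snippet, but a rough sketch of the pattern, where only the repeated gutter value gets a variable (the names and values are my own):

```css
:root {
  --gutter: 1rem;
}

.cards {
  /* pull the outer edges back by half a gutter */
  margin: calc(var(--gutter) / 2 * -1);
}

.card {
  margin: calc(var(--gutter) / 2);
  /* three cards per row, each minus its share of gutter */
  width: calc(100% / 3 - var(--gutter));
}
```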
Flickr announced not long ago that they are limiting free accounts to 1,000 photos. I don’t particularly mind that (because it seems like sound business sense), although it is a bit sad that a ton of photos will be nuked from the internet. I imagine the Internet Archive will swoop in and get most of it. And oh hey, the Twitter account @FlickrJubilee is showcasing Flickr users that could really use a gifted pro account so their amazing photos are not lost, if you’re feeling generous and want to contribute.
This change doesn’t affect pro accounts. I’ve been pro forever on Flickr, so my photos were never at risk, but the big change has me thinking it’s about time to spin down Flickr for myself. I’ve been keeping all my photos on iCloud/Photos for years now anyway, so it seems kind of redundant to keep Flickr around.
I went into the Flickr settings and exported all my photos, got a bunch of gigabytes of exported photos, and loaded them into Photos. Sadly, the exported photos have zero metadata, so there will forever be this obnoxious chunk of thousands upon thousands of photos in my Photos collection that all look like they were taken on the same day and with no location.
Anyway, that was way too long of an intro to say: I found a bunch of old website screenshots! Not a ton, but it looks like I used Flickr to store a handful of web designs I found interesting in some way a number of years back. What’s interesting today is how dated they look when they were created not that long ago. Shows how fast things change.
Here they are.
It’s not terribly surprising to me to hear people push back on the same-ness of web design these days, and to blame things like frameworks, component-driven architecture, and design systems for it. It wasn’t long ago that we seemed to be trying harder to be fancy and unique with our designs, with things like shadow treatments, reflective images, and skeuomorphic enhancements. I don’t mean to make sweeping generalizations here; I’m merely pointing out a difference between what we considered boring and fancy work back then compared to now, of course.
One of the web platform features highlighted at the recent Chrome Dev Summit was Feature Policy, which aims to “allow site authors to selectively enable and disable use of various browser features and APIs.” In this article, I’ll take a look at what that means for web developers, with some practical examples.
In his introductory article on the Google Developers site, Eric Bidelman describes Feature Policy as the following:
“The feature policies themselves are little opt-in agreements between developer and browser that can help foster our goals of building (and maintaining) high-quality web apps.”
The specification has been developed at Google as part of the Web Platform Incubator Group activity. The aim of Feature Policy is for us, as web developers, to be able to state our usage of a web platform feature explicitly to the browser. By doing so, we make an agreement about our use, or non-use, of this particular feature. Based on this, the browser can act to block certain features, or report back to us that a feature it did not expect to see is being used.
Examples might include:
I am embedding an iframe and I do not want the embedded site to be able to access the camera of my visitor;
I want to catch situations where unoptimized images are deployed to my site via the CMS;
There are many developers working on my project, and I would like to know if they use outdated APIs such as document.write.
All of these things can be tracked, blocked or reported on as part of Feature Policy.
How To Use Feature Policy
In order to use Feature Policy, the browser needs to know two things: which feature you are creating a policy for, and how you want that feature to be handled.
Feature-Policy: <directive> <allowlist>
The <directive> is the name of the feature that you are setting the policy on.
The <allowlist> details how the feature can be used, if at all, and takes one or more of the following values.
*
The most liberal policy, stating that the feature will be allowed in this document and in any iframes, whether from this domain or elsewhere. It may only be used as a single value, as it makes no sense to enable everything and also pass in a list of domains.
self
The feature will be available in the document and any iframes, however, the iframes must have the same origin.
src
Only applicable when using an iframe allow attribute. This allows a feature as long as the document loaded into it comes from the same origin as the URL in the iframe’s src attribute.
none
Disables the feature for the document and any nested iframes. May only be used as a single value.
<origin(s)>
The feature is allowed for specific origins; this means that you can specify a list of domains where the feature is allowed. The list of domains is space-separated.
There are two methods by which you can enable feature policies on your site: You can send an HTTP Header, or use the allow attribute on an iframe.
HTTP Header
Sending an HTTP Header means that you can enable a feature policy for the page or entire site setting that header, and also anything embedded in the site. Headers can be set for your entire site at the web server or can be sent from your application.
For example, if I wanted to prevent the use of the geolocation API and I was using the NGINX web server, I could edit the configuration files for my site in NGINX to add the following header, which would prevent any document in my site, and any iframe embedded in it, from using the geolocation API.
add_header Feature-Policy "geolocation 'none';";
Multiple policies can be set in a single header. To prevent geolocation and vibrate but allow unsized-media from the domain example.com, I could set the following:
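Following the NGINX add_header syntax used above, that combined policy might look something like this (example.com stands in for the allowed domain):

```nginx
add_header Feature-Policy "geolocation 'none'; vibrate 'none'; unsized-media https://example.com;";
```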
If we are primarily concerned with what happens with the content in an iframe, we can use Feature Policy on the iframe itself; this benefits from slightly better browser support at the time of writing with Chrome and Safari supporting this use.
If I am embedding a site and do not want that site to use geolocation, camera or microphone APIs then my iframe would look like the following example:
<iframe allow="geolocation 'none'; camera 'none'; microphone 'none'">
You may already be familiar with the individual attributes which control the content of iframes: allowfullscreen, allowpaymentrequest, and allowusermedia. These can be replaced by the Feature Policy allow attribute, and for browser compatibility reasons you can use both on an iframe. If you do use both attributes, then the most restrictive one will apply. The Google article shows an example of an iframe that uses allowfullscreen (meaning that the iframe is allowed to enter fullscreen) but then a conflicting Feature Policy of fullscreen 'none'. These conflict, so the most restrictive policy wins and this iframe would not be allowed to enter fullscreen.
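A sketch of that conflicting combination (the src URL is illustrative):

```html
<!-- allowfullscreen permits fullscreen, but the Feature Policy
     of fullscreen 'none' is more restrictive, so it wins -->
<iframe src="https://example.com/embed"
        allowfullscreen
        allow="fullscreen 'none'"></iframe>
```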
The iframe element also has a sandbox attribute designed to manage support for many features. This feature was also added to Content Security Policy with a sandbox value which disables all sandbox features, which can then be opted back into selectively. There is some crossover between sandbox features and those controlled by Feature Policy, and Feature Policy does not seek to duplicate those values already covered by sandbox. It does, however, address some of the limitations of sandbox by taking a more fine-grained approach to managing these policies, rather than one of turning everything off globally as one large policy set.
Feature Policy And Reporting API
Feature Policy violations can be reported via the Reporting API, which means that you could develop a comprehensive set of policies tracking feature usage across your site. This would be completely transparent to your users but give you a huge amount of information about how features were being used.
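As a sketch of how reporting could be wired up, a reporting endpoint is registered with the Report-To header from the Reporting API, and reports are then delivered there as JSON. The endpoint URL here is hypothetical, and the exact report-only mechanism for Feature Policy was still being specified at the time of writing:

```http
Report-To: { "group": "default",
             "max_age": 10886400,
             "endpoints": [{ "url": "https://example.com/reports" }] }
```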
Browser Support For Feature Policy
Currently, browser support for Feature Policy is limited to Chrome. However, in many cases where you are using Feature Policy during development and when previewing sites, this is not necessarily a problem.
Many of the use cases I will outline below are usable right now, without causing any impact to site visitors who are using browsers without support.
When To Use Feature Policy
I really like the idea of being able to use Feature Policy to help back up decisions made when developing the site. Decisions which may well be written up in documents such as a performance budget, or as part of a GDPR audit, but which then become something we have to remember to preserve through the life of the site. This is not always easy when multiple people work on a site; people who perhaps weren’t involved during that initial decision making, or may simply be unaware of the requirements. We think a lot about third parties managing to somehow impact our site, however, sometimes our sites need protecting from ourselves!
Keeping An Eye On Third Parties
You could prevent a third-party site from accessing the camera or microphone using a feature policy on the iframe with the allow attribute. If the reason for embedding that site has nothing to do with those features, then disabling them means that the embedded site can never start asking for access to them. This could then be linked with your processes for ensuring GDPR compliance. As you audit the privacy impact of your site, you can build in processes for locking down the access of third parties by way of feature policy — giving you and your visitors additional security and peace of mind.
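For example (the embedded URL below is a placeholder), the allow attribute on the iframe can lock down both features at once:

```html
<!-- This embedded document can never prompt for camera or microphone access -->
<iframe src="https://example.com/booking-widget"
        allow="camera 'none'; microphone 'none'"></iframe>
```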
This does rely on browser support for Feature Policy to block the usage. However, you could use Feature Policy in reporting mode to inform you if the third party changed what it was doing and began using these APIs. This would give you a very quick heads-up, essentially as soon as the first person using Chrome hits the site.
Selectively Enabling Features
We also might want to selectively enable some features which are normally blocked. Perhaps we wish to allow an iframe loading content from another site to use the geolocation feature in the browser. Chrome by default blocks this, but if you are loading content from a trusted site you could enable the cross-origin request using Feature Policy. This means that you can safely turn on features when loading content from another domain that is under your control.
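A minimal sketch of this (the trusted URL is a placeholder) again uses the allow attribute; my understanding is that a feature listed with no explicit allowlist is enabled for the origin of the frame's src:

```html
<!-- Grant geolocation to the document loaded into this frame -->
<iframe src="https://trusted.example.com/store-locator"
        allow="geolocation"></iframe>
```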
Catching Use Of Outdated APIs And Poorly Performing Features
Feature Policy can be run in a report-only mode. It can then track usage of certain features and let you know when they are found on the site. This can be useful in many scenarios. If you have a very large site with a lot of legacy code, enabling Feature Policy would help you to track down the places that need attention. If you work with a large team (especially if developers often pull in some third party libraries of code), Feature Policy can catch things that you would rather not see on the site.
Dealing With Poorly Optimized Images
While most of the articles I’ve seen about Feature Policy concentrate on the security and privacy aspects, the features around image optimization really appealed to me, as someone who deals with a lot of content generated by technical and non-technical users. Feature Policy can be used to help protect the user experience as well as the performance of your site by preventing overly large or unoptimized images from being downloaded by visitors.
In an ideal world, your CMS would deal with image management, ensuring that images are sensibly resized and optimized for the web and the context they will be displayed in. Real life is rarely that ideal world, however, and so the job of resizing and optimizing images is sometimes left to content editors, who must ensure they do not upload huge images to the web. This is a particular issue if you are using a static site generator with no content management layer on top of it. Even as a technical person, it is very easy to forget to resize that giant screenshot or camera image you popped into a folder as a placeholder.
Currently behind a flag in Chrome are features which can help. The idea behind these features is to highlight the problematic images so that they can be fixed — without completely breaking the site.
The unsized-media feature policy looks for images or video which do not have a size set in the HTML or CSS. When an unsized media element loads, it can cause the content on the page to reflow.
In order to prevent any unsized media being added to the site, set the following header. Media will then be displayed with a default size of 300×150 pixels. You will see your site loading with small media, and realize you have a problem to fix.
Feature-Policy: unsized-media 'none'
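The fix is simply to give each media element dimensions, either with width and height attributes or in CSS (the image path and dimensions below are placeholders):

```html
<!-- Sized media: the browser reserves space, so no reflow and no violation -->
<img src="/images/team-photo.jpg" width="800" height="600"
     alt="The team at the company retreat">
```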
See a demo (needs Chrome Canary with Experimental Web Platform Features on).
The oversized-images feature policy checks that images are not much larger than their container. If they are, a placeholder will be shown instead. This policy is incredibly useful for checking that you are not sending huge desktop images to your mobile users.
Feature-Policy: oversized-images 'none'
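One way to satisfy this policy is to serve appropriately sized candidates with srcset and sizes, so the browser can pick an image close to the rendered size (the filenames below are placeholders):

```html
<!-- The browser chooses the candidate closest to the rendered width -->
<img src="/images/hero-800.jpg"
     srcset="/images/hero-400.jpg 400w,
             /images/hero-800.jpg 800w,
             /images/hero-1600.jpg 1600w"
     sizes="(max-width: 600px) 100vw, 800px"
     alt="Hero image">
```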
See a demo (needs Chrome Canary with Experimental Web Platform Features on).
The unoptimized-images feature policy checks that the data size of an image in bytes is no more than 0.5x bigger than its rendered area in pixels. If this policy is enabled and an image violates it, a placeholder will be shown instead of the image.
Feature-Policy: unoptimized-images 'none'
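If you are setting several of these directives at once, the header value is simply a semicolon-separated list of feature names and their allowlists. As a minimal server-side sketch (the helper name is my own, not part of any library), you could compose the header like this:

```python
def build_feature_policy(directives):
    """Compose a Feature-Policy header value from a mapping of
    feature names to their allowlists (e.g. ["'none'"] or ["'self'"])."""
    return "; ".join(
        f"{feature} {' '.join(allowlist)}"
        for feature, allowlist in directives.items()
    )

# Combine the three image policies into a single header value.
header = build_feature_policy({
    "unsized-media": ["'none'"],
    "oversized-images": ["'none'"],
    "unoptimized-images": ["'none'"],
})
print(header)
# unsized-media 'none'; oversized-images 'none'; unoptimized-images 'none'
```

You would then attach this value as the Feature-Policy header in whatever web framework or server configuration you are using.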
See a demo (needs Chrome Canary with Experimental Web Platform Features on).
Testing And Reporting On Feature Policy
Chrome DevTools will display a message to inform you that certain features have been blocked or enabled by a Feature Policy. If you have enabled Feature Policy on your site, you can check that this is working.
Support for Feature Policy has also been added to the Security Headers site, which means you can check for these along with headers such as Content Security Policy on your site — or other sites on the web.
There is a Chrome DevTools Extension which lets you toggle on and off different Feature Policies (also a great way to check your pages without needing to configure any headers).
If you would like to integrate your Feature Policies with the Reporting API, then there is further information on how to do this here.
Further Reading And Resources
I have found a number of resources, many of which I used when researching this article. These should give you all that you need to begin implementing Feature Policy in your own applications. If you are already using Content Security Policy, this seems an additional logical step towards controlling the way your site works with the browser to help ensure the security and privacy of people using your site. You have the added bonus of being able to use Feature Policy to help you keep on top of performance-damaging elements being added to your site over time.