Basically, experts say there are five rules of a killer PowerPoint presentation: get personal, ask questions, use charts and graphs, add diagrams, and don't forget professional images. But these recommendations are pretty general and speak more to the essence of the message than to writing skills.
PowerPoint is the most common tool for creating slide shows, and it has a reputation for being relatively intuitive, informative, and engaging. So how do you improve your skills and create slides that stand out and deliver a single message? The guide we've created has ten steps:
Always consider the audience.
Learn visual hierarchy rules.
Work with animations.
Think of the Big Idea.
Follow the 5/5/5 rule.
Rely on grids.
Fade to blank if you’re talking.
Improve the images.
Add videos.
Make the title slide stand out.
Let's see what tips hide behind each of these steps.
Step 1
Both your words and the information presented on your slides should be tailored to the people you are delivering them to. Always take your audience into account. Be intentional and specific about everything they are going to hear and see, and think about what will keep them engaged. A speaker who uses the same presentation for different audiences delivers the message to only one of them, while everyone else gets lost in unfamiliar terms and irrelevant information. It is better to spend time rearranging slides than to lose your chance to make an impression.
Step 2
Visual hierarchy is where the development of any PowerPoint project should start. It is about arranging elements so that their relative importance is clear at a glance. The theory is helpful for defining a slide structure that catches the eye. Hierarchy is normally achieved through contrasting colors, which make the most essential elements stand out, and through the sizes of text and images.
Step 3
Did you know that PowerPoint has a great variety of built-in animations? Every one of them is free to use and can make slides look better, delivering information gradually and highlighting the most vital points. However, resist using too many effects in the same presentation, as multiple effects may distract the audience. Choose two or three appropriate effects and apply them across the slides where needed.
Step 4
Why do we call it big? Because it must contain one overarching message. This sort of idea is unbelievably important because it captures your unique point of view and puts something at stake. Write the idea down, then think through the components that will support it: words, images, and audio. If an element doesn't bolster the core takeaway, remove it.
Step 5
Along with getting the details right, it is essential not to overwhelm the audience with the amount of new information. That is why it is recommended to keep the text short and to the point. This is what the 5/5/5 rule is about: no more than five words per line of text, no more than five lines of text per slide, and no more than five text-heavy slides in a row.
Step 6
What are grids? They are hidden structures of horizontal and vertical lines that help you create well-balanced slides. A grid typically consists of margins (the slide borders, kept free of graphics and words), columns (vertical sections that hold the actual content), and gutters (blank spaces that separate the columns). You can set up your own grids to suit your content best.
Step 7
You, not the slides, are the main point of the entire presentation. Fade to blank to draw the audience's attention back to you. Think of it as a close-up in a good movie, where the director makes the viewer pay attention to what is essential at this very moment.
Step 8
A good presentation always relies more on images than on words. PowerPoint has photo-editing capabilities that can enhance the quality of photos when needed. Go to 'Picture Tools – Format' and you get options to adjust colors, crop, change focus and shape, and remove the background.
Step 9
Yes, inserting videos into your PowerPoint presentation is vital. Some speakers import a video of themselves talking while a demo plays in their slides; the video appears in a corner of the screen as the slides run. You can also press 'Record' to capture the screen while you demonstrate something important, and the program will record you talking and showing at the same time.
Step 10
Believe it or not, you need a title slide that stands out, whether or not you use it as a marketing asset for social media, email, and other channels. Spend time on the design and create a truly visually engaging title page that draws people in from the very start of your presentation.
Writing and designing great slides is the same as creating any amazing content. You need to make your PowerPoint presentation easy to digest and appealing so that people stay engaged while sitting through it. Limit the use of prose, keep the illustrations and the message simple, don't forget about audio and video, and have a clear objective to create something really worthy. If you are too busy to make your own presentation and you need assistance from expert writers, contact an academic writing service where you can buy professional PowerPoint presentations online.
Rik Schennink documents a system for writing CSS selectors that style a page once it has scrolled to a certain point. If you're like me, you're already on the lookout for document.addEventListener('scroll' ... and terrified about performance. Rik gets to that right away by both debouncing the function and marking the event listener as passive.
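Here's a rough sketch of that setup. The data-scroll attribute name comes from the post; the requestAnimationFrame-based debounce is an assumption about the details:

// Store the scroll position in a data attribute on <html>,
// debounced to at most one update per animation frame
const debounce = (fn) => {
  let frame;
  return (...params) => {
    if (frame) cancelAnimationFrame(frame);
    frame = requestAnimationFrame(() => fn(...params));
  };
};

const storeScroll = () => {
  document.documentElement.dataset.scroll = window.scrollY;
};

// `passive: true` promises the listener won't call preventDefault(),
// so the browser doesn't have to wait on it before scrolling
document.addEventListener('scroll', debounce(storeScroll), { passive: true });
storeScroll();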
The end result is a data-scroll attribute on the <html> element that can be used in the CSS. Meaning that if you're scrolled 640px down the page, you have <html data-scroll="640"> and could write a selector like:
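Something like this, say, where the header treatment is purely an illustration:

/* Any scroll position other than the very top */
html:not([data-scroll='0']) header {
  position: fixed;
  top: 0;
  box-shadow: 0 0 1rem rgba(0, 0, 0, 0.25);
}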
Unfortunately, we don't have greater-than (>) or less-than (<) selectors in CSS for things like numbered attributes, so the styling potential is fairly limited here. You might ultimately need to update the JavaScript function so that it applies other classes or data attributes based on your own math. But you'll already be set up for good performance.
“Apply styles when the user has scrolled away from the top” is a legit use case. It makes me think of a once function (like we have in jQuery) where any scroll event would only be triggered once and then not again. They scrolled! So, by definition, they aren’t at the top anymore! But that doesn’t deal with when they scroll back to the top.
I find it generally more useful to use IntersectionObserver for styling things based on scroll position. With it, you can do things like, “has this element been scrolled into view or beyond,” which is generically useful and can be used for scrolled-away-from-top stuff too.
Here’s an example that adds or removes a class if a user has scrolled past a hidden pixel positioned at 500px down the page.
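The demo itself isn't embedded here, but a minimal sketch of that idea might look like the following, where the 500px offset comes from the description and the .sentinel class name and markup are assumptions:

// Markup/CSS assumption: a hidden 1px element, e.g.
// .sentinel { position: absolute; top: 500px; }
const sentinel = document.querySelector('.sentinel');

const observer = new IntersectionObserver(([entry]) => {
  // If the sentinel's box is above the viewport, we've scrolled past 500px
  const scrolledPast = entry.boundingClientRect.top < 0;
  document.body.classList.toggle('scrolled-past', scrolledPast);
});

observer.observe(sentinel);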
It sometimes takes a quick 35 seconds for a concept to really sink in. Mikael Ainalem delivers that here, in the case that you haven’t quite grokked the concepts behind path-based CSS properties like clip-path and shape-outside.
Here are two of my favorites. The first demonstrates animating text into view using a polygon as a clip.
The second shows how the editor can help morph one shape into another.
Automatically detect and diagnose JavaScript errors impacting your users with Bugsnag. Get comprehensive diagnostic reports, know immediately which errors are worth fixing, and debug in a fraction of the time.
Bugsnag detects every single error and prioritizes errors with the greatest impact on your users. Get support for 50+ platforms and integrate with the development and productivity tools your team already uses.
Bugsnag is used by the world’s top engineering teams including Airbnb, Slack, Pinterest, Lyft, Square, Yelp, Shopify, Docker, and Cisco. Start your free trial today.
(This article is sponsored by Adobe.) So it’s time to test the latest version of your app with users. You schedule your first user testing session. The participant enters the room; your lab partner puts velcro on the participant’s finger and fits a headband and head cap on before she sits down at a computer to start the user test session. What’s all this for? It’s biometrics and neuro-measurements.
In a “traditional” user test, you put a participant in front of your app, product, or software and give them tasks to do, ask them to “think aloud”, and observe and record what they say and what they do. You may ask them some questions before and after the session, too. I’ve done thousands of these sessions, and chances are that if you are a user researcher, you have too.
There’s nothing really wrong with user testing this way, except that it relies on the participant telling you (either during or after the session) why they did what they did and how they feel about the product or app. You can see that they clicked a particular button or touched a link on the mobile app, but when they explain why, you are only getting the conscious reason.
What if you could get their unconscious reactions? What if you could take a look inside your users’ brains and see what it is they aren’t saying, i.e. the things they themselves may not realize about their reactions to your product?
We know that most mental processing — including decision-making and emotional reactions — occurs unconsciously. So if people tell you how they feel and why they did something, it is possible that they believe what they are saying is the truth, but it’s also possible that they don’t know how they feel or why they did or did not take an action.
People filter their feelings, decisions, and reasons consciously, and by the time those filters are applied you aren’t necessarily getting real data. Add to that the fact that users aren’t always truthful during user tests. They may not want to offend you by telling you they think your product is hard to use or boring.
So that’s why user researchers are starting to use some other tools to get reactions and data directly from the body without the filtering of conscious thought. Hence, biometrics and neuro-measurements.
Some of these new tools are easy and inexpensive to use. Others may take more investment of your time and budget. Or you may want to bring in an outside firm that specializes in these tools. (Some suggestions for outside vendors are at the end of the article.)
Galvanic Skin Response (GSR)
GSR is also called “electrodermal activity,” or EDA. A typical GSR measurement device is a relatively small, unobtrusive sensor that is connected to the skin of your finger or hand.
Sweat glands on the hands are very sensitive to changes in your emotional state. If you become emotionally aroused — either positively or negatively — then you will release more sweat in your hands. Sometimes, these are very small changes that you may not notice. This is what a GSR monitor is measuring.
The GSR monitor can’t tell if you are happy, sad, scared, and so on, but it can tell if you are becoming more or less emotional. And since the amount of sweat you release is not under conscious control, a GSR monitor can measure what you may not be consciously aware of.
GSR monitoring has been around for over a hundred years. The monitors are relatively inexpensive and easy to learn how to use. The price for a GSR monitor ranges from about $150 to $600, depending on the brand and model you get. If you want to buy your own, check out Carolina Supply. iMotions also has a great downloadable guide to GSR monitors that you can get for free.
Respiration
It’s also relatively easy to measure respiration. When people are emotionally aroused, they breathe faster. This can be detected in several ways, the easiest being to place a cloth band around the chest and/or stomach and measure the expansion of the chest or stomach as the person breathes.
If/when they are using your product and they start breathing faster, you can deduce that something has (either positively or negatively) affected them emotionally.
Heart Rate
You can also use the band around the chest or even a simpler measurement on a finger to measure heart rate/pulse. When you are emotionally aroused, your heart beats faster and your pulse increases.
How would you use GSR, respiration, or heart rate data in a user test or study? Let’s say you are testing an app for getting an insurance quote. You ask the user what they think of the insurance quote app, and they answer:
“It was OK, it wasn’t too hard to use.”
But looking at their GSR, respiration, and/or heart rate might tell you that they were stressed. The data will also show you when and where in the process they had the most stress.
Like GSR monitors, heart-rate and respiration monitors are relatively inexpensive (under $100). What you may really want, however, is a total package that includes a universal monitor you can plug more than one measurement into.
For example, you can use GSR, heart rate, respiration and even EEG (discussed below), plus software that lets you monitor the data and combine it with actions your users are taking at specific moments during your user study. These packages will cost you a lot, however. A whole system may run as much as $7,000.
To get started, you may want to bring in a vendor who has the equipment to get your feet wet before you decide to buy these tools for your lab.
Eye Tracking
I am probably unusual in my criticisms of eye tracking. A lot of people like it, but I think it has some problems. I’ll explain why.
Eye tracking involves having people look at a special monitor while wearing eye-tracking headsets/glasses. The eye tracker measures what you look at and how long you look at it. If you were doing user testing on a web page, then you could see (either for an individual or through aggregated data) where people looked most, how long they looked at it, and what people did not look at, and so on.
Eye tracking works just fine in measuring what it is measuring. But here’s my criticism: Eye tracking only measures where people are looking with their central vision. It doesn’t measure peripheral vision.
Recent research on peripheral vision shows it is more important for information processing than once thought. For example, images of danger and emotion are processed faster in peripheral vision than in central vision. We also know now that people use peripheral vision to decide whether they are in the right place, or, in the case of software and website design, whether they are on the right page or screen. It’s possible for people to “see” something in peripheral vision without being consciously aware that they have, and what they see can influence the actions they take.
Since eye tracking doesn’t track any peripheral vision data, I am not a big fan of it. Monitors with eye tracking built in, plus the software to analyze and report on the data can cost around $7,000 to $10,000.
Facial Coding
Cameras can capture someone’s face as they use a product or watch a video. Algorithms can then analyze the facial expressions and tell you whether the person is confused, happy, scared, and so on.
Facial coding is also available as an “add-on” feature to eye tracking, so you should assume similar pricing ($7,000 to $10,000) for facial coding as for eye tracking.
fEMG
EMG stands for electromyography, the measurement of the electrical activity of muscles. Whenever a muscle contracts, it generates a small amount of electricity that can be detected with some fairly simple electrodes. Muscle movement can be very small; you may not see the muscle move, but you can measure it.
This means that some of the most interesting EMG measurements come from the movement of muscles in the face, or fEMG. Facial coding uses algorithms to take a good guess at what the person is feeling, but with fEMG you actually measure the facial muscles themselves and can thereby assess the emotion more accurately. There is muscle activity in the face that a video won’t detect but that fEMG recordings will. This means that with fEMG you can pick up on emotions that are not obviously displayed through facial coding alone.
When would you use facial coding or fEMG?
Well, let’s say you have created some new videos for the careers/employment page of your company’s website. The videos have real people who work at the company talking about how they came to be an employee, and what it is they like about working at the company. You want to know if people like and resonate with the videos. Facial coding and, even better, fEMG, would help you measure what people are feeling, and even tell you which parts of the video are eliciting which emotions.
fEMG equipment and software are expensive and not easy to learn how to use. For this reason, you will probably want to start by bringing in a vendor rather than using this on your own.
EEG (Electroencephalography)
You can directly measure the electrical activity of the brain by placing electrodes on the scalp. EEG devices measure the electrical activity generated by neurons.
EEG measures electrical changes on the surface of the brain — not deep within particular brain structures. This means that EEG can’t tell you that a particular part of the brain is active. It can only tell you when there is more or less brain activity. You would need to use more sophisticated methods, such as fMRI (functional Magnetic Resonance Imaging) to study more specific brain activity. fMRI equipment is very large and very expensive, which is why only research and medical institutions use them. In contrast, EEG is inexpensive.
EEG measures whether a person is engaged and paying attention. EEG measurements are particularly good at showing you activity by seconds or even parts of a second. Let’s go back to the example of the user test to measure the impact of the employee story videos at the careers/jobs page of the corporate website. Are the videos interesting? Do people pay attention while watching them? Exactly which parts of the videos are engaging? EEG can tell you this.
When I was in graduate school and doing EEG research, we had to use electrodes and gel to get EEG readings, but now there are easier ways. You can place a cap on someone’s head, kind of like a swim cap, with the electrodes built into the cap.
Some devices are like headsets rather than swim caps:
EEG devices range from the inexpensive to the expensive. For example, Emotiv makes a $299 EEG headset. You will probably, however, want to get a higher end version for $799, and then you will need a subscription for the software ($99 a month).
It can take a while to learn how to accurately read EEG data, so, again, it might be better to start by bringing in a vendor who has all the equipment and know-how until you learn.
It is common to combine multiple methods of biometrics together to help with the accuracy and interpretation of the results.
Although biometrics and neuro-measurements don’t tell the whole story, the data that we get from biometrics and neuro-measurements is more accurate than self-reporting. As the tools become easier to use and researchers get used to using them, they will become more common. We may even get to the point where we stop using the think-aloud technique altogether, although I don’t think we are there yet!
Takeaways
If you haven’t already researched biometrics for your user testing projects, now is a good time to check out these measurements as an addition to your current testing.
Pick a modality and/or a vendor and do a trial project.
If you are in charge of user-testing budgets, add in some biometrics to your budgeting process for the next year or two so you can get started.
This article is part of the UX design series sponsored by Adobe. Adobe XD is made for a fast and fluid UX design process, as it lets you go from idea to prototype faster. Design, prototype and share — all in one app. You can check out more inspiring projects created with Adobe XD on Behance, and also sign up for the Adobe experience design newsletter to stay updated and informed on the latest trends and insights for UX/UI design.
So you’re designing a new website or online store, and you need a web developer. You might need them to develop a site from scratch. Or maybe you just need them to work through some tweaks, changes, issues, or extra functionality.
Either way, your relationship with your web developer can be difficult to manage. I’m a developer, so I know that there are so, so many ways the relationship can fall apart:
Missed deadlines
Lack of communication
Slow communication
No communication
Developer over-promises
Developer under-delivers
Developer disappears
Loosely defined scope
Lack of protocol for small assumptions/decisions
Bugs or issues don’t get fixed
Practically every designer I work with has shared a horror story involving one of those things. In order to avoid becoming a horror story myself, I’ve developed a handy list of pre-kickoff discussion items to help us avoid these kinds of issues.
Before we get into it, let’s be clear: This is not a cure-all for all designer/developer relationships. At the end of the day, it’s still a human relationship—it’s complicated. But I’ve found that an open conversation about these items can start a project off on the right foot.
1. How Will We Communicate?
How will you communicate while working on the project? Slack? Phone calls? Texts? Emails? PM software? Just as importantly: How often will you communicate? Every day? Once a week? At kickoff, and then not again until QA? If you’re doing a daily check-in, will it be a two-sentence email, or a 15-minute phone call? What’s the plan in case of emergencies?
More communication does not always equal better communication
There are no wrong answers here, as long as you set expectations at the beginning. But remember: More communication does not always equal better communication.
Why This Matters
You want to have a good rapport with your developer, and to accomplish that, you need an established mode of communication. Usually a phone call is helpful to develop an initial personal connection, and to make sure it’s a good personality fit.
During development, work to strike a balance between checking in too much and too little. Too much and you’re micro-managing. Too little and the developer might not stay on track. It’s best to set the expectations at the top and stick to them.
2. How Will You Manage the Project?
Where are the files and login credentials the developer will need? Where will you track tasks, milestones, and deadlines? What software will you use? Basecamp? Trello? Asana? A spreadsheet or Google Doc? Basically, define the central hub for everything related to the project.
Why This Matters
During the project, your project management and communication should be centralized and trackable. A lot of time can be lost in the back-and-forth of looking for files, check-ins, updates, progress, questions, decisions, etc. That’s why it’s important to establish where the developer can find everything they need.
3. Who’s Calling the Shots?
Are you the final decision maker on the project? Is there a UI/UX team involved? Is there anyone else who has input on decisions? Is there a marketing team or a manager who wants to weigh in on decisions? Is anyone else other than you going to be giving direction directly to the developer? When does the client come in, and how many decisions does the client get to make? Will the client have direct communication with the developer?
Why This Matters
You don’t want to backpedal on development, or have your developer redo work. To avoid that, it’s important that every stakeholder is aware of all relevant decisions—and that each decision is recorded in a single, central location.
4. How Should the Developer Handle Assumptions and Small Decisions?
How much freedom does the developer have while interpreting designs? Should they build the website pixel-perfect according to the designs, or should they make small assumptions around consistency and reusability of sections? If you’ve designed a responsive site, have you designed for all breakpoints? Have you provided notes regarding animations, transitions, and hover effects? Have you designed validation states for fields? (i.e. the popups: “Password invalid.” or “Username doesn’t exist”.) If you haven’t, is the developer free to make decisions or suggestions?
Why This Matters
Very often, designers are dissatisfied when a website doesn’t closely match the designs—or conversely, when the site too closely follows the designs, to the detriment of its performance or the project’s timeline. At the beginning, define your intended level of detail. It makes for a much smoother QA process.
5. What is the timeline?
What’s the hard deadline for the project, and what’s the soft deadline? Is there a major press hit happening that the site needs to be launched for? If the deadline is ambitious, is there a way to launch it in phases? What’s the expectation for responding to quick changes? One week turnaround? Less than an hour?
Why This Matters
It really doesn’t help to create artificial hard deadlines… honesty is the best policy
If there’s a hard deadline, make the developer aware of it, and make sure to leave time for proper testing. After the site launches, know that most developers can’t be on call at all hours to make changes. Waiting for a developer to make a fix can be frustrating, but even small requests require maintaining version control, launching the development environment, connecting to the server, deploying to the production site, etc. Determine ahead of time how long you expect fixes and changes to take, and take stock of the priority level of each task.
Also, it really doesn’t help to create artificial hard deadlines. Just be transparent with your developer and trust them to deliver accordingly. Again, you’re building a relationship, here. Honesty is the best policy.
6. What’s the Structure of the Scope, Contract, and Payment?
What’s the project fee? What’s the benchmark for the end of the project? What is included in the scope of the project? When does payment go out? Are you hiring the developer to do the project at an hourly or fixed rate?
Why This Matters
The last thing you want is a developer getting a site 95% of the way there, and then not launching the project due to a discrepancy in the scope/contract/payment.
Conclusion
Overall, setting expectations and communicating are the critical things here. It can feel a bit silly to discuss how you’re going to talk to each other during a project, especially if you already have a good rapport. But it’s always good to set expectations ahead of time, so you don’t end up inside your own horror story.
To serve the needs of different types of users, a web application usually requires more code than it would for a single type of user, since it has to handle and adapt to different scenarios and use cases, and that leads to new features and functionality. When this happens, it’s reasonable to expect the performance of the app to dwindle as the codebase grows.
Code splitting is a technique where an application only loads the code it needs at the moment, and nothing more. For example, when a user navigates to a homepage, there is probably no need to load the code that powers a backend dashboard. With code splitting, we can ensure that the code for the homepage is the only code that loads, and that the cruft stays out for more optimal loading.
Code splitting is possible in a React application using React Loadable. It provides a higher-order component that can be set up to dynamically import specific components at specific times.
Component splitting
There are situations when we might want to conditionally render a component based on a user event, say when a user logs in to an account. A common way of handling this is to make use of state — the component gets rendered depending on the logged in state of the app. We call this component splitting.
We can have an openHello state in the App component with an initial value of false. Then we can have a button used to toggle the state, either display the component or hide it. We’ll throw that into a handleHello method, which looks like this:
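The snippet isn’t included here, but a minimal sketch of that component might look like this, assuming a ./Hello file and a simple toggle button (both assumptions):

import React from 'react';
import Hello from './Hello';

class App extends React.Component {
  state = { openHello: false };

  // Flip the boolean that controls whether Hello is rendered
  handleHello = () => {
    this.setState({ openHello: !this.state.openHello });
  };

  render() {
    return (
      <div>
        <button onClick={this.handleHello}>Toggle Hello</button>
        {this.state.openHello ? <Hello /> : null}
      </div>
    );
  }
}

export default App;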
Take a quick peek in DevTools and note the Network tab:
Now, let’s refactor to make use of LoadableHello. Instead of importing the component straight up, we will do the import using Loadable. We’ll start by installing the react-loadable package:
## yarn, npm or however you roll
yarn add react-loadable
Now that’s been added to our project, we need to import it into the app:
import Loadable from 'react-loadable';
We’ll use Loadable to create a “loading” component which will look like this:
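A minimal version based on the react-loadable API, with placeholder fallback markup:

const LoadableHello = Loadable({
  // Dynamically import the Hello component we created earlier
  loader: () => import('./Hello'),
  // Fallback UI rendered while the import is in flight
  loading() {
    return <div>Loading...</div>;
  }
});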
We pass a function as a value to loader which returns the Hello component we created earlier, and we make use of import() to dynamically import it. The fallback UI we want to render before the component is imported is returned by loading(). In this example, we are returning a div element, though we can also put a component in there instead if we want.
Now, instead of inputting the Hello component directly in the App component, we’ll put LoadableHello to the task so that the conditional statement will look like this:
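Reconstructed from the description, that condition is essentially:

{this.state.openHello ? <LoadableHello /> : null}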
Check this out — now our Hello component loads into the DOM only when the state is toggled by the button:
And that’s component splitting: the ability for one component to load another asynchronously!
Route-based splitting
Alright, so we saw how Loadable can be used to load components via other components. Another way to go about it is route-based splitting. The difference here is that components are loaded according to the current route.
So, say a user is on the homepage of an app and clicks onto a Hello view with a route of /hello. The components that belong on that route would be the only ones that load. It’s a fairly common way of handling splitting in many apps and generally works well, especially in less complex applications.
Here’s a basic example of defined routes in an app. In this case, we have two routes: (1) Home (/) and (2) Hello (/hello).
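The article’s snippet isn’t reproduced here; a sketch using React Router (assuming react-router-dom and components in ./Home and ./Hello) could look like this:

import React from 'react';
import { BrowserRouter as Router, Switch, Route } from 'react-router-dom';
import Home from './Home';
import Hello from './Hello';

const App = () => (
  <Router>
    <Switch>
      <Route exact path="/" component={Home} />
      <Route path="/hello" component={Hello} />
    </Switch>
  </Router>
);

export default App;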
As it stands, the code for all the components is loaded when a user switches paths, even though we only want to render the Hello component on that path. Sure, it’s not a huge deal if we’re talking a few components, but it certainly could be as more components are added and the application grows in size.
Using Loadable, we can import only the component we want by creating a loadable component for each:
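Mirroring the earlier pattern, that might look like this for each route component (again a sketch):

const LoadableHome = Loadable({
  loader: () => import('./Home'),
  loading() {
    return <div>Loading...</div>;
  }
});

const LoadableHello = Loadable({
  loader: () => import('./Hello'),
  loading() {
    return <div>Loading...</div>;
  }
});

// ...and the routes point at the loadable versions:
// <Route exact path="/" component={LoadableHome} />
// <Route path="/hello" component={LoadableHello} />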
Now, we serve the right code at the right time. Thanks, Loadable!
What about errors and delays?
If the imported component loads quickly, there is no need to flash a “loading” component. Thankfully, Loadable has the ability to delay the loading component from showing. This is helpful for preventing it from displaying too early, where it feels silly, and instead showing it after a notable period of time has passed, when we would have expected it to have loaded.
To do that, our sample Loadable component will look like this:
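Assuming a separate Loader component handles the fallback UI:

const LoadableHello = Loadable({
  loader: () => import('./Hello'),
  // Rendered while Hello is loading
  loading: Loader,
  // Wait 300ms before showing the loading component at all
  delay: 300
});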
Here, the Hello component is imported via loader, and the Loader component is passed as the value to loading. By default, delay is set to 200ms, but we’ve set ours a little longer at 300ms.
Now let’s add a condition to the Loader component that tells it to display the loader only after the 300ms delay we set has passed:
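react-loadable hands the loading component a pastDelay prop for exactly this purpose:

const Loader = (props) => {
  // `pastDelay` becomes true once the configured delay has elapsed
  if (props.pastDelay) {
    return <h2>Loading...</h2>;
  } else {
    return null;
  }
};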
So the Loader component will only show its message if the Hello component has not loaded after 300ms.
react-loadable also gives us an error prop, which we can use to surface errors that are encountered. And, because it is a prop, we can let it spit out whatever we want.
const Loader = (props) => {
  if (props.error) {
    // The dynamic import failed
    return <div>Oh no, something went wrong!</div>;
  } else if (props.pastDelay) {
    // Still loading after the 300ms delay has passed
    return <h2>Loading...</h2>;
  } else {
    return null;
  }
};
Note that we’re actually combining the delay and error handling together! If there’s an error off the bat, we’ll display some messaging. If there’s no error, but 300ms have passed, then we’ll show a loader. Otherwise, load up the Hello component, please!
That’s a wrap
Isn’t it great that we have more freedom and flexibility in how we load and display code these days? Code splitting — either by component or by route — is the sort of thing React was designed to do. React allows us to write modular components that contain isolated code and we can serve them whenever and wherever we want and allow them to interact with the DOM and other components. Very cool!
Hopefully this gives you a good feel for code splitting as a concept. As you get your hands dirty and start using it, it’s worth checking out more in-depth posts to get a deeper understanding of the concept.
HLS stands for HTTP Live Streaming. It’s an adaptive bitrate streaming protocol developed by Apple. One of those sentences to casually drop at any party. Uh. Back on track: HLS allows you to specify a playlist with multiple video sources in different resolutions. Based on the available bandwidth, playback can switch between these sources, allowing adaptive playback.
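To give a feel for it, here’s a hypothetical HLS master playlist (.m3u8) with two renditions; the bandwidth figures and file names are made up:

#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=640x360
stream-360p.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2800000,RESOLUTION=1280x720
stream-720p.m3u8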
This is an interesting journey where the engineering team behind Kitchen Stories wanted to switch away from the Vimeo player (160 kB), but still use Vimeo as a video host because they provide direct video links with a Pro plan. Instead, they are using the native <video> element, a library for handling HLS, and a wrapper element to give them a little bonus UX.
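The setup isn’t shown here, but with a library like hls.js (whether that’s the one they used is an assumption, as is the placeholder URL), the wiring typically looks something like this:

import Hls from 'hls.js';

const video = document.querySelector('video');
const source = 'https://example.com/stream.m3u8'; // placeholder stream URL

if (Hls.isSupported()) {
  // Most browsers: feed the HLS stream through Media Source Extensions
  const hls = new Hls();
  hls.loadSource(source);
  hls.attachMedia(video);
} else if (video.canPlayType('application/vnd.apple.mpegurl')) {
  // Safari plays HLS natively
  video.src = source;
}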
This video stuff is hard to keep up with! There is another new format called AV1 that is apparently a big deal as YouTube and Netflix are both embracing it. Andrey Sitnik wrote about it here:
Even though the AV1 codec is still considered experimental, you can already leverage its high-quality, low-bitrate features for a sizable chunk of your web audience (users with current versions of Chrome and Firefox). Of course, you would not want to leave users of other browsers hanging, but the attributes for <video> and <source> tags make implementing this logic easy, and in pure HTML, you don’t need to go to great lengths to detect user agents with JavaScript.
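A rough sketch of that fallback markup; the codec strings and file names here are illustrative:

<video controls>
  <!-- Browsers that can decode AV1 pick this source -->
  <source src="video-av1.mp4" type="video/mp4; codecs=av01.0.05M.08">
  <!-- Everyone else falls back to H.264 -->
  <source src="video-h264.mp4" type="video/mp4; codecs=avc1.4D401E">
</video>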
That doesn’t even mention HLS, but I suppose that’s because HLS is a streaming protocol, which still needs to stream in some sort of format.
I wouldn’t say the term “CSS algorithm” has widespread usage yet, but I think Lara Schenck might be onto something. She defines it as:
a well-defined declaration or set of declarations that produces a specific styling output
So a CSS algorithm isn’t really a component where there is some parent element and whatever it needs inside, but a CSS algorithm could involve components. A CSS algorithm isn’t just some tricky key/value pair or calculated output — but it could certainly involve those things.
The way I understand it is that they are little mini systems. In a recent post, she describes a situation involving essentially two fixed header bars and needing to deal with them in different situations. In this example, the page can be in different states (e.g. a logged-in state has a position: fixed; bar), and that affects not only the header but the content area as well. Dealing with all that together is a CSS algorithm. It’s probably the way we all work in CSS already, but now have a term to describe it. This particular example involves some CSS custom properties, a state-based class, two selectors, and a media query. Classic front-end developer stuff.
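As a purely illustrative sketch of such an algorithm (the class names, custom properties, and sizes are all made up, not Lara’s actual code):

/* One source of truth for the bar heights */
:root {
  --header-height: 60px;
  --admin-bar-height: 0px;
}

/* State-based class: logged-in pages gain a second fixed bar */
.is-logged-in {
  --admin-bar-height: 32px;
}

.site-header {
  position: fixed;
  top: var(--admin-bar-height);
  height: var(--header-height);
}

/* Content clears whatever combination of bars is present */
.content {
  padding-top: calc(var(--header-height) + var(--admin-bar-height));
}

@media (max-width: 600px) {
  :root {
    --header-height: 48px;
  }
}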