Converting Plain Text To Encoded HTML With Vanilla JavaScript
When copying text from a website to your device’s clipboard, there’s a good chance that you will get the formatted HTML when pasting it. Some apps and operating systems have a “Paste Special” feature that will strip those tags out for you to maintain the current style, but what do you do if that’s unavailable?
Same goes for converting plain text into formatted HTML. One of the closest ways we can convert plain text into HTML is writing in Markdown as an abstraction. You may have seen examples of this in many comment forms in articles just like this one. Write the comment in Markdown and it is parsed as HTML.
Even better would be no abstraction at all! You may have also seen (and used) a number of online tools that take plainly written text and convert it into formatted HTML. The UI makes the conversion and previews the formatted result in real time.
Providing a way for users to author basic web content — like comments — without knowing even the first thing about HTML, is a novel pursuit as it lowers barriers to communicating and collaborating on the web. Saying it helps “democratize” the web may be heavy-handed, but it doesn’t conflict with that vision!
We can build a tool like this ourselves. I’m all for using existing resources where possible, but I’m also for demonstrating how these things work and maybe learning something new in the process.
Defining The Scope
There are plenty of assumptions and considerations that could go into a plain-text-to-HTML converter. For example, should we assume that the first line of text entered into the tool is a title that needs a corresponding <h1> tag? Is each new line truly a paragraph, and how does linking content fit into this?
Again, the idea is that a user should be able to write without knowing Markdown or HTML syntax. This is a big constraint, and there are far too many HTML elements we might encounter, so it’s worth knowing the context in which the content is being used. For example, if this is a tool for writing blog posts, then we can limit the scope of which elements are supported based on those that are commonly used in long-form content:
<h1>, <p>, <a>, and <img>. In other words, it will be possible to include top-level headings, body text, linked text, and images. There will be no support for bulleted or ordered lists, tables, or any other elements for this particular tool.
The front-end implementation will rely on vanilla HTML, CSS, and JavaScript to establish a small form with a simple layout and functionality that converts the text to HTML. There is a server-side aspect to this if you plan on deploying it to a production environment, but our focus is purely on the front end.
Looking At Existing Solutions
There are existing ways to accomplish this. For example, some libraries offer a WYSIWYG editor. Import a library like TinyMCE with a single <script> tag and you're good to go. WYSIWYG editors are powerful and support all kinds of formatting, even applying CSS classes to content for styling.
But TinyMCE isn’t the most efficient package at about 500 KB minified. That’s not a criticism as much as an indication of how much functionality it covers. We want something more “barebones” than that for our simple purpose. Searching GitHub surfaces more possibilities. The solutions, however, seem to fall into one of two categories:
- The input accepts plain text, but the generated HTML only supports the HTML <p> and <br> elements.
- The input converts plain text into formatted HTML, but by "plain text," the tool seems to mean "Markdown" (or a variety of it) instead. The txt2html Perl module (from 1994!) would fall under this category.
Even if a perfect solution for what we want was already out there, I’d still want to pick apart the concept of converting text to HTML to understand how it works and hopefully learn something new in the process. So, let’s proceed with our own homespun solution.
Setting Up The HTML
We'll start with the HTML structure for the input and output. For the input element, we're probably best off using a <textarea>. For the output element and related styling, choices abound. The following is merely one example with some very basic CSS to place the input <textarea> on the left and an output <div> on the right:
See the Pen Base Form Styles [forked] by Geoff Graham.
You can further develop the CSS, but that isn’t the focus of this article. There is no question that the design can be prettier than what I am providing here!
Capture The Plain Text Input
We'll set an onkeyup event handler on the <textarea> to call a JavaScript function called convert() that does what it says: convert the plain text into HTML. The conversion function should accept one parameter, a string, for the user's plain text input entered into the <textarea> element:
<textarea onkeyup='convert(this.value);'></textarea>
onkeyup is a better choice than onkeydown in this case, as onkeyup will call the conversion function after the user completes each keystroke, as opposed to before it happens. This way, the output, which is refreshed with each keystroke, always includes the latest typed character. If the conversion is triggered with an onkeydown handler, the output will exclude the most recent character the user typed. This can be frustrating when, for example, the user has finished typing a sentence but cannot yet see the final punctuation mark, say a period (.), in the output until typing another character first. This creates the impression of a typo, glitch, or lag when there is none.
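As a side note, the inline onkeyup attribute is just one way to wire this up. If you prefer keeping markup and behavior separate, an equivalent listener can be attached in JavaScript instead. This is a minimal sketch of my own, assuming the <textarea> is given an id of "input" (which is not part of the original markup):

// Attach the same keyup behavior without an inline attribute
document.querySelector("#input").addEventListener("keyup", (event) => {
  convert(event.target.value);
});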
In JavaScript, the convert() function has the following responsibilities:
- Encode the input in HTML.
- Process the input line-by-line and wrap each individual line in either an <h1> or <p> tag.
- Process the output of the transformations as a single string, wrap URLs in HTML <a> tags, and replace image file names with <img> elements.
And from there, we display the output. We can create separate functions for each responsibility. Let’s name them accordingly:
- html_encode()
- convert_text_to_HTML()
- convert_images_and_links_to_HTML()
Each function accepts one parameter, a string, and returns a string.
Encoding The Input Into HTML
Use the html_encode() function to HTML encode/sanitize the input. HTML encoding refers to the process of escaping or replacing certain characters in a string input to prevent users from inserting their own HTML into the output. At a minimum, we should replace the following characters:
- < with &lt;
- > with &gt;
- & with &amp;
- ' with &#39;
- " with &quot;
JavaScript does not provide a built-in way to HTML encode input as other languages do. For example, PHP has htmlspecialchars(), htmlentities(), and strip_tags() functions. That said, it is relatively easy to write our own function that does this, which is what we'll use the html_encode() function we defined earlier for:
function html_encode(input) {
  // Let the browser do the encoding: text assigned via innerText
  // comes back HTML-encoded when read via innerHTML
  const textArea = document.createElement("textarea");
  textArea.innerText = input;
  // innerText turns newlines into <br>; convert them back to \n
  return textArea.innerHTML.split("<br>").join("\n");
}
HTML encoding of the input is a critical security consideration. It prevents unwanted scripts or other HTML manipulations from getting injected into our work. Granted, front-end input sanitization and validation are both merely deterrents because bad actors can bypass them. But we may as well make them work a little harder.
As long as we are on the topic of securing our work, make sure to HTML-encode the input on the back end, where the user cannot interfere. At the same time, take care not to encode the input more than once. Encoding text that is already HTML-encoded will break the output functionality. The best approach for back-end storage is for the front end to pass the raw, unencoded input to the back end, then ask the back end to HTML-encode the input before inserting it into a database.
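Since there is no DOM on the server, the <textarea> trick used by html_encode() above won't work there. A plain string-replacement version covers the same five characters. This is a rough sketch of my own, not production-grade sanitization, and the function name is made up:

// Order matters: encode & first so the other entities aren't double-encoded
function html_encode_server(input) {
  return input
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/'/g, "&#39;")
    .replace(/"/g, "&quot;");
}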
That said, this only accounts for sanitizing and storing the input on the back end. We still have to display the encoded HTML output on the front end. There are at least two approaches to consider:
- Convert the input to HTML after HTML-encoding it and before it is inserted into a database. This is efficient, as the input only needs to be converted once. However, this is also an inflexible approach, as updating the HTML becomes difficult if the output requirements happen to change in the future.
- Store only the HTML-encoded input text in the database and dynamically convert it to HTML before displaying the output for each content request. This is less efficient, as the conversion will occur on each request. However, it is also more flexible since it's possible to update how the input text is converted to HTML if requirements change.
Applying Semantic HTML Tags
Let's use the convert_text_to_HTML() function we defined earlier to wrap each line in its respective HTML tag, which is going to be either <h1> or <p>. To determine which tag to use, we will split the text input on the newline character (\n) so that the text is processed as an array of lines rather than a single string, allowing us to evaluate them individually.
function convert_text_to_HTML(txt) {
  // Output variable
  let out = '';
  // Split text at the newline character into an array
  const txt_array = txt.split("\n");
  // Get the number of lines in the array
  const txt_array_length = txt_array.length;
  // Variable to keep track of the (non-blank) line number
  let non_blank_line_count = 0;
  for (let i = 0; i < txt_array_length; i++) {
    // Get the current line
    const line = txt_array[i];
    // Continue if a line contains no text characters
    if (line === ''){
      continue;
    }
    non_blank_line_count++;
    // If a line is the first line that contains text
    if (non_blank_line_count === 1){
      // ...wrap the line of text in a Heading 1 tag
      out += `<h1>${line}</h1>`;
    // ...otherwise, wrap the line of text in a Paragraph tag.
    } else {
      out += `<p>${line}</p>`;
    }
  }
  return out;
}
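To see what the function produces, here is a quick illustrative call (the input string is a made-up sample):

convert_text_to_HTML("My Title\n\nFirst paragraph.");
// Returns: "<h1>My Title</h1><p>First paragraph.</p>"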
In short, this little snippet loops through the array of split text lines and ignores lines that do not contain any text characters. From there, we can evaluate whether a line is the first one in the series. If it is, we slap an <h1> tag on it; otherwise, we mark it up in a <p> tag.
This logic could be used to account for other types of elements that you may want to include in the output. For example, perhaps the second line is assumed to be a byline that names the author and links up to an archive of all author posts.
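To illustrate, here is a hedged sketch of what that could look like — the <address> element and the /author/ URL are placeholder choices of mine, not part of the demo:

// Inside the loop, extend the if/else chain with a second-line case
if (non_blank_line_count === 1) {
  out += `<h1>${line}</h1>`;
} else if (non_blank_line_count === 2) {
  // Treat the second line of text as a byline linking to an author archive
  out += `<address>By <a href="/author/">${line}</a></address>`;
} else {
  out += `<p>${line}</p>`;
}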
Tagging URLs And Images With Regular Expressions
Next, we’re going to create our convert_images_and_links_to_HTML()
function to encode URLs and images as HTML elements. It’s a good chunk of code, so I’ll drop it in and we’ll immediately start picking it apart together to explain how it all works.
function convert_images_and_links_to_HTML(string){
  let urls_unique = [];
  let images_unique = [];
  // Match URLs and image file names (note the escaped characters)
  const urls = string.match(/https*:\/\/[^\s<),]+[^\s<),.]/gmi) ?? [];
  const imgs = string.match(/[^"'>\s]+\.(jpg|jpeg|gif|png|webp)/gmi) ?? [];
  const urls_length = urls.length;
  const images_length = imgs.length;
  // Keep only unique URL matches
  for (let i = 0; i < urls_length; i++){
    const url = urls[i];
    if (!urls_unique.includes(url)){
      urls_unique.push(url);
    }
  }
  // Keep only unique image matches
  for (let i = 0; i < images_length; i++){
    const img = imgs[i];
    if (!images_unique.includes(img)){
      images_unique.push(img);
    }
  }
  const urls_unique_length = urls_unique.length;
  const images_unique_length = images_unique.length;
  // Wrap URLs (that are not images) in anchor tags
  for (let i = 0; i < urls_unique_length; i++){
    const url = urls_unique[i];
    if (images_unique_length === 0 || !images_unique.includes(url)){
      const a_tag = `<a href="${url}" target="_blank">${url}</a>`;
      string = string.replace(url, a_tag);
    }
  }
  // Replace image file names with linked <img> elements
  for (let i = 0; i < images_unique_length; i++){
    const img = images_unique[i];
    const img_tag = `<img src="${img}" alt="">`;
    const img_link = `<a href="${img}">${img_tag}</a>`;
    string = string.replace(img, img_link);
  }
  return string;
}
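Before we pick the code apart, here is the kind of transformation it performs; the file name and URL below are made up for illustration:

convert_images_and_links_to_HTML("See photo.jpg and https://example.com");
// Returns: 'See <a href="photo.jpg"><img src="photo.jpg" alt=""></a>
//           and <a href="https://example.com" target="_blank">https://example.com</a>'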
Unlike the convert_text_to_HTML() function, here we use regular expressions to identify the terms that need to be wrapped and/or replaced with <a> or <img> tags. We do this for a couple of reasons:
- The previous convert_text_to_HTML() function handles text that would be transformed to the HTML block-level elements <h1> and <p>.
- On the other hand, URLs in the text input are often included in the middle of a sentence rather than on a separate line. Images that occur in the input text are often included on a separate line, but not always. While you could identify text that represents URLs and images by processing the input line-by-line — or even word-by-word, if necessary — it is easier to use regular expressions and process the entire input as a single string rather than by individual lines.
Regular expressions, though they are powerful and the appropriate tool to use for this job, come with a performance cost, which is another reason to use each expression only once for the entire text input.
Remember: All the JavaScript in this example runs each time the user types a character, so it is important to keep things as lightweight and efficient as possible.
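If the per-keystroke work ever becomes noticeable — say, with very large inputs — one common mitigation is to debounce the handler so the conversion only runs once the user pauses typing. This is not part of the demo, and the 300ms delay is an arbitrary value for illustration:

let debounce_timer;
function convert_debounced(input_string) {
  // Restart the timer on every keystroke; convert after a 300ms pause
  clearTimeout(debounce_timer);
  debounce_timer = setTimeout(() => convert(input_string), 300);
}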
I also want to make a note about the variable names in our convert_images_and_links_to_HTML() function. images (plural), image (singular), and link are reserved words in JavaScript. Consequently, imgs, img, and a_tag were used for naming. Interestingly, these specific reserved words are not listed on the relevant MDN page, but they are on W3Schools.
We're using the String.prototype.match() function for each of the two regular expressions, then storing the results for each call in an array. From there, we use the nullish coalescing operator (??) on each call so that, if no matches are found, the result will be an empty array. If we do not do this and no matches are found, the result of each match() call will be null and will cause problems downstream.
const urls = string.match(/https*:\/\/[^\s<),]+[^\s<),.]/gmi) ?? [];
const imgs = string.match(/[^"'>\s]+\.(jpg|jpeg|gif|png|webp)/gmi) ?? [];
Next up, we filter the arrays of results so that each array contains only unique results. This is a critical step. If we don't filter out duplicate results and the input text contains multiple instances of the same URL or image file name, then we break the HTML tags in the output. JavaScript does not provide a simple, built-in method to get unique items in an array that's akin to the PHP array_unique() function.
The code snippet works around this limitation using an admittedly ugly but straightforward procedural approach. The same problem can be solved using a more functional approach if you prefer. There are many articles on the web describing various ways to filter a JavaScript array in order to keep only the unique items.
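One such functional approach leans on the Set object, which stores unique values by definition. Spreading a Set back into an array is a drop-in replacement for the two deduplication loops above:

// Any duplicate matches are dropped when the arrays pass through a Set
const urls_unique = [...new Set(urls)];
const images_unique = [...new Set(imgs)];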
We’re also checking if the URL is matched as an image before replacing a URL with an appropriate tag and performing the replacement only if the URL doesn’t match an image. We may be able to avoid having to perform this check by using a more intricate regular expression. The example code deliberately uses regular expressions that are perhaps less precise but hopefully easier to understand in an effort to keep things as simple as possible.
And, finally, we're replacing image file names in the input text with <img> tags that have the src attribute set to the image file name. For example, my_image.png in the input is transformed into <img src="my_image.png" alt=""> in the output. We wrap each <img> tag with an <a> tag that links to the image file and opens it in a new tab when clicked.
There are a couple of benefits to this approach:
- In a real-world scenario, you will likely use a CSS rule to constrain the size of the rendered image. By making the images clickable, you provide users with a convenient way to view the full-size image.
- If the image is not a local file but is instead a URL to an image from a third party, this is a way to implicitly provide attribution. Ideally, you should not rely solely on this method but, instead, provide explicit attribution underneath the image in a <figcaption>, <cite>, or similar element. But if, for whatever reason, you are unable to provide explicit attribution, you are at least providing a link to the image source.
It may go without saying, but “hotlinking” images is something to avoid. Use only locally hosted images wherever possible, and provide attribution if you do not hold the copyright for them.
Before we move on to displaying the converted output, let's talk a bit about accessibility, specifically the image alt attribute. The example code I provided does add an alt attribute in the conversion but does not populate it with a value, as there is no easy way to automatically calculate what that value should be. An empty alt attribute can be acceptable if the image is considered "decorative," i.e., purely supplementary to the surrounding text. But one may argue that there is no such thing as a purely decorative image.
That said, I consider this to be a limitation of what we’re building.
Displaying The Output HTML
We’re at the point where we can finally work on displaying the HTML-encoded output! We’ve already handled all the work of converting the text, so all we really need to do now is call it:
function convert(input_string) {
  // "output" refers to the output element, e.g., <div id="output">
  output.innerHTML = convert_images_and_links_to_HTML(convert_text_to_HTML(html_encode(input_string)));
}
If you would rather display the output string as raw HTML markup, use a <pre> tag as the output element instead of a <div>:
<pre id='output'></pre>
The only thing to note about this approach is that you would target the <pre> element's textContent instead of innerHTML:
function convert(input_string) {
output.textContent = convert_images_and_links_to_HTML(convert_text_to_HTML(html_encode(input_string)));
}
Conclusion
We did it! We built the same sort of copy-paste tool that converts plain text on the spot. In this case, we've configured it so that plain text entered into a <textarea> is parsed line-by-line and encoded into HTML that we format and display inside another element.
See the Pen Convert Plain Text to HTML (PoC) [forked] by Geoff Graham.
We were even able to keep the solution fairly simple, i.e., vanilla HTML, CSS, and JavaScript, without reaching for a third-party library or framework. Does this simple solution do everything a ready-made tool like a framework can do? Absolutely not. But a solution as simple as this is often all you need: nothing more and nothing less.
As far as scaling this further, the code could be modified to POST what's entered into the <textarea> using a PHP script or the like. That would be a great exercise, and if you do it, please share your work with me in the comments because I'd love to check it out.
References
- “How to HTML-encode a String” (W3Docs)
- “How to escape & unescape HTML characters in string in JavaScript” (Educative.io)
- “How to get all unique values (remove duplicates) in a JavaScript array?” (GeeksforGeeks)
- “Getting Unique Array Values in Javascript and Typescript,” Chris Engelsma
- “Threats of Using Regular Expressions in JavaScript,” Dulanka Karunasena
How to Write World-Beating Web Content
Writing for the web is different from all other formats. We typically do not read to any real depth on the web; we scan-read.
A WordPress Agency Advises On SEO Migrations
When you've invested a lot of time and resources in a new website design, the last thing you want is to start optimizing the restructured site from scratch. In fact, nothing negates the enthusiasm of a new platform migration more than the danger of tanking SERP rankings.
Most times, site owners are reluctant to migrate their already established websites. However, if there is no workaround, each step needs to be well thought out and measured. Without proper planning and execution, the transition can lead to significant losses in organic traffic, revenue, and market share.
SEO migrations imply serious changes and you need a really, really good reason to bite the bullet. It is usually inevitable if you decide to change your business name, offerings, server, CMS platform, add a mobile version, or add a security certificate.
Common pitfalls include failing to create a migration plan, overlooking information architecture, and implementing sloppy 301 redirects. Content pruning, mobile responsiveness issues, slow page load times, and crawlability bugs are also common challenges. It’s crucial to set up analytics and tracking, prioritize XML sitemaps, and ensure that non-indexable content remains out of search engine indices. Post-migration monitoring is essential to spot and address issues promptly. By proactively addressing these risks, businesses can navigate website migrations successfully and maintain their SEO performance.
In this article, we will discuss the implications of site migration on SEO and illuminate the strategic steps that businesses can take, under the expert counsel of the WordPress agency DevriX, to ensure a seamless transition that not only preserves but enhances their digital visibility.
Site Migrations vs. SEO Migrations
Just so we are clear, these terms are not the same thing, although they are often used interchangeably.
Site Migrations
Different types of site migration include:
- Domain migration, where the website's domain name changes;
- Design migration, involving visual updates while retaining content and URLs;
- Platform migration, moving to a different CMS or e-commerce platform;
- URL structure migration, altering URL organization;
- Content migration, transferring and updating site content;
- Server migration, shifting to a new hosting environment for better performance or security.
Each type serves specific purposes, from rebranding efforts to technical optimizations, requiring careful planning and execution to ensure a smooth transition without compromising user experience or SEO.
SEO Migrations
SEO migrations are a core aspect of website migration, encompassing critical adjustments to maintain or enhance search engine visibility, traffic, and rankings. This process involves meticulously managing various elements such as URL structures, redirects, content optimization, and technical configurations to ensure seamless transitions while preserving or improving SEO performance.
How Does A Site Migration Affect SEO?
Embarking on a site migration is akin to orchestrating a complex symphony where every note, or in this case, every aspect, must be meticulously aligned to maintain the harmony of SEO. Understanding the impact of SEO migrations on a site’s rankings is paramount for businesses looking to evolve their online presence.
URL Changes and Link Structure
Changing URLs during a migration can disrupt the existing link structure. Links, both internal and external, contribute to a website’s authority. Implementing 301 redirects is crucial to seamlessly guide users and search engines from the old URLs to their new counterparts, preserving link equity.
Content Structure and Keywords
A shift in content structure can affect keyword relevance and page authority. Careful content mapping ensures that existing SEO value is retained and aligns with the new site architecture. This involves mapping old content to its equivalent or optimized version in the new structure and maintaining keyword relevance.
Technical Aspects and User Experience
Technical considerations, such as site speed, mobile responsiveness, and schema markup, play a pivotal role in SEO. A well-optimized website not only enhances user experience but also aligns with search engine algorithms. Prioritizing technical optimization ensures a smooth migration that positively impacts SEO.
Indexing Challenges
Mismanaged migrations can lead to indexing issues, where search engines struggle to crawl and index the new site. Proper handling of sitemaps, ensuring all relevant pages are included, and addressing potential crawl errors promptly are imperative to maintain visibility in search results.
Monitoring and Analysis
Post-migration, diligent monitoring is essential. Analyzing performance metrics, search engine rankings, and user behavior provides insights for ongoing optimization. Identifying anomalies and addressing them promptly safeguards the SEO health of the migrated site.
14 Steps for Successful SEO Migration
In case you didn’t already get the hint, site migrations are very clever traps. Navigating this intricate process demands a strategic approach and the expertise of a seasoned web developer to ensure a smooth transition. Let’s explore the essential steps for a successful SEO migration, where every move is calibrated to preserve and enhance digital visibility.
1. Thorough Site Audit
Start site migrations with a comprehensive site audit. Understand the current SEO landscape, identifying strengths, weaknesses, and areas for improvement. This audit serves as the foundation for crafting a tailored migration strategy aligned with business goals.
2. Strategic URL Mapping
URL changes are inherent in most migrations and require meticulous planning. Develop a strategic URL mapping plan, ensuring that redirects (especially 301 redirects) are implemented for old URLs to guide users and search engines smoothly to their new destinations. This step is critical for preserving link equity and maintaining SEO continuity.
3. Content Mapping and Optimization
Content is the lifeblood of SEO and is crucial for successful SEO migrations. Map existing content to the new site structure, optimizing for relevant keywords. Align meta tags, headers, and other on-page elements with SEO best practices. Ensure that the migration enhances content relevance and aligns seamlessly with the new architecture.
4. Technical Optimization
Address technical aspects to bolster SEO. Prioritize site speed, ensure swift loading times, and optimize for mobile responsiveness. Implement schema markup to enhance rich snippets in search results. Technical optimization contributes not only to SEO but also to overall user experience, a key factor in search engine algorithms.
5. Implement 301 Redirects
Implementing 301 (permanent) redirects is fundamental to SEO migration success. This step ensures that old URLs are seamlessly redirected to their new counterparts, preserving both user experience and link equity. It’s a critical strategy to communicate the structural changes to search engines effectively.
6. Update XML Sitemaps
Update and submit XML sitemaps to search engines, guiding their crawling and indexing processes. This step helps search engines understand the new site structure and ensures that all relevant pages are indexed. Regularly monitor crawl errors and address them promptly.
7. Create A Custom 404 Error Page
When carrying out an SEO migration, broken links and error pages are often inevitable due to URL changes. As a WordPress developer, understanding the root cause of these errors and coming up with a clever error page design is crucial for seamless website functionality and a good user experience. To effectively resolve this issue, consider utilizing the .htaccess file. Also, make sure you customize the design so that it helps visitors navigate to the Home page or other relevant sections of your website to locate the desired content. Look at it as a branding opportunity: it will not only soften the disappointment of an empty page and reduce the rate at which visitors leave, but it will also build trust in your brand by showing you have things under control.
8. Crawl the Staging Site
It’s crucial to thoroughly crawl the staging site before launch to identify and address any potential issues such as broken links or missing metadata. Employing tools like JetOctopus or Screaming Frog is recommended for this task, especially when dealing with password-protected or no-index-tagged staging sites. It’s essential to ensure that the staging site accurately mirrors the final version of the website to prevent any unexpected challenges or discrepancies after the launch. This proactive approach helps streamline the transition process and minimize the likelihood of post-launch surprises.
9. Monitor Post-Migration Metrics
Post-migration, closely monitor essential metrics, including organic traffic, keyword rankings, and user engagement. Analyzing these metrics provides insights into the impact of the migration on SEO performance. Swiftly address any anomalies to maintain a healthy SEO profile.
10. Backlink Preservation
Identify websites linking to the old URLs and reach out to them. Request updates to ensure that backlinks redirect to the new site, preserving link authority. Backlinks are crucial for SEO, and their preservation contributes to maintaining and enhancing search engine rankings.
11. Engage with Search Console and Bing Webmaster Tools
Leverage tools like Google Search Console and other webmaster tools to monitor how search engines perceive the migration. Address any issues highlighted in these platforms promptly. Regular engagement with these tools provides valuable insights into ongoing SEO performance.
12. User Communication
Transparent communication with users is essential. Inform them of the migration through on-site announcements and social media. Managing user expectations helps mitigate any temporary disruptions and builds trust. Users who are aware of the changes are more likely to adapt positively. Also, don’t forget to update your Google My Business, Bing Places and any directory listings you have online.
13. Post-Migration SEO Analysis
Conduct a detailed analysis of SEO performance post-migration. Evaluate the impact on search engine rankings, organic traffic, and user engagement. Carry out SEO forecasting to identify areas for further optimization and fine-tune strategies based on the analysis.
14. Ongoing SEO Maintenance
SEO is not a one-time effort but an ongoing process. Continue to monitor and adapt SEO strategies as the digital landscape evolves. Regularly update content, address technical issues promptly, and stay informed about industry trends and algorithm changes.
Wrapping Up
Each step in site SEO migrations, from strategic URL mapping to ongoing SEO maintenance, shapes the success of your newly launched website. The symphony of technology and strategy ensures not only a seamless transition but also an optimized online presence. Remember, migrating a website is not an easy decision, and if you lack the expertise to DIY it, partner up with a web developer to ensure success.
Harnessing the Power of Pay-Per-Call in Local Marketing Campaigns
As the number of smartphone users rises and our phones become indispensable for connecting with people and businesses, pay-per-call marketing emerges as the most pertinent and cutting-edge digital marketing model to ensure your brand presence.
In our blog post, we’ll discuss the benefits of pay-per-call advertising and the importance of call tracking for running successful pay-per-call campaigns for local businesses.
What Is Pay-Per-Call?
Pay-per-call marketing is akin to striking gold in digital marketing. Simply put, businesses only pay for inbound calls from potential customers who are actively seeking their services after spotting an ad or listing. This approach differs from traditional marketing channels, where businesses usually fork out a lot for impressions, conversions, or placements.
Wonder how businesses track call leads? That’s when software for call tracking comes into play. Digital marketing is getting more and more challenging, so every small business and enterprise needs the right tracking and lead distribution solutions to get ahead of the competition and keep tabs on their marketing performance.
Pay-per-call capitalizes on our plugged-in world, where practically every grown-up owns a smartphone and leans toward calling service providers when in need. In fact, 59% of people prefer reaching out via phone, and 57% of customers prefer to talk with a real person.
Pay-per-call marketing is a surefire way to drive and convert more leads, especially in scenarios where customers crave human interaction over the phone. Take, for instance, emergencies demanding a swift contractor hire or the quest for thorough health insurance advice; in such cases, a direct phone conversation has more value than an email conversation or online form submission.
These junctures present a golden chance for service providers to flex their customer service muscles and ace the art of converting calls into prospective customers, as these callers are already poised for action—you just need to seal the deal.
What Are The Benefits of Pay-Per-Call Marketing?
Driving High-Quality Leads
In the realm of digital marketing, pay-per-call stands out due to its knack for driving top-notch leads. For example, when potential customers initiate a call to a certain business directly, it means that they have a high level of interest and engagement. Unlike in any other advertising campaign, pay-per-call leads have already shown intent by picking up the receiver, making them more inclined to convert into prospective customers.
High Conversion Rate
Furthermore, pay-per-call advertising boasts a superior lead conversion rate compared to other digital marketing avenues. By engaging directly with customers over the phone, businesses can address their specific needs and queries in real-time. This personal touch nurtures customer trust and credibility. In addition, it heightens the chances of conversion. Moreover, phone conversations provide businesses with a chance to pitch additional products or services and increase sales.
Targeting Local Customers Effectively
Another feather in the cap for pay-per-call advertising is its superiority in targeting local customers. Many businesses, particularly local service providers or brick-and-mortar shops, rely on attracting their target audience within a specific geographical area. Pay-per-call advertising helps businesses focus on customers based on their location. Consequently, incoming calls hail from individuals who are more likely to become local "fans." This localized approach turbocharges the efficiency of a local marketing campaign and aids businesses in building a robust presence in their locale. At the same time, you can convert local audiences into your customers if you generate a dynamic QR code that includes all relevant information about your business, like the phone number, to distribute near your local area. This way, interested people can learn more about the business and become paying customers.
Valuable Insights in Business Performance
Moreover, in digital marketing, pay-per-call advertising delivers a treasure trove of valuable information and data for businesses to dissect and refine their digital marketing strategies. Thanks to call tracking and recording technology, businesses can gain deeper insights into their customers’ preferences, pain points, consumer behavior, and shopping habits. Thus, armed with this call analytics, businesses can fine-tune customer acquisition strategies, enhance customer service, and optimize overall business performance marketing. By leveraging this intel, businesses can make data-driven decisions that bring more sales opportunities (a higher return on investment rate).
Industries Benefiting from Pay-Per-Call in Local Marketing
Pay-per-call marketing does wonders for businesses dealing in services where a personal touch matters. It’s particularly effective for those with longer sales cycles as the advertising model provides a means to track and attribute leads, unlike other marketing channels.
Moreover, PPCall offers a level of transparency, unlike many other marketing methods. It makes the pay-per-call model a magnet for companies seeking quality pay leads and ready to enhance their sales while delivering outstanding customer service.
In essence, to measure whether your company can reap the rewards of pay-per-call marketing, you’ll need to crunch the numbers: analyze your average cost per job, evaluate the average cost per lead from a pay-per-call network, and realistically assess your ability to convert calls into bookings or purchases.
Some Industries Making Strides With Pay-Per-Call
- Healthcare: dentistry, chiropractic, physical therapy, optometry, podiatry
- Home services: electricians, plumbers, HVAC, water damage restoration, mold removal, roofing, pest control, appliance repair
- Automotive: auto body shops, auto glass repair, towing services
- Legal & professional services: lawyers, accountants, credit repair, tax specialists, wealth managers
Drawbacks of Pay-Per-Call Lead Conversion
- Restricted audience targeting: Pay-per-call marketing may have a narrower reach compared to online advertising since it relies on potential customers initiating phone calls.
- Call quality and fraud: Advertisers might encounter difficulties related to call quality and fraudulent calls, which could hinder the effectiveness of the campaign.
- Increased lead expenses: Although pay-per-call marketing can provide value, the cost of digital leads may rise in comparison to other advertising methods, especially in situations with low call volumes.
How to Utilize Pay-Per-Call Marketing in Local Campaigns
Optimize Landing Pages
The initial step for businesses is to enhance their landing pages, transforming them into effective lead generators. This involves optimizing for mobile accessibility and user experience. If you utilize responsive designs tailored for seamless interaction, it will facilitate easy navigation on the brand’s landing page. In addition, it’ll provide access to essential information such as phone numbers or click-to-call options.
Implement Efficient Call Tracking
After a pay-per-call marketing campaign is devised and set out, it’s time to choose and employ call-tracking software. The latter captures data related to both incoming and outgoing phone calls (audio recordings, call locations, etc). Why is it important? Local businesses and marketers tap into call tracking to refine customer service, gain valuable business insights, and optimize various operational processes.
Wonder how it works? Call-tracking software operates seamlessly in the cloud. When a customer makes a call routed through a VoIP service or similar cloud-based platform, it generates a log of the interaction. Then, being digital in nature, call-tracking software enables businesses to link customer journeys to individual phone numbers. It reveals their primary digital marketing channels. Thus, callers can reach companies through customizable webpage buttons, smartphones, or traditional landlines.
Other Benefits Of Call-Tracking Software
- Increased sales and conversions
- Optimized advertising expenses
- Enhanced customer insights
- Refined customer service
Tap Into Pay-Per-Call Marketing To Drive Business Growth
In the end, the success of pay-per-call marketing relies on you as a business owner, the skills of your team, and the efficiency of your call center, if applicable.
By embracing this swiftly expanding marketing tactic, you can capitalize on the opportunity for steady, enduring revenue streams. Therefore, don’t overlook this hidden gem in marketing—embrace pay-per-call and witness a surge in your return on investment.
How To Monitor And Optimize Google Core Web Vitals
This article is sponsored by DebugBear
Google’s Core Web Vitals initiative has increased the attention website owners need to pay to user experience. You can now more easily see when users have poor experiences on your website, and poor UX also has a bigger impact on SEO.
That means you need to test your website to identify optimizations. Beyond that, monitoring ensures that you can stay ahead of your Core Web Vitals scores for the long term.
Let’s find out how to work with different types of Core Web Vitals data and how monitoring can help you gain a deeper insight into user experiences and help you optimize them.
What Are Core Web Vitals?
There are three web vitals metrics Google uses to measure different aspects of website performance:
- Largest Contentful Paint (LCP),
- Cumulative Layout Shift (CLS),
- Interaction to Next Paint (INP).
Largest Contentful Paint (LCP)
The Largest Contentful Paint metric is the closest thing to a traditional load time measurement. However, LCP doesn't track a purely technical page load milestone like the JavaScript Load Event. Instead, it focuses on what the user can see by measuring how soon after opening a page the largest content element on the page appears.
The faster the LCP happens, the better, and Google rates a passing LCP score below 2.5 seconds.
Cumulative Layout Shift (CLS)
Cumulative Layout Shift is a bit of an odd metric, as it doesn’t measure how fast something happens. Instead, it looks at how stable the page layout is once the page starts loading. Layout shifts mean that content moves around, disorienting the user and potentially causing accidental clicks on the wrong UI element.
The CLS score is calculated by looking at how far an element moved and how big the element is. Aim for a score below 0.1 to get a good rating from Google.
Interaction to Next Paint (INP)
Even websites that load quickly often frustrate users when interactions with the page feel sluggish. That’s why Interaction to Next Paint measures how long the page remains frozen after user interaction with no visual updates.
Page interactions should feel practically instant, so Google recommends an INP score below 200 milliseconds.
What Are The Different Types Of Core Web Vitals Data?
You’ll often see different page speed metrics reported by different tools and data sources, so it’s important to understand the differences. We’ve published a whole article just about that, but here’s the high-level breakdown along with the pros and cons of each one:
- Synthetic Tests: These tests are run on-demand in a controlled lab environment in a fixed location with a fixed network and device speed. They can produce very detailed reports and recommendations.
- Real User Monitoring (RUM): This data tells you how fast your website is for your actual visitors. That means you need to install an analytics script to collect it, and the reporting that's available is less detailed than for lab tests.
- CrUX Data: Google collects this data from Chrome users as part of the Chrome User Experience Report (CrUX) and uses it as a ranking signal. It's available for every website with enough traffic, but since it covers a 28-day rolling window, it takes a while for changes on your website to be reflected here. It also doesn't include any debug data to help you optimize your metrics.
Start By Running A One-Off Page Speed Test
Before signing up for a monitoring service, it's best to run a one-off lab test with a free tool like Google's PageSpeed Insights or the DebugBear Website Speed Test. Both of these tools report Google CrUX data that reflects whether real users are facing issues on your website.
Note: The lab data you get from some Lighthouse-based tools — like PageSpeed Insights — can be unreliable.
INP is best measured for real users, where you can see the elements that users interact with most often and where the problems lie. But a free tool like the INP Debugger can be a good starting point if you don’t have RUM set up yet.
How To Monitor Core Web Vitals Continuously With Scheduled Lab-Based Testing
Running tests continuously has a few advantages over ad-hoc tests. Most importantly, continuous testing triggers alerts whenever a new issue appears on your website, allowing you to start fixing them right away. You’ll also have access to historical data, allowing you to see exactly when a regression occurred and letting you compare test results before and after to see what changed.
Scheduled lab tests are easy to set up using a website monitoring tool like DebugBear. Enter a list of website URLs and pick a device type, test location, and test frequency to get things running:
As this process runs, it feeds data into the detailed dashboard with historical Core Web Vitals data. You can monitor a number of pages on your website or track the speed of your competition to make sure you stay ahead.
When a regression occurs, you can dive deep into the results using DebugBear's Compare mode. This mode lets you see before-and-after test results side-by-side, giving you context for identifying causes. You see exactly what changed. For example, in the following case, we can see that HTTP compression stopped working for a file, leading to an increase in page weight and longer download times.
How To Monitor Real User Core Web Vitals
Synthetic tests are great for super-detailed reporting of your page load time. However, other aspects of user experience, like layout shifts and slow interactions, heavily depend on how real users use your website. So, it’s worth setting up real user monitoring with a tool like DebugBear.
To monitor real user web vitals, you’ll need to install an analytics snippet that collects this data on your website. Once that’s done, you’ll be able to see data for all three Core Web Vitals metrics across your entire website.
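For a sense of what such a snippet does under the hood, here is a minimal sketch using Google's open-source web-vitals library — an assumption for illustration, as DebugBear's actual analytics snippet may work differently:

// Log each Core Web Vital once its value is known
import { onLCP, onCLS, onINP } from "web-vitals";

onLCP((metric) => console.log("LCP:", metric.value));
onCLS((metric) => console.log("CLS:", metric.value));
onINP((metric) => console.log("INP:", metric.value));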
To optimize your scores, you can go into the dashboard for each individual metric, select a specific page you’re interested in, and then dive deeper into the data.
For example, you can see whether a slow LCP score is caused by a slow server response, render blocking resources, or by the LCP content element itself.
You’ll also find that the LCP element varies between visitors. Lab test results are always the same, as they rely on a single fixed screen size. However, in the real world, visitors use a wide range of devices and will see different content when they open your website.
INP is tricky to debug without real user data. Yet an analytics tool like DebugBear can tell you exactly what page elements users are interacting with most often and which of these interactions are slow to respond.
Thanks to the new Long Animation Frames API, we can also see specific scripts that contribute to slow interactions. We can then decide to optimize these scripts, remove them from the page, or run them in a way that does not block interactions for as long.
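As a rough illustration of how that attribution is exposed, the Long Animation Frames API can be observed directly in browsers that support it (Chromium-based at the time of writing). This standalone sketch is mine, not DebugBear's implementation:

// Log the scripts that ran during animation frames longer than 50ms
new PerformanceObserver((list) => {
  for (const frame of list.getEntries()) {
    for (const script of frame.scripts) {
      console.log(script.sourceURL, script.duration);
    }
  }
}).observe({ type: "long-animation-frame", buffered: true });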
Conclusion
Continuously monitoring Core Web Vitals lets you see how website changes impact user experience and ensures you get alerted when something goes wrong. While it’s possible to measure Core Web Vitals using a wide range of tools, those tools are limited by the type of data they use to evaluate performance, not to mention they only provide a single snapshot of performance at a specific point in time.
A tool like DebugBear gives you access to several different types of data that you can use to troubleshoot performance and optimize your website, complete with RUM capabilities that offer a historical record of performance for identifying issues where and when they occur. Sign up for a free DebugBear trial here.
20 Best New Websites, April 2024
Welcome to our sites of the month for April. With some websites, the details make all the difference, while in others, it is the overall tone or aesthetic that lifts the standard. In this collection, we have instances of both.
Sliding 3D Image Frames In CSS
In a previous article, we played with CSS masks to create cool hover effects where the main challenge was to rely only on the <img> tag as our markup. In this article, we pick up where we left off by "revealing" the image from behind a sliding door sort of thing — like opening up a box and finding a photograph in it.
This is because the padding has a transition that goes from s - 2*b to 0. Meanwhile, the background transitions from 100% (equivalent to --s) to 0. There's a difference equal to 2*b. The background covers the entire area, while the padding covers less of it. We need to account for this.
Ideally, the padding transition would take less time to complete and have a small delay at the beginning to sync things up, but finding the correct timing won’t be an easy task. Instead, let’s increase the padding transition’s range to make it equal to the background.
img {
--h: calc(var(--s) - var(--b));
padding-top: min(var(--h), var(--s) - 2*var(--b));
transition: --h 1s linear;
}
img:hover {
--h: calc(-1 * var(--b));
}
The new variable, --h, transitions from s - b to -b on hover, so we have the needed range since the difference is equal to --s, making it equal to the background and clip-path transitions.
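One detail worth noting: transitioning a raw custom property like --h only works once the browser knows the property's type, which means registering it. The demos take care of this; a minimal registration would look something like the following, where the syntax and initial value are my assumptions based on how --h is used:

@property --h {
  syntax: "<length>";
  initial-value: 0px;
  inherits: false;
}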
The trick is the min() function. When --h transitions from s - b to s - 2*b, the padding is equal to s - 2*b. No padding changes during that brief transition. Then, when --h reaches 0 and transitions from 0 to -b, the padding remains equal to 0 since, by default, it cannot be a negative value.
It would be more intuitive to use clamp() instead:
padding-top: clamp(0px, var(--h), var(--s) - 2*var(--b));
That said, we don't need to specify the lower parameter since padding cannot be negative and will, by default, be clamped to 0 if you give it a negative value.
We are getting much closer to the final result!
First, we increase the border’s thickness on the left and bottom sides of the image:
img {
--b: 10px; /* the image border */
--d: 30px; /* the depth */
border: solid #0000;
border-width: var(--b) var(--b) calc(var(--b) + var(--d)) calc(var(--b) + var(--d));
}
Second, we add a conic-gradient() on the background to create darker colors around the box:
background:
conic-gradient(at left var(--d) bottom var(--d),
#0000 25%,#0008 0 62.5%,#0004 0)
var(--c);
Notice the semi-transparent black color values (e.g., #0008 and #0004). The slight bit of transparency blends with the colors behind it to create the illusion of a dark variation of the main color since the gradient is placed above the background color.
And lastly, we apply a clip-path to cut out the corners that establish the 3D box.
clip-path: polygon(var(--d) 0, 100% 0, 100% calc(100% - var(--d)), calc(100% - var(--d)) 100%, 0 100%, 0 var(--d));
See the Pen The image within a 3D box by Temani Afif.
Now that we see and understand how the 3D effect is built, let's put back the things we removed earlier, starting with the padding:
See the Pen Putting back the padding animation by Temani Afif.
It works fine. But note how we've introduced the depth (--d) to the formula. That's because the bottom border is no longer equal to b but b + d.
--h: calc(var(--s) - var(--b) - var(--d));
padding-top: min(var(--h), var(--s) - 2*var(--b) - var(--d));
Let’s do the same thing with the linear gradient. We need to decrease its size so it covers the same area as it did before we introduced the depth so that it doesn’t overlap with the conic gradient:
See the Pen Putting back the gradient animation by Temani Afif.
We are getting closer! The last piece we need to add back in from earlier is the clip-path transition that is combined with the box-shadow. We cannot reuse the same code we used before since we changed the clip-path value to create the 3D box shape. But we can still transition it to get the sliding result we want.
The idea is to have two points at the top that move up and down to reveal and hide the box-shadow while the other points remain fixed. Here is a small video to illustrate the movement of the points.
See that? We have five fixed points. The two at the top move to increase the area of the polygon and reveal the box shadow.
img {
clip-path: polygon(
var(--d) 0, /* --> var(--d) calc(-1*(var(--s) - var(--d))) */
100% 0, /* --> 100% calc(-1*(var(--s) - var(--d))) */
/* the fixed points */
100% calc(100% - var(--d)), /* 1 */
calc(100% - var(--d)) 100%, /* 2 */
0 100%, /* 3 */
0 var(--d), /* 4 */
var(--d) 0); /* 5 */
}
And we’re done! We’re left with a nice 3D frame around the image element with a cover that slides up and down on hover. And we did it with zero extra markup or reaching for pseudo-elements!
See the Pen 3D image with reveal effect by Temani Afif.
And here is the first demo I shared at the start of this article, showing the two sliding variations.
See the Pen Image gift box (hover to reveal) by Temani Afif.
This last demo is an optimized version of what we did together. I have written most of the formulas using the variable --h so that I only update one value on hover. It also includes another variation. Can you reverse-engineer it and see how its code differs from the one we did together?
One More 3D Example
Want another fancy effect that uses 3D effects and sliding overlays? Here’s one I put together using a different 3D perspective where the overlay splits open rather than sliding from one side to the other.
See the Pen Image gift box II (hover to reveal) by Temani Afif.
Your homework is to dissect the code. It may look complex, but if you trace the steps we completed for the original demo, I think you'll find that it's not a terribly different approach. The sliding effect still combines the padding, the object-* properties, and clip-path, but with different values to produce this new effect.
Conclusion
I hope you enjoyed this little 3D image experiment and the fancy effect we applied to it. I know that adding an extra element (i.e., a parent <div>) would have made the code easier to manage, but that would defeat the single-element constraint we set for ourselves.
Limiting the HTML to only a single element allows us to push the limits of CSS to discover new techniques that can save us time and bytes, especially in those situations where you might not have direct access to modify HTML, like when you're working in a CMS template. Don't look at this as an over-complicated exercise. It's an exercise that challenges us to leverage the power and flexibility of CSS.
Connecting With Users: Applying Principles Of Communication To UX Research
Communication is in everything we do. We communicate with users through our research, our design, and, ultimately, the products and services we offer. UX practitioners and those working on digital product teams benefit from understanding principles of communication and their application to our craft. Treating our UX processes as a mode of communication between users and the digital environment can help unveil in-depth, actionable insights.
In this article, I’ll focus on UX research. Communication is a core component of UX research, as it serves to bridge the gap between research insights, design strategy, and business outcomes. UX researchers, designers, and those working with UX researchers can apply key aspects of communication theory to help gather valuable insights, enhance user experiences, and create more successful products.
Fundamentals of Communication Theory
Communications as an academic field encompasses various models and principles that highlight the dynamics of communication between individuals and groups. Communication theory examines the transfer of information from one person or group to another. It explores how messages are transmitted, encoded, and decoded, acknowledges the potential for interference (or ‘noise’), and accounts for feedback mechanisms in enhancing the communication process.
In this article, I will focus on the Transactional Model of Communication. There are many other models and theories in the academic literature on communication. I have included references at the end of the article for those interested in learning more.
The Transactional Model of Communication (Figure 1) is a two-way process that emphasizes the simultaneous sending and receiving of messages and feedback. Importantly, it recognizes that communication is shaped by context and is an ongoing, evolving process. I’ll use this model and understanding when applying principles from the model to UX research. You’ll find that much of what is covered in the Transactional Model would also fall under general best practices for UX research, suggesting even if we aren’t communications experts, much of what we should be doing is supported by research in this field.
Understanding the Transactional Model
Let’s take a deeper dive into the six key factors and their applications within the realm of UX research:
- Sender: In UX research, the sender is typically the researcher who conducts interviews, facilitates usability tests, or designs surveys. For example, if you’re administering a user interview, you are the sender who initiates the communication process by asking questions.
- Receiver: The receiver is the individual who decodes and interprets the messages sent by the sender. In our context, this could be the user you interview or the person taking a survey you have created. They receive and process your questions, providing responses based on their understanding and experiences.
- Message: This is the content being communicated from the sender to the receiver. In UX research, the message can take various forms, like a set of survey questions, interview prompts, or tasks in a usability test.
- Channel: This is the medium through which the communication flows. For instance, face-to-face interviews, phone interviews, email surveys administered online, and usability tests conducted via screen sharing are all different communication channels. You might use multiple channels simultaneously, for example, communicating over voice while also using a screen share to show design concepts.
- Noise: Any factor that may interfere with the communication is regarded as ‘noise.’ In UX research, this could be complex jargon that confuses respondents in a survey, technical issues during a remote usability test, or environmental distractions during an in-person interview.
- Feedback: The output the receiver produces in response to the message. For example, the responses given by a user during an interview, the data collected from a completed survey, or the physical reactions of a usability testing participant while completing a task are all types of feedback.
Applying the Transactional Model of Communication to Preparing for UX Research
We can become complacent or feel rushed when creating our research protocols. I think this is natural given the pace of many workplaces and our need to deliver results quickly. You can apply the lens of the Transactional Model of Communication to your research preparation without adding much time. Applying the Transactional Model of Communication to your preparation should:
- Improve clarity: The model provides a clear representation of communication, empowering the researcher to plan and conduct studies more effectively.
- Minimize misunderstanding: By highlighting potential noise sources, user confusion or misunderstandings can be better anticipated and mitigated.
- Enhance research participant engagement: With your attentive eye on feedback, participants are likely to feel valued, thus increasing active involvement and quality of input.
You can address the specific elements of the Transactional Model through the following steps while preparing for research:
Defining the Sender and Receiver
In UX research, the sender can often be the UX researcher conducting the study, while the receiver is usually the research participant. Understanding this dynamic can help researchers craft questions or tasks more empathetically and efficiently. You should try to collect some information on your participants in advance to prepare yourself for building rapport.
For example, if you are conducting contextual inquiry with the field technicians of an HVAC company, you’ll want to dress appropriately to reflect your understanding of the context in which your participants (receivers) will be conducting their work. Showing up dressed in formal attire might be off-putting and create a negative dynamic between sender and receiver.
Message Creation
The message in UX research typically is the questions asked or tasks assigned during the study. Careful consideration of tenor, terminology, and clarity can aid data accuracy and participant engagement. Whether you are interviewing or creating a survey, you need to double-check that your audience will understand your questions and provide meaningful answers. You can pilot-test your protocol or questionnaire with a few representative individuals to identify areas that might cause confusion.
Using the HVAC example again, you might find that field technicians use certain terminology differently than you expect. Asking them about the “tools” they use to complete their tasks might yield answers that don’t reflect the digital tools you’d find on a computer or smartphone, but physical tools like a pipe and a wrench.
Choosing the Right Channel
The channel selection depends on the method of research. For instance, face-to-face methods might use physical verbal communication, while remote methods might rely on emails, video calls, or instant messaging. The choice of the medium should consider factors like tech accessibility, ease of communication, reliability, and participant familiarity with the channel. For example, you introduce an additional challenge (noise) if you ask someone who has never used an iPhone to test an app on an iPhone.
Minimizing Noise
Noise in UX research comes in many forms, from unclear questions inducing participant confusion to technical issues in remote interviews that cause interruptions. The key is to foresee potential issues and have preemptive solutions ready.
Facilitating Feedback
You should be prepared for how you might collect and act on participant feedback during the research. Encouraging regular feedback from the user during UX research ensures their understanding and that they feel heard. This could range from asking them to ‘think aloud’ as they perform tasks to encouraging them to email queries or concerns after the session. You should document any noise that might impact your findings and account for that in your analysis and reporting.
Track Your Alignment to the Framework
You can track what you do to align your processes with the Transactional Model prior to and during research using a spreadsheet. I’ll provide an example of a spreadsheet I’ve used in the later case study section of this article. You should create your spreadsheet during the process of preparing for research, as some of what you do to prepare should align with the factors of the model.
You can use these tips for preparation regardless of the specific research method you are undertaking. Let’s now look closer at a few common methods and get specific on how you can align your actions with the Transactional Model.
Applying the Transactional Model to Common UX Research Methods
UX research relies on interaction with users. We can easily incorporate aspects of the Transactional Model of Communication into our most common methods. Utilizing the Transactional Model in conducting interviews, surveys, and usability testing can help provide structure to your process and increase the quality of insights gathered.
Interviews
Interviews are a common method used in qualitative UX research. They provide the perfect method for applying principles from the Transactional Model. In line with the Transactional Model, the researcher (sender) sends questions (messages) in-person or over the phone/computer medium (channel) to the participant (receiver), who provides answers (feedback) while contending with potential distraction or misunderstanding (noise). Reflecting on communication as transactional can help remind us we need to respect the dynamic between ourselves and the person we are interviewing. Rather than approaching an interview as a unidirectional interrogation, researchers need to view it as a conversation.
Applying the Transactional Model to conducting interviews means we should account for a number of factors to allow for high-quality communication. Note how the following overlap with what we typically call best practices.
Asking Open-ended Questions
To truly harness a two-way flow of communication, open-ended questions, rather than close-ended ones, are crucial. For instance, rather than asking, “Do you use our mobile application?” ask, “Can you describe your use of our mobile app?”. This encourages the participant to share more expansive and descriptive insights, furthering the dialogue.
Actively Listening
As the success of an interview relies on the participant’s responses, active listening is a crucial skill for UX researchers. The researcher should encourage participants to express their thoughts and feelings freely. Reflective listening techniques, such as paraphrasing or summarizing what the participant has shared, can reinforce to the interviewee that their contributions are being acknowledged and valued. It also provides an opportunity to clarify potential noise or misunderstandings that may arise.
Being Responsive
Building on the simultaneous send-receive nature of the Transactional Model, researchers must remain responsive during interviews. Providing non-verbal cues (like nodding) and verbal affirmations (“I see,” “Interesting”) lets participants know their message is being received and understood, making them feel comfortable and more willing to share.
Minimizing Noise
We should always attempt to account for noise in advance, as well as during our interview sessions. Noise, in the form of misinterpretations or distractions, can disrupt effective communication. Researchers can proactively reduce noise by conducting a dry run in advance of the scheduled interviews. This helps you become more fluent at going through the interview and also helps identify areas that might need improvement or be misunderstood by participants. You also reduce noise by creating a conducive interview environment, minimizing potential distractions, and asking clarifying questions during the interview whenever necessary.
For example, if a participant uses a term the researcher doesn’t understand, the researcher should politely ask for clarification rather than guessing its meaning and potentially misinterpreting the data.
Additional forms of noise can include participant confusion or distraction. You should let participants know to ask if they are unclear on anything you say or do. It’s a good idea to always ask participants to put their smartphones on mute. You should only provide information critical to the process when introducing the interview or tasks. For example, you don’t need to give a full background of the history of the product you are researching if that isn’t required for the participant to complete the interview. However, you should let them know the purpose of the research, gain their consent to participate, and inform them of how long you expect the session to last.
Strategizing the Flow
Researchers should build strategic thinking into their interviews to support the Transaction Model. Starting the interview with less intrusive questions can help establish rapport and make the participant more comfortable, while more challenging or sensitive questions can be left for later when the interviewee feels more at ease.
A well-planned interview encourages a fluid dialogue and exchange of ideas. This is another area where conducting a dry run can help to ensure high-quality research. You and your dry-run participants should recognize areas where questions aren’t flowing in the best order or don’t make sense in the context of the interview, allowing you to correct the flow in advance.
While much of what the Transactional Model informs for interviews already aligns with common best practices, the model suggests we need to give deeper consideration to factors we sometimes overlook, either when we become overly comfortable with interviewing or when we are unaware of the implications of neglecting them: context considerations, power dynamics, and post-interview actions.
Context Considerations
You need to account for both the context of the participant, e.g., their background, demographic, and psychographic information, as well as the context of the interview itself. You should make subtle yet meaningful modifications depending on the channel over which you are conducting the interview.
For example, you should utilize video and be aware of your facial and physical responses if you are conducting an interview using an online platform, whereas if it’s a phone interview, you will need to rely on verbal affirmations that you are listening and following along, while also being mindful not to interrupt the participant while they are speaking.
Power Dynamics
You need to be aware of how your role, background, and identity might influence the power dynamics of the interview. You can attempt to address power dynamics by sharing research goals transparently and addressing any potential concerns about bias a participant shares.
We are responsible for creating a safe and inclusive space for our interviews. You do this through the use of inclusive language, listening actively without judgment, and being flexible to accommodate different ways of knowing and expressing experiences. You should also empower participants as collaborators whenever possible. You can offer opportunities for participants to share feedback on the interview process and analysis. Doing this validates participants’ experiences and knowledge and ensures their voices are heard and valued.
Post-Interview Actions
You have a number of options for actions that can close the loop of your interviews with participants in line with the “feedback” the model suggests is a critical part of communication. Some tactics you can consider following your interview include:
- Debriefing: Dedicate a few minutes at the end to discuss the participant’s overall experience, impressions, and suggestions for future interviews.
- Short surveys: Send a brief survey via email or an online platform to gather feedback on the interview experience.
- Follow-up calls: Consider follow-up calls with specific participants to delve deeper into their feedback and gain additional insight if you find that is warranted.
- Thank-you emails: Include a “feedback” section in your thank-you email, encouraging participants to share their thoughts on the interview.
You also need to do something with the feedback you receive. Researchers and product teams should make time for reflexivity and critical self-awareness.
As practitioners in a human-focused field, we are expected to continuously examine how our assumptions and biases might influence our interviews and findings.
We shouldn’t practice our craft in a silo. Instead, seeking feedback from colleagues and mentors to maintain ethical research practices should be a standard practice for interviews and all UX research methods.
By considering interviews as an ongoing transaction and exchange of ideas rather than a unidirectional Q&A, UX researchers can create a more communicative and engaging environment. You can see how models of communication have informed best practices for interviews. With a better knowledge of the Transactional Model, you can go deeper and check your work against the framework of the model.
Surveys
The Transactional Model of Communication reminds us to acknowledge the feedback loop even in seemingly one-way communication methods like surveys. Instead of merely sending out questions and collecting responses, we need to provide space for respondents to voice their thoughts and opinions freely. When we make participants feel heard, engagement with our surveys should increase, dropouts should decrease, and response quality should improve.
Like other methods, surveys involve each factor of the model: the researcher(s) who create the instructions and questionnaire (sender); the survey itself, including any instructions, disclaimers, and consent forms (the message); how the survey is administered, e.g., online, in person, or pen and paper (the channel); the participant (receiver); potential misunderstandings or distractions (noise); and responses (feedback).
Designing the Survey
Understanding the Transactional Model will help researchers design more effective surveys. Researchers are encouraged to be aware of both their role as the sender and to anticipate the participant’s perspective as the receiver. Begin surveys with clear instructions, explaining why you’re conducting the survey and how long it’s estimated to take. This establishes a more communicative relationship with respondents right from the start. Test these instructions with multiple people prior to launching the survey.
Crafting Questions
The questions should be crafted to encourage feedback and not just a simple yes or no. You should consider asking scaled questions or items that have been statistically validated to measure certain attributes of users.
For example, if you were looking deeper at a mobile banking application, rather than asking, “Did you find our product easy to use?” you would want to break that out into multiple aspects of the experience and ask about each with a separate question such as “On a scale of 1–7, with 1 being extremely difficult and 7 being extremely easy, how would you rate your experience transferring money from one account to another?”.
Minimizing Noise
Reducing ‘noise,’ or misunderstandings, is crucial for increasing the reliability of responses. Your first line of defense in reducing noise is to make sure you are sampling from the appropriate population you want to conduct the research with. You need to use a screener that will filter out non-viable participants prior to including them in the survey. You do this when you correctly identify the characteristics of the population you want to sample from and then exclude those falling outside of those parameters.
Additionally, you should focus on prioritizing finding participants through random sampling from the population of potential participants versus using a convenience sample, as this helps to ensure you are collecting reliable data.
When looking at the survey itself, there are a number of recommendations to reduce noise. You should ensure questions are easily understandable, avoid technical jargon, and sequence questions logically. A question bank should be reviewed and tested before being finalized for distribution.
For example, question statements like “Do you use and like this feature?” can confuse respondents because they are actually two separate questions: do you use the feature, and do you like the feature? You should separate out questions like this into more than one question.
You should use visual aids that are relevant whenever possible to enhance the clarity of the questions. For example, if you are asking questions about an application’s “Dashboard” screen, you might want to provide a screenshot of that page so survey takers have a clear understanding of what you are referencing. You should also avoid the use of jargon if you are surveying a non-technical population and explain any terminology that might be unclear to participants taking the survey.
The Transactional Model suggests active participation in communication is necessary for effective communication. Participants can become distracted or take a survey without intending to provide thoughtful answers. You should consider adding a question somewhere in the middle of the survey to check that participants are paying attention and responding appropriately, particularly for longer surveys.
This is often done using a simple math problem such as “What is the answer to 1+1?” Anyone not responding with the answer of “2” might not be paying adequate attention to the responses they are providing, and you’d want to look closer at their responses, eliminating them from your analysis if deemed appropriate.
Encouraging Feedback
While descriptive feedback questions are one way of promoting dialogue, you can also include areas where respondents can express any additional thoughts or questions they have outside of the set question list. This is especially useful in online surveys, where researchers can’t immediately address participants’ questions or clarify doubts.
You should be mindful that too many open-ended questions can cause fatigue, so you should limit the number of open-ended questions. I recommend two to three open-ended questions depending on the length of your overall survey.
Post-Survey Actions
After collecting and analyzing the data, you can send follow-up communications to the respondents. Let them know the changes made based on their feedback, thank them for their participation, or even share a summary of the survey results. This fulfills the Transactional Model’s feedback loop and communicates to the respondent that their input was received, valued, and acted upon.
You can also meet this suggestion by providing an email address for participants to follow up if they desire more information post-survey. You are allowing them to complete the loop themselves if they desire.
Applying the Transactional Model to surveys can breathe new life into the way surveys are conducted in UX research. It encourages active participation from respondents, making the process more interactive and engaging while enhancing the quality of the data collected. You can experiment with applying some or all of the steps listed above. You will likely find you are already doing much of what’s mentioned; however, being explicit can allow you to make sure you are thoughtfully applying these principles from the field of communication.
Usability Testing
Usability testing is another clear example of a research method highlighting components of the Transactional Model. In the context of usability testing, the Transactional Model of Communication’s application opens a pathway for a richer understanding of the user experience by positioning both the user and the researcher as sender and receiver of communication simultaneously.
Here are some ways a researcher can use elements of the Transactional Model during usability testing:
Task Assignment as Message Sending
When a researcher assigns tasks to a user during usability testing, they act as the sender in the communication process. To ensure the user accurately receives the message, these tasks need to be clear and well-articulated. For example, a task like “Register a new account on the app” sends a clear message to the user about what they need to do.
You don’t need to tell them how to do the task, as usually, that’s what we are trying to determine from our testing, but if you are not clear on what you want them to do, your message will not resonate in the way it is intended. This is another area where a dry run in advance of the testing is an optimal solution for making sure tasks are worded clearly.
Observing and Listening as Message Receiving
As the participant interacts with the application, concept, or design, the researcher, as the receiver, picks up on verbal and nonverbal cues. For instance, if a user is clicking around aimlessly or murmuring in confusion, the researcher can take these as feedback about certain elements of the design that are unclear or hard to use. You can also ask users to explain the cues you notice, which in turn provides them with feedback on how their communication is being received.
Real-time Interaction
The transactional nature of the model recognizes the importance of real-time interaction. For example, if during testing, the user is unsure of what a task means or how to proceed, the researcher can provide clarification without offering solutions or influencing the user’s action. This interaction follows the communication flow prescribed by the transactional model. We lose the ability to do this during unmoderated testing; however, many design elements are forms of communication that can serve to direct users or clarify the purpose of an experience (to be covered more in article two).
Noise
In usability testing, noise could mean unclear tasks, users’ preconceived notions, or even issues like slow software response. Acknowledging noise can help researchers plan and conduct tests better. Again, carrying out a pilot test can help identify any noise in the main test scenarios, allowing for necessary tweaks before actual testing. Other forms of noise can be less obvious but equally intrusive. For example, if you are conducting a test using a MacBook laptop and your participant is used to a PC, there is noise you need to account for, given their unfamiliarity with the laptop you’ve provided.
The fidelity of the design artifact being tested might introduce another form of noise. I’ve always advocated testing at any level of fidelity, but you should note that if you are using “Lorem Ipsum” or black and white designs, this potentially adds noise.
One of my favorite examples of this was a time when I was testing a financial services application, and the designers had put different balances on the screen; however, the total for all balances had not been added up to the correct total. Virtually every person tested noted this discrepancy, although it had nothing to do with the tasks at hand. I had to acknowledge we’d introduced noise to the testing. As at least one participant noted, they wouldn’t trust a tool that wasn’t able to total balances correctly.
Encouraging Feedback
Under the Transactional Model’s guidance, feedback isn’t just final thoughts after testing; it should be facilitated at each step of the process. Encouraging ‘think aloud’ protocols, where the user verbalizes their thoughts, reactions, and feelings during testing, ensures a constant flow of useful feedback.
You are receiving feedback throughout the process of usability testing, and the model provides guidance on how you should use that feedback to create a shared meaning with the participants. You will ultimately summarize this meaning in your report. You’ll later end up uncovering if this shared meaning was correctly interpreted when you design or redesign the product based on your findings.
We’ve now covered how to apply the Transactional Model of Communication to three common UX Research methods. All research with humans involves communication. You can break down other UX methods using the Model’s factors to make sure you engage in high-quality research.
Analyzing and Reporting UX Research Data Through the Lens of the Transactional Model
The Transactional Model of Communication doesn’t only apply to the data collection phase (interviews, surveys, or usability testing) of UX research. Its principles can provide valuable insights during the data analysis process.
The Transactional Model instructs us to view any communication as an interactive, multi-layered dialogue — a concept that is particularly useful when unpacking user responses. Consider the ‘message’ components: In the context of data analysis, the messages are the users’ responses. As researchers, thinking critically about how respondents may have internally processed the survey questions, interview discussion, or usability tasks can yield richer insights into user motivations.
Understanding Context
Just as the Transactional Model emphasizes the simultaneous interchange of communication, UX researchers should consider the user’s context while interpreting data. Decoding the meaning behind a user’s words or actions involves understanding their background, experiences, and the situation when they provide responses.
Deciphering Noise
In the Transactional Model, noise presents a potential barrier to effective communication. Similarly, researchers must be aware of snowballing themes or frequently highlighted issues during analysis. Noise, in this context, could involve patterns of confusion, misunderstandings, or problems consistently highlighted by users. You need to account for this noise, e.g., the example I provided earlier where participants repeatedly referred to the incorrect math on static wireframes.
Considering Sender-Receiver Dynamics
Remember that as a UX researcher, your interpretation of user responses will be influenced by your understandings, biases, or preconceptions, just as the responses were influenced by the user’s perceptions. By acknowledging this, researchers can strive to neutralize any subjective influence and ensure the analysis remains centered on the user’s perspective. You can ask other researchers to double-check your work to attempt to account for bias.
For example, if you come up with a clear theme that users need better guidance in the application you are testing, another researcher from outside of the project should come to a similar conclusion if they view the data. If not, you should have a conversation with them to determine what different perspectives you are each bringing to the data analysis.
Reporting Results
Understanding your audience is crucial for delivering a persuasive UX research presentation. Tailoring your communication to resonate with the specific concerns and interests of your stakeholders can significantly enhance the impact of your findings. Here are some more details:
- Identify stakeholder groups: Identify the different groups of stakeholders who will be present in your audience. This could include designers, developers, product managers, and executives.
- Prioritize information: Prioritize the information based on what matters most to each stakeholder group. For example, designers might be more interested in usability issues, while executives may prioritize business impact.
- Adapt communication style: Adjust your communication style to align with the communication preferences of each group. Provide technical details for developers and emphasize user experience benefits for executives.
Acknowledging Feedback
Respecting the Transactional Model’s feedback loop, remember to revisit user insights after implementing design changes. This ensures you stay user-focused, continuously validating or adjusting your interpretations based on users’ evolving feedback. You can do this in a number of ways. You can reconnect with users to show them updated designs and ask questions to see whether the issues you attempted to resolve were resolved.
Another way to address this without having to reconnect with the users is to create a spreadsheet or other document to track all the recommendations that were made and reconcile the changes with what is then updated in the design. You should be able to map the changes users requested to updates or additions to the product roadmap for future updates. This acknowledges that users were heard and that an attempt to address their pain points will be documented.
Crucially, the Transactional Model teaches us that communication is rarely simple or one-dimensional. It encourages UX researchers to take a more nuanced, context-aware approach to data analysis, resulting in deeper user understanding and more accurate, user-validated results.
By maintaining an ongoing feedback loop with users and continually refining interpretations, researchers can ensure that their work remains grounded in real user experiences and needs.
Tracking Your Application of the Transactional Model to Your Practice
You might find it useful to track how you align your research planning and execution to the framework of the Transactional Model. I’ve created a spreadsheet to outline key factors of the model and used this for some of my work. Demonstrated below is an example derived from a study conducted for a banking client that included interviews and usability testing. I completed this spreadsheet during the process of planning and conducting interviews. Anonymized data from our study has been furnished to show an example of how you might populate a similar spreadsheet with your information.
You can customize the spreadsheet structure to fit your specific research topic and interview approach. By documenting your application of the transactional model, you can gain valuable insights into the dynamic nature of communication and improve your interview skills for future research.
| Stage | Column | Description | Example |
|---|---|---|---|
| Pre-Interview Planning | Topic/Question (aligned with research goals) | Identify the research question and design questions that encourage open-ended responses and co-construction of meaning. | Testing mobile banking app’s bill payment feature. How do you set up a new payee? How would you make a payment? What are your overall impressions? |
| | Participant Context | Note relevant demographic and personal information to tailor questions and avoid biased assumptions. | 35-year-old working professional, frequent user of the online banking and mobile application but unfamiliar with using the app for bill pay. |
| | Engagement Strategies | Outline planned strategies for active listening, open-ended questions, clarification prompts, and building rapport. | Open-ended follow-up questions (“Can you elaborate on XYZ?” or “Please explain more to me what you mean by XYZ.”), active listening cues, positive reinforcement (“Thank you for sharing those details”). |
| | Shared Understanding | List potential challenges to understanding participants’ perspectives and strategies for ensuring shared meaning. | Initially, the participant expressed some confusion about the financial jargon I used. I clarified and provided simpler [non-jargon] explanations, ensuring we were on the same page. |
| During Interview | Verbal Cues | Track the participant’s language choices, including metaphors, pauses, and emotional expressions. | Participant used a hesitant tone when describing negative experiences with the bill payment feature. When questioned, they stated it was “likely their fault” for not understanding the flow [it isn’t their fault]. |
| | Nonverbal Cues | Note the participant’s nonverbal communication, like body language, facial expressions, and eye contact. | Frowning and crossed arms when discussing specific pain points. |
| | Researcher Reflexivity | Record moments where your own biases or assumptions might influence the interview and potential mitigation strategies. | Recognized my own familiarity with the app might bias my interpretation of users’ understanding [e.g., going slower than I would have when entering information]. Asked clarifying questions to avoid imposing my assumptions. |
| | Power Dynamics | Identify instances where power differentials emerge and actions taken to address them. | Participant expressed trust in the research but admitted feeling hesitant to criticize the app directly. I emphasized anonymity and encouraged open feedback. |
| | Unplanned Questions | List unplanned questions prompted by the participant’s responses that deepen understanding. | What alternative [non-bank app] methods do you use for paying bills? (Prompted by participant’s frustration with app bill pay.) |
| Post-Interview Reflection | Meaning Co-construction | Analyze how both parties contributed to building shared meaning and insights. | Through dialogue, we collaboratively identified specific design flaws in the bill payment interface and explored additional pain points and areas that worked well. |
| | Openness and Flexibility | Evaluate how well you adapted to unexpected responses and maintained an open conversation. | Adapted questioning based on the participant’s emotional cues and adjusted language to minimize technical jargon when that issue was raised. |
| | Participant Feedback | Record any feedback received from participants regarding the interview process and areas for improvement. | “Thank you for the opportunity to be in the study. I’m glad my comments might help improve the app for others. I’d be happy to participate in future studies.” |
| | Ethical Considerations | Reflect on whether the interview aligned with principles of transparency, reciprocity, and acknowledging power dynamics. | Maintained anonymity throughout the interview and ensured informed consent was obtained. Data will be stored and secured as outlined in the research protocol. |
| | Key Themes/Quotes | Use this column to identify emerging themes or save quotes you might refer to later when creating the report. | Frustration with a confusing interface, lack of intuitive navigation, and desire for more customization options. |
| | Analysis Notes | Use as many lines as needed to add notes for consideration during analysis. | Add notes here. |
You can use the suggested columns from this table as you see fit, adding or subtracting as needed, particularly if you use a method other than interviews. I usually add the following additional columns for logistical purposes:
- Date of Interview,
- Participant ID,
- Interview Format (e.g., in person, remote, video, phone).
Conclusion
By incorporating aspects of communication theory into UX research, UX researchers and those who work with UX researchers can enhance the effectiveness of their communication strategies, gather more accurate insights, and create better user experiences. Communication theory provides a framework for understanding the dynamics of communication, and its application to UX research enables researchers to tailor their approaches to specific audiences, employ effective interviewing techniques, design surveys and questionnaires, establish seamless communication channels during usability testing, and interpret data more effectively.
As the field of UX research continues to evolve, integrating communication theory into research practices will become increasingly essential for bridging the gap between users and design teams, ultimately leading to more successful products that resonate with target audiences.
As a UX professional, it is important to continually explore and integrate new theories and methodologies to enhance your practice. By leveraging communication theory principles, you can better understand user needs, improve the user experience, and drive successful outcomes for digital products and services.
Integrating communication theory into UX research is an ongoing journey of learning and implementing best practices. Embracing this approach empowers researchers to effectively communicate their findings to stakeholders and foster collaborative decision-making, ultimately driving positive user experiences and successful design outcomes.
References and Further Reading
- Shannon, C. E., & Weaver, W., The Mathematical Theory of Communication (PDF)
- Grunig, J. E., & Huang, Y. H., “From organizational effectiveness to relationship indicators: Antecedents of relationships, public relations strategies, and relationship outcomes”
- Hovland, C. I., Janis, I. L., & Kelley, H. H. (1953), Communication and Persuasion: Psychological Studies of Opinion Change, Yale University Press
- Chaffee, S. H. (1986), “Communication research as an autonomous discipline,” Communication Yearbook, 10, 243–274
- Wood, J. (2015), Interpersonal Communication: Everyday Encounters (PDF)
- Littlejohn, S. W., & Foss, K. A. (2011), Theories of Human Communication
- McQuail, D. (2010), McQuail’s Mass Communication Theory (PDF)
- Stewart, J. (2012), Bridges Not Walls: A Book About Interpersonal Communication
Exciting New Tools for Designers, April 2024
Welcome to our April tools collection. There are no practical jokes here, just practical gadgets, services, and apps to make life that little bit easier and keep you working smarter.
Managing User Focus with :focus-visible
This is going to be the second post in a small series we are doing on form accessibility. If you missed the first post, check out Accessible Forms with Pseudo Classes. In this post, we are going to look at :focus-visible and how to use it on your websites!
Focus Touchpoint
Before we move forward with :focus-visible, let’s revisit how :focus works in your CSS. Focus is the visual indicator that an element is being interacted with via keyboard, mouse, trackpad, or assistive technology. Certain elements are naturally interactive, like links, buttons, and form elements. We want to make sure that our users know where they are and what they are interacting with.
Remember: don’t do this in your CSS!
:focus {
outline: 0;
}
/*** OR ***/
:focus {
outline: none;
}
When you remove focus, you remove it for EVERYONE! We want to make sure that we are preserving the focus.
If for any reason you do need to remove the focus, make sure there are also fallback :focus styles for your users. That fallback can match your branding colors, but make sure those colors are also accessible. If marketing, design, or branding doesn’t like the default focus ring styles, then it is time to start having conversations and collaborate with them on the best way of adding it back in.
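As an illustration, here is a minimal sketch of what a branded fallback could look like. The selector list and color value are placeholders for this example, not a recommendation; swap in your own brand color after checking it against WCAG non-text contrast requirements.
/* Hypothetical fallback focus styles; the color below is a
   placeholder for an accessible brand color. */
a:focus,
button:focus,
input:focus,
select:focus,
textarea:focus {
  outline: 3px solid #0b5fff; /* placeholder brand color */
  outline-offset: 2px; /* keeps the ring from hugging the element */
}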
What is focus-visible?
The pseudo-class :focus-visible is just like our default :focus pseudo-class. It gives the user an indicator that something is being focused on the page. The way you write :focus-visible is cut and dried:
:focus-visible {
/* ... */
}
When using :focus-visible with a specific element, the syntax looks something like this:
.your-element:focus-visible {
/*...*/
}
The great thing about using :focus-visible is that you can make your element stand out, bright and bold! No need to worry about it showing when the element is clicked or tapped. If you choose not to implement the class, the default will be the user-agent focus ring, which some find undesirable.
Backstory of focus-visible
Before we had :focus-visible, the user-agent styling would apply :focus to most elements on the page: buttons, links, etc. It would apply an outline or “focus ring” to the focusable element. This was deemed ugly; most didn’t like the default focus ring the browser provided. As a result of the focus ring being unfavorable to look at, most authors removed it… without a fallback. Remember, when you remove :focus, it decreases usability and makes the experience inaccessible for keyboard users.
In the current state of the web, the browser no longer visibly indicates focus on every element by default. Instead, it uses varying heuristics to determine when a focus ring would help the user and provides one accordingly. According to Khan Academy, a heuristic is “a technique that guides an algorithm to find good choices.”
What this means is that the browser can detect whether the user is interacting with the experience from a keyboard, mouse, or trackpad and, based on that input type, it adds or removes the focus ring. The example in this post highlights the input interaction.
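To make that heuristic concrete, here is a small sketch (the two elements targeted are just examples): with a rule like the one below, a text input will typically show the ring even when focused by a mouse click, while a button will usually show it only when reached with the keyboard.
/* One rule, two behaviors, thanks to the browser's heuristics:
   the input usually matches :focus-visible even on mouse click,
   while the button usually matches only for keyboard focus. */
input:focus-visible,
button:focus-visible {
  outline: 3px solid rebeccapurple;
  outline-offset: 2px;
}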
In the early days of :focus-visible, we were using a polyfill created by Alice Boxhall and Brian Kardell to handle the focus ring. Mozilla also came out with its own pseudo-class, :-moz-focusring, before the official specification. If you want to learn more about the early days of the focus ring, check out A11y Casts with Rob Dodson.
Focus Importance
There are plenty of reasons why focus is important in your application. For one, as I stated above, we as ambassadors of the web have to make sure we are providing the best, most accessible experience we can. We don’t want any of our users guessing where they are while they are navigating through the experience.
One example that always comes to mind is the Two Blind Brothers website. If you go to the website and click/tap the closed eye in the bottom-left corner (this works on mobile, too), you will see the eye open, and a simulation begins. Both of the brothers, Bradford and Bryan Manning, were diagnosed at a young age with Stargardt’s disease, a form of macular degeneration of the eye. Over time, both brothers will be completely blind. Visit the site and click the eye to see how they see.
If you were in their shoes and you had to navigate through a page, you would want to make sure you knew exactly where you were throughout the whole experience. A focus ring gives you that power.
Demo
The demo below shows how :focus-visible works when added to your CSS. The first part of the video shows the experience when navigating through with a mouse; the second shows navigating through with just my keyboard. I recorded myself as well to show that I did switch from using my mouse to my keyboard.

The browser is predicting what to do with the focus ring based on my input (keyboard/mouse) and then adding a focus ring to those elements. In this case, when I am navigating through this example with the keyboard, everything receives focus. When using the mouse, only the input gets focus and the buttons don’t. If you remove :focus-visible, the browser will apply the default focus ring.
The code below applies :focus-visible to the focusable elements.
:focus-visible {
outline-color: black;
font-size: 1.2em;
font-family: serif;
font-weight: bold;
}
If you want to specify the input or the button to receive :focus-visible, just prefix the pseudo-class with input or button, respectively.
button:focus-visible {
outline-color: black;
font-size: 1.2em;
font-family: serif;
font-weight: bold;
}
/*** OR ***/
input:focus-visible {
outline-color: black;
font-size: 1.2em;
font-family: serif;
font-weight: bold;
}
Support
If the browser does not support :focus-visible, you can have a fallback in place to handle the interaction. The code below is from the MDN Playground. You can use the @supports at-rule, or “feature query,” to check support. One thing to keep in mind: the rule should be placed at the top of the code or nested inside another group at-rule.
<button class="button with-fallback" type="button">Button with fallback</button>
<button class="button without-fallback" type="button">Button without fallback</button>
.button {
margin: 10px;
border: 2px solid darkgray;
border-radius: 4px;
}
.button:focus-visible {
/* Draw the focus when :focus-visible is supported */
outline: 3px solid deepskyblue;
outline-offset: 3px;
}
@supports not selector(:focus-visible) {
.button.with-fallback:focus {
/* Fallback for browsers without :focus-visible support */
outline: 3px solid deepskyblue;
outline-offset: 3px;
}
}
Further Accessibility Concerns
Accessibility concerns to keep in mind when building out your experience:
- Make sure the colors you choose for your focus indicator, if you customize it at all, are still accessible according to WCAG 2.2 Non-text Contrast (Level AA).
- Cognitive overload can cause a user distress. Make sure to keep focus styles on varying interactive elements consistent (one approach is sketched below).
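One low-effort way to keep those styles consistent is to define the focus treatment once and share it across every interactive element in a single rule. A minimal sketch follows; the custom property name --focus-ring is made up for this example, and the selector list is an assumption you would tailor to your own markup.
/* --focus-ring is a hypothetical custom property name;
   define the focus treatment once and reuse it everywhere. */
:root {
  --focus-ring: 3px solid deepskyblue;
}

:is(a, button, input, select, textarea, [tabindex]):focus-visible {
  outline: var(--focus-ring);
  outline-offset: 2px;
}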
Browser Support
This browser support data is from Caniuse, which has more detail. A number indicates that the browser supports the feature at that version and up.
Desktop
Chrome | Firefox | IE | Edge | Safari |
---|---|---|---|---|
86 | 4* | No | 86 | 15.4 |
Mobile / Tablet
Android Chrome | Android Firefox | Android | iOS Safari |
---|---|---|---|
123 | 124 | 123 | 15.4 |
Links
- https://daverupert.com/2024/01/focus-visible-love/
- https://css-tricks.com/almanac/selectors/f/focus-visible/
Managing User Focus with :focus-visible originally published on CSS-Tricks, which is part of the DigitalOcean family.