Archive

Archive for the ‘’ Category

How To Monitor And Optimize Google Core Web Vitals

April 16th, 2024

This article is sponsored by DebugBear.

Google’s Core Web Vitals initiative has increased the attention website owners need to pay to user experience. You can now more easily see when users have poor experiences on your website, and poor UX also has a bigger impact on SEO.

That means you need to test your website to identify optimizations. Beyond that, monitoring ensures that you stay on top of your Core Web Vitals scores for the long term.

Let’s find out how to work with different types of Core Web Vitals data and how monitoring can help you gain a deeper insight into user experiences and help you optimize them.

What Are Core Web Vitals?

There are three web vitals metrics Google uses to measure different aspects of website performance:

  • Largest Contentful Paint (LCP),
  • Cumulative Layout Shift (CLS),
  • Interaction to Next Paint (INP).

Largest Contentful Paint (LCP)

The Largest Contentful Paint metric is the closest thing to a traditional load time measurement. However, LCP doesn’t track a purely technical page load milestone like the JavaScript Load Event. Instead, it focuses on what the user can see by measuring how soon after opening a page the largest content element on the page appears.

The faster the LCP happens, the better, and Google considers an LCP below 2.5 seconds to be a passing score.

Cumulative Layout Shift (CLS)

Cumulative Layout Shift is a bit of an odd metric, as it doesn’t measure how fast something happens. Instead, it looks at how stable the page layout is once the page starts loading. Layout shifts mean that content moves around, disorienting the user and potentially causing accidental clicks on the wrong UI element.

The CLS score is calculated by looking at how far an element moved and how big the element is. Aim for a score below 0.1 to get a good rating from Google.

Interaction to Next Paint (INP)

Even websites that load quickly often frustrate users when interactions with the page feel sluggish. That’s why Interaction to Next Paint measures how long the page remains frozen after user interaction with no visual updates.

Page interactions should feel practically instant, so Google recommends an INP score below 200 milliseconds.
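If you want to see how these metrics can be captured in the browser, the open-source web-vitals JavaScript library (the same building block many RUM tools use) exposes one callback per metric. Here is a minimal sketch, assuming the web-vitals npm package, version 3 or later:

import { onLCP, onCLS, onINP, type Metric } from "web-vitals";

// Each callback fires once the metric value is known (or has changed),
// typically when the page is hidden or unloaded.
function report(metric: Metric) {
  // metric.name is "LCP", "CLS", or "INP".
  // metric.value is milliseconds for LCP and INP, and a unitless score for CLS.
  // metric.rating is "good", "needs-improvement", or "poor" per Google's thresholds.
  console.log(metric.name, metric.value, metric.rating);
}

onLCP(report);
onCLS(report);
onINP(report);

In practice, you would send these values somewhere instead of logging them; more on that in the real user monitoring section below.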

What Are The Different Types Of Core Web Vitals Data?

You’ll often see different page speed metrics reported by different tools and data sources, so it’s important to understand the differences. We’ve published a whole article just about that, but here’s the high-level breakdown along with the pros and cons of each one:

  • Synthetic Tests
    These tests are run on-demand in a controlled lab environment in a fixed location with a fixed network and device speed. They can produce very detailed reports and recommendations.
  • Real User Monitoring (RUM)
    This data tells you how fast your website is for your actual visitors. That means you need to install an analytics script to collect it, and the reporting that’s available is less detailed than for lab tests.
  • CrUX Data
    Google collects this data from real Chrome users as part of the Chrome User Experience Report (CrUX) and uses it as a ranking signal. It’s available for every website with enough traffic, but since it covers a 28-day rolling window, it takes a while for changes on your website to be reflected here. It also doesn’t include any debug data to help you optimize your metrics.

Start By Running A One-Off Page Speed Test

Before signing up for a monitoring service, it’s best to run a one-off lab test with a free tool like Google’s PageSpeed Insights or the DebugBear Website Speed Test. Both of these tools also report Google CrUX data, which reflects whether real users are experiencing issues on your website.

Note: The lab data you get from some Lighthouse-based tools — like PageSpeed Insights — can be unreliable.
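If you prefer to script a one-off check rather than use a web UI, the PageSpeed Insights API returns both lab (Lighthouse) and field (CrUX) data for a URL. Here is a rough sketch, assuming Node 18+ (for built-in fetch) in an ES module; the response paths follow the public v5 API, but verify them against the API documentation before relying on them:

// Run a one-off PageSpeed Insights test for a single URL.
const pageUrl = "https://example.com/"; // hypothetical page to test
const endpoint =
  "https://www.googleapis.com/pagespeedonline/v5/runPagespeed" +
  `?url=${encodeURIComponent(pageUrl)}&strategy=mobile`;

const response = await fetch(endpoint);
const data = await response.json();

// Lab data: the Lighthouse performance score (0 to 1).
console.log("Lighthouse performance:", data.lighthouseResult?.categories?.performance?.score);

// Field data: CrUX 75th-percentile LCP for real Chrome users, in milliseconds.
console.log("CrUX LCP (p75):", data.loadingExperience?.metrics?.LARGEST_CONTENTFUL_PAINT_MS?.percentile);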

INP is best measured for real users, where you can see the elements that users interact with most often and where the problems lie. But a free tool like the INP Debugger can be a good starting point if you don’t have RUM set up yet.

How To Monitor Core Web Vitals Continuously With Scheduled Lab-Based Testing

Running tests continuously has a few advantages over ad-hoc tests. Most importantly, continuous testing triggers alerts whenever a new issue appears on your website, allowing you to start fixing them right away. You’ll also have access to historical data, allowing you to see exactly when a regression occurred and letting you compare test results before and after to see what changed.

Scheduled lab tests are easy to set up using a website monitoring tool like DebugBear. Enter a list of website URLs and pick a device type, test location, and test frequency to get things running:

As this process runs, it feeds data into the detailed dashboard with historical Core Web Vitals data. You can monitor a number of pages on your website or track the speed of your competition to make sure you stay ahead.

When a regression occurs, you can dive deep into the results using DebugBear’s Compare mode. This mode lets you see before-and-after test results side by side, giving you the context you need to identify causes: you see exactly what changed. For example, in the following case, we can see that HTTP compression stopped working for a file, leading to an increase in page weight and longer download times.

How To Monitor Real User Core Web Vitals

Synthetic tests are great for super-detailed reporting of your page load time. However, other aspects of user experience, like layout shifts and slow interactions, heavily depend on how real users use your website. So, it’s worth setting up real user monitoring with a tool like DebugBear.

To monitor real user web vitals, you’ll need to install an analytics snippet that collects this data on your website. Once that’s done, you’ll be able to see data for all three Core Web Vitals metrics across your entire website.
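As a rough illustration of what such a snippet does, the sketch below collects the three metrics with the web-vitals library and beacons them to a collection endpoint. The /rum-collect path is a hypothetical stand-in, not a DebugBear API:

import { onLCP, onCLS, onINP, type Metric } from "web-vitals";

// Send each finalized metric to a collection endpoint.
// navigator.sendBeacon is preferred because it survives page unloads.
function send(metric: Metric) {
  const body = JSON.stringify({
    name: metric.name,
    value: metric.value,
    rating: metric.rating,
    page: location.pathname,
  });
  navigator.sendBeacon("/rum-collect", body); // hypothetical endpoint
}

onLCP(send);
onCLS(send);
onINP(send);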

To optimize your scores, you can go into the dashboard for each individual metric, select a specific page you’re interested in, and then dive deeper into the data.

For example, you can see whether a slow LCP score is caused by a slow server response, render blocking resources, or by the LCP content element itself.

You’ll also find that the LCP element varies between visitors. Lab test results are always the same, as they rely on a single fixed screen size. However, in the real world, visitors use a wide range of devices and will see different content when they open your website.

INP is tricky to debug without real user data. Yet an analytics tool like DebugBear can tell you exactly what page elements users are interacting with most often and which of these interactions are slow to respond.

Thanks to the new Long Animation Frames API, we can also see specific scripts that contribute to slow interactions. We can then decide to optimize these scripts, remove them from the page, or run them in a way that does not block interactions for as long.
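Under the hood, this kind of attribution comes from a PerformanceObserver watching long-animation-frame entries, which is currently a Chromium-only API. Here is a minimal sketch; the scripts, invoker, and sourceURL fields follow the Long Animation Frames spec, but they aren’t yet in TypeScript’s DOM typings, so the entry type is widened here:

// Log long animation frames and the scripts that contributed to them.
const observer = new PerformanceObserver((list) => {
  for (const entry of list.getEntries() as any[]) {
    if (entry.duration < 200) continue; // only frames long enough to hurt INP
    console.log(`Long animation frame: ${Math.round(entry.duration)} ms`);
    for (const script of entry.scripts ?? []) {
      // invoker: what triggered the script (e.g., an event handler);
      // sourceURL: the file the code was loaded from.
      console.log(`  ${script.invoker} (${script.sourceURL}): ${Math.round(script.duration)} ms`);
    }
  }
});
observer.observe({ type: "long-animation-frame", buffered: true });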

Conclusion

Continuously monitoring Core Web Vitals lets you see how website changes impact user experience and ensures you get alerted when something goes wrong. While it’s possible to measure Core Web Vitals using a wide range of tools, those tools are limited by the type of data they use to evaluate performance, not to mention they only provide a single snapshot of performance at a specific point in time.

A tool like DebugBear gives you access to several different types of data that you can use to troubleshoot performance and optimize your website, complete with RUM capabilities that offer a historical record of performance for identifying issues where and when they occur. Sign up for a free DebugBear trial here.


20 Best New Websites, April 2024

April 15th, 2024

Welcome to our sites of the month for April. With some websites, the details make all the difference, while in others, it is the overall tone or aesthetic that lifts the standard. In this collection, we have instances of both.


Sliding 3D Image Frames In CSS

April 12th, 2024

In a previous article, we played with CSS masks to create cool hover effects where the main challenge was to rely only on the <img> tag as our markup. In this article, we pick up where we left off by “revealing” the image from behind a sliding door sort of thing — like opening up a box and finding a photograph in it.

This is because the padding has a transition that goes from s - 2*b to 0. Meanwhile, the background transitions from 100% (equivalent to --s) to 0. There’s a difference equal to 2*b. The background covers the entire area, while the padding covers less of it. We need to account for this.

Ideally, the padding transition would take less time to complete and have a small delay at the beginning to sync things up, but finding the correct timing won’t be an easy task. Instead, let’s increase the padding transition’s range to make it equal to the background.

img {
  --h: calc(var(--s) - var(--b));
  padding-top: min(var(--h), var(--s) - 2*var(--b));
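  /* Note: for the --h transition below to interpolate smoothly, --h must be
     registered as a <length> custom property (e.g., with an @property rule);
     unregistered custom properties change discretely rather than animating. */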
  transition: --h 1s linear;
}
img:hover {
  --h: calc(-1 * var(--b));
}

The new variable, --h, transitions from s - b to -b on hover, so we have the needed range since the difference is equal to --s, making it equal to the background and clip-path transitions.

The trick is the min() function. When --h transitions from s - b to s - 2*b, the padding is equal to s - 2*b. No padding changes during that brief transition. Then, when --h reaches 0 and transitions from 0 to -b, the padding remains equal to 0 since, by default, it cannot be a negative value.

It would be more intuitive to use clamp() instead:

padding-top: clamp(0px, var(--h), var(--s) - 2*var(--b));

That said, we don’t need to specify the lower parameter since padding cannot be negative and will, by default, be clamped to 0 if you give it a negative value.

We are getting much closer to the final result!

First, we increase the border’s thickness on the left and bottom sides of the image:

img {
  --b: 10px; /* the image border */
  --d: 30px; /* the depth */

  border: solid #0000;
  border-width: var(--b) var(--b) calc(var(--b) + var(--d)) calc(var(--b) + var(--d));
}

Second, we add a conic-gradient() on the background to create darker colors around the box:

background: 
  conic-gradient(at left var(--d) bottom var(--d),
   #0000 25%,#0008 0 62.5%,#0004 0) 
  var(--c);

Notice the semi-transparent black color values (e.g., #0008 and #0004). The slight bit of transparency blends with the colors behind it to create the illusion of a dark variation of the main color since the gradient is placed above the background color.

And lastly, we apply a clip-path to cut out the corners that establish the 3D box.

clip-path: polygon(var(--d) 0, 100% 0, 100% calc(100% - var(--d)), calc(100% - var(--d)) 100%, 0 100%, 0 var(--d));

See the Pen The image within a 3D box by Temani Afif.

Now that we see and understand how the 3D effect is built, let’s put back the things we removed earlier, starting with the padding:

See the Pen Putting back the padding animation by Temani Afif.

It works fine. But note how we’ve introduced the depth (--d) to the formula. That’s because the bottom border is no longer equal to b but b + d.

--h: calc(var(--s) - var(--b) - var(--d));
padding-top: min(var(--h),var(--s) - 2*var(--b) - var(--d));

Let’s do the same thing with the linear gradient. We need to decrease its size so it covers the same area as it did before we introduced the depth so that it doesn’t overlap with the conic gradient:

See the Pen Putting back the gradient animation by Temani Afif.

We are getting closer! The last piece we need to add back in from earlier is the clip-path transition that is combined with the box-shadow. We cannot reuse the same code we used before since we changed the clip-path value to create the 3D box shape. But we can still transition it to get the sliding result we want.

The idea is to have two points at the top that move up and down to reveal and hide the box-shadow while the other points remain fixed. Here is a small video to illustrate the movement of the points.

See that? We have five fixed points. The two at the top move to increase the area of the polygon and reveal the box shadow.

img {
  clip-path: polygon(
    var(--d) 0, /* --> var(--d) calc(-1*(var(--s) - var(--d))) */
    100%     0, /* --> 100%     calc(-1*(var(--s) - var(--d))) */

    /* the fixed points */ 
    100% calc(100% - var(--d)), /* 1 */
    calc(100% - var(--d)) 100%, /* 2 */
    0 100%,                     /* 3 */
    0 var(--d),                 /* 4 */
    var(--d) 0);                /* 5 */
}

And we’re done! We’re left with a nice 3D frame around the image element with a cover that slides up and down on hover. And we did it with zero extra markup or reaching for pseudo-elements!

See the Pen 3D image with reveal effect by Temani Afif.

And here is the first demo I shared at the start of this article, showing the two sliding variations.

See the Pen Image gift box (hover to reveal) by Temani Afif.

This last demo is an optimized version of what we did together. I have written most of the formulas using the variable --h so that I only update one value on hover. It also includes another variation. Can you reverse-engineer it and see how its code differs from the one we did together?

One More 3D Example

Want another fancy effect that uses 3D effects and sliding overlays? Here’s one I put together using a different 3D perspective where the overlay splits open rather than sliding from one side to the other.

See the Pen Image gift box II (hover to reveal) by Temani Afif.

Your homework is to dissect the code. It may look complex, but if you trace the steps we completed for the original demo, I think you’ll find that it’s not a terribly different approach. The sliding effect still combines the padding, the object-* properties, and clip-path but with different values to produce this new effect.

Conclusion

I hope you enjoyed this little 3D image experiment and the fancy effect we applied to it. I know that adding an extra element (i.e., a parent <div> as a wrapper) to the markup would have made the effect a lot easier to achieve, as would pseudo-elements and translations. But we are here for the challenge and learning opportunity, right?

Limiting the HTML to only a single element allows us to push the limits of CSS to discover new techniques that can save us time and bytes, especially in those situations where you might not have direct access to modify HTML, like when you’re working in a CMS template. Don’t look at this as an over-complicated exercise. It’s an exercise that challenges us to leverage the power and flexibility of CSS.


Connecting With Users: Applying Principles Of Communication To UX Research

April 9th, 2024

Communication is in everything we do. We communicate with users through our research, our design, and, ultimately, the products and services we offer. UX practitioners and those working on digital product teams benefit from understanding principles of communication and their application to our craft. Treating our UX processes as a mode of communication between users and the digital environment can help unveil in-depth, actionable insights.

In this article, I’ll focus on UX research. Communication is a core component of UX research, as it serves to bridge the gap between research insights, design strategy, and business outcomes. UX researchers, designers, and those working with UX researchers can apply key aspects of communication theory to help gather valuable insights, enhance user experiences, and create more successful products.

Fundamentals of Communication Theory

Communications as an academic field encompasses various models and principles that highlight the dynamics of communication between individuals and groups. Communication theory examines the transfer of information from one person or group to another. It explores how messages are transmitted, encoded, and decoded, acknowledges the potential for interference (or ‘noise’), and accounts for feedback mechanisms in enhancing the communication process.

In this article, I will focus on the Transactional Model of Communication. There are many other models and theories in the academic literature on communication. I have included references at the end of the article for those interested in learning more.

The Transactional Model of Communication (Figure 1) is a two-way process that emphasizes the simultaneous sending and receiving of messages and feedback. Importantly, it recognizes that communication is shaped by context and is an ongoing, evolving process. I’ll use this model and understanding when applying principles from the model to UX research. You’ll find that much of what is covered in the Transactional Model would also fall under general best practices for UX research, suggesting even if we aren’t communications experts, much of what we should be doing is supported by research in this field.

Understanding the Transactional Model

Let’s take a deeper dive into the six key factors and their applications within the realm of UX research:

  1. Sender: In UX research, the sender is typically the researcher who conducts interviews, facilitates usability tests, or designs surveys. For example, if you’re administering a user interview, you are the sender who initiates the communication process by asking questions.
  2. Receiver: The receiver is the individual who decodes and interprets the messages sent by the sender. In our context, this could be the user you interview or the person taking a survey you have created. They receive and process your questions, providing responses based on their understanding and experiences.
  3. Message: This is the content being communicated from the sender to the receiver. In UX research, the message can take various forms, like a set of survey questions, interview prompts, or tasks in a usability test.
  4. Channel: This is the medium through which the communication flows. For instance, face-to-face interviews, phone interviews, email surveys administered online, and usability tests conducted via screen sharing are all different communication channels. You might use multiple channels simultaneously, for example, communicating over voice while also using a screen share to show design concepts.
  5. Noise: Any factor that may interfere with the communication is regarded as ‘noise.’ In UX research, this could be complex jargon that confuses respondents in a survey, technical issues during a remote usability test, or environmental distractions during an in-person interview.
  6. Feedback: The receiver’s response to the message is called feedback. For example, the responses given by a user during an interview, the data collected from a completed survey, or the physical reactions of a usability testing participant while completing a task are all forms of feedback.

Applying the Transactional Model of Communication to Preparing for UX Research

We can become complacent or feel rushed to create our research protocols. I think this is natural given the pace of many workplaces and our need to deliver results quickly. You can apply the lens of the Transactional Model of Communication to your research preparation without adding much time. Doing so should:

  • Improve Clarity
    The model provides a clear representation of communication, empowering the researcher to plan and conduct studies more effectively.
  • Minimize Misunderstanding
    By highlighting potential noise sources, user confusion or misunderstandings can be better anticipated and mitigated.
  • Enhance Participant Engagement
    With your attentive eye on feedback, participants are likely to feel valued, thus increasing active involvement and quality of input.

You can address the specific elements of the Transactional Model through the following steps while preparing for research:

Defining the Sender and Receiver

In UX research, the sender can often be the UX researcher conducting the study, while the receiver is usually the research participant. Understanding this dynamic can help researchers craft questions or tasks more empathetically and efficiently. You should try to collect some information on your participant in advance to prepare yourself for building a rapport.

For example, if you are conducting contextual inquiry with the field technicians of an HVAC company, you’ll want to dress appropriately to reflect your understanding of the context in which your participants (receivers) will be conducting their work. Showing up dressed in formal attire might be off-putting and create a negative dynamic between sender and receiver.

Message Creation

The message in UX research typically is the questions asked or tasks assigned during the study. Careful consideration of tenor, terminology, and clarity can aid data accuracy and participant engagement. Whether you are interviewing or creating a survey, you need to double-check that your audience will understand your questions and provide meaningful answers. You can pilot-test your protocol or questionnaire with a few representative individuals to identify areas that might cause confusion.

Using the HVAC example again, you might find that field technicians use certain terminology differently than you expect. Asking them about the “tools” they use to complete their tasks might yield answers about physical tools, like a pipe and wrench, rather than the digital tools you’d find on a computer or smartphone.

Choosing the Right Channel

The channel selection depends on the method of research. For instance, face-to-face methods might use physical verbal communication, while remote methods might rely on emails, video calls, or instant messaging. The choice of the medium should consider factors like tech accessibility, ease of communication, reliability, and participant familiarity with the channel. For example, you introduce an additional challenge (noise) if you ask someone who has never used an iPhone to test an app on an iPhone.

Minimizing Noise

Noise in UX research comes in many forms, from unclear questions inducing participant confusion to technical issues in remote interviews that cause interruptions. The key is to foresee potential issues and have preemptive solutions ready.

Facilitating Feedback

You should be prepared for how you might collect and act on participant feedback during the research. Encouraging regular feedback from the user during UX research ensures their understanding and that they feel heard. This could range from asking them to ‘think aloud’ as they perform tasks or encouraging them to email queries or concerns after the session. You should document any noise that might impact your findings and account for that in your analysis and reporting.

Track Your Alignment to the Framework

You can track what you do to align your processes with the Transactional Model prior to and during research using a spreadsheet. I’ll provide an example of a spreadsheet I’ve used in the later case study section of this article. You should create your spreadsheet during the process of preparing for research, as some of what you do to prepare should align with the factors of the model.

You can use these tips for preparation regardless of the specific research method you are undertaking. Let’s now look closer at a few common methods and get specific on how you can align your actions with the Transactional Model.

Applying the Transactional Model to Common UX Research Methods

UX research relies on interaction with users. We can easily incorporate aspects of the Transactional Model of Communication into our most common methods. Utilizing the Transactional Model in conducting interviews, surveys, and usability testing can help provide structure to your process and increase the quality of insights gathered.

Interviews

Interviews are a common method used in qualitative UX research. They provide the perfect method for applying principles from the Transactional Model. In line with the Transactional Model, the researcher (sender) sends questions (messages) in-person or over the phone/computer medium (channel) to the participant (receiver), who provides answers (feedback) while contending with potential distraction or misunderstanding (noise). Reflecting on communication as transactional can help remind us we need to respect the dynamic between ourselves and the person we are interviewing. Rather than approaching an interview as a unidirectional interrogation, researchers need to view it as a conversation.

Applying the Transactional Model to conducting interviews means we should account for a number of facts to allow for high-quality communication. Note how the following overlap with what we typically call best practices.

Asking Open-ended Questions

To truly harness a two-way flow of communication, open-ended questions, rather than close-ended ones, are crucial. For instance, rather than asking, “Do you use our mobile application?” ask, “Can you describe your use of our mobile app?”. This encourages the participant to share more expansive and descriptive insights, furthering the dialogue.

Actively Listening

As the success of an interview relies on the participant’s responses, active listening is a crucial skill for UX researchers. The researcher should encourage participants to express their thoughts and feelings freely. Reflective listening techniques, such as paraphrasing or summarizing what the participant has shared, can reinforce to the interviewee that their contributions are being acknowledged and valued. It also provides an opportunity to clarify potential noise or misunderstandings that may arise.

Being Responsive

Building on the simultaneous send-receive nature of the Transactional Model, researchers must remain responsive during interviews. Providing non-verbal cues (like nodding) and verbal affirmations (“I see,” “Interesting”) lets participants know their message is being received and understood, making them feel comfortable and more willing to share.

Minimizing Noise

We should always attempt to account for noise in advance, as well as during our interview sessions. Noise, in the form of misinterpretations or distractions, can disrupt effective communication. Researchers can proactively reduce noise by conducting a dry run in advance of the scheduled interviews. This helps you become more fluent at going through the interview and also helps identify areas that might need improvement or be misunderstood by participants. You also reduce noise by creating a conducive interview environment, minimizing potential distractions, and asking clarifying questions during the interview whenever necessary.

For example, if a participant uses a term the researcher doesn’t understand, the researcher should politely ask for clarification rather than guessing its meaning and potentially misinterpreting the data.

Additional forms of noise can include participant confusion or distraction. You should let participants know to ask if they are unclear on anything you say or do. It’s a good idea to always ask participants to put their smartphones on mute. You should only provide information critical to the process when introducing the interview or tasks. For example, you don’t need to give a full background of the history of the product you are researching if that isn’t required for the participant to complete the interview. However, you should let them know the purpose of the research, gain their consent to participate, and inform them of how long you expect the session to last.

Strategizing the Flow

Researchers should build strategic thinking into their interviews to support the Transaction Model. Starting the interview with less intrusive questions can help establish rapport and make the participant more comfortable, while more challenging or sensitive questions can be left for later when the interviewee feels more at ease.

A well-planned interview encourages a fluid dialogue and exchange of ideas. This is another area where conducting a dry run can help to ensure high-quality research. You and your dry-run participants should recognize areas where questions aren’t flowing in the best order or don’t make sense in the context of the interview, allowing you to correct the flow in advance.

While much of what the Transactional Model informs for interviews already aligns with common best practices, the model suggests we give deeper consideration to factors we tend to overlook when we become overly comfortable with interviewing or are unaware of their implications: context considerations, power dynamics, and post-interview actions.

Context Considerations

You need to account for both the context of the participant, e.g., their background, demographic, and psychographic information, as well as the context of the interview itself. You should make subtle yet meaningful modifications depending on the channel over which you are conducting the interview.

For example, you should utilize video and be aware of your facial and physical responses if you are conducting an interview using an online platform, whereas if it’s a phone interview, you will need to rely on verbal affirmations that you are listening and following along, while also being mindful not to interrupt the participant while they are speaking.

Power Dynamics

You need to be aware of how your role, background, and identity might influence the power dynamics of the interview. You can attempt to address power dynamics by sharing research goals transparently and addressing any potential concerns about bias a participant shares.

We are responsible for creating a safe and inclusive space for our interviews. You do this through the use of inclusive language, listening actively without judgment, and being flexible to accommodate different ways of knowing and expressing experiences. You should also empower participants as collaborators whenever possible. You can offer opportunities for participants to share feedback on the interview process and analysis. Doing this validates participants’ experiences and knowledge and ensures their voices are heard and valued.

Post-Interview Actions

You have a number of options for actions that can close the loop of your interviews with participants in line with the “feedback” the model suggests is a critical part of communication. Some tactics you can consider following your interview include:

  • Debriefing
    Dedicate a few minutes at the end to discuss the participant’s overall experience, impressions, and suggestions for future interviews.
  • Short surveys
    Send a brief survey via email or an online platform to gather feedback on the interview experience.
  • Follow-up calls
    Consider follow-up calls with specific participants to delve deeper into their feedback and gain additional insight if you find that is warranted.
  • Thank you emails
    Include a “feedback” section in your thank you email, encouraging participants to share their thoughts on the interview.

You also need to do something with the feedback you receive. Researchers and product teams should make time for reflexivity and critical self-awareness.

As practitioners in a human-focused field, we are expected to continuously examine how our assumptions and biases might influence our interviews and findings.

We shouldn’t practice our craft in a silo. Instead, seeking feedback from colleagues and mentors to maintain ethical research practices should be a standard practice for interviews and all UX research methods.

By considering interviews as an ongoing transaction and exchange of ideas rather than a unidirectional Q&A, UX researchers can create a more communicative and engaging environment. You can see how models of communication have informed best practices for interviews. With a better knowledge of the Transactional Model, you can go deeper and check your work against the framework of the model.

Surveys

The Transactional Model of Communication reminds us to acknowledge the feedback loop even in seemingly one-way communication methods like surveys. Instead of merely sending out questions and collecting responses, we need to provide space for respondents to voice their thoughts and opinions freely. When we make participants feel heard, engagement with our surveys should increase, dropouts should decrease, and response quality should improve.

Like other methods, surveys involve the researcher(s) creating the instructions and questionnaire (sender), the survey, including any instructions, disclaimers, and consent forms (the message), how the survey is administered, e.g., online, in person, or pen and paper (the channel), the participant (receiver), potential misunderstandings or distractions (noise), and responses (feedback).

Designing the Survey

Understanding the Transactional Model will help researchers design more effective surveys. Researchers are encouraged to be aware of both their role as the sender and to anticipate the participant’s perspective as the receiver. Begin surveys with clear instructions, explaining why you’re conducting the survey and how long it’s estimated to take. This establishes a more communicative relationship with respondents right from the start. Test these instructions with multiple people prior to launching the survey.

Crafting Questions

The questions should be crafted to encourage feedback and not just a simple yes or no. You should consider asking scaled questions or items that have been statistically validated to measure certain attributes of users.

For example, if you were looking deeper at a mobile banking application, rather than asking, “Did you find our product easy to use?” you would want to break that out into multiple aspects of the experience and ask about each with a separate question such as “On a scale of 1–7, with 1 being extremely difficult and 7 being extremely easy, how would you rate your experience transferring money from one account to another?”.

Minimizing Noise

Reducing ‘noise,’ or misunderstandings, is crucial for increasing the reliability of responses. Your first line of defense in reducing noise is to make sure you are sampling from the appropriate population you want to conduct the research with. You need to use a screener that will filter out non-viable participants prior to including them in the survey. You do this when you correctly identify the characteristics of the population you want to sample from and then exclude those falling outside of those parameters.

Additionally, you should focus on prioritizing finding participants through random sampling from the population of potential participants versus using a convenience sample, as this helps to ensure you are collecting reliable data.

When looking at the survey itself, there are a number of recommendations to reduce noise. You should ensure questions are easily understandable, avoid technical jargon, and sequence questions logically. A question bank should be reviewed and tested before being finalized for distribution.

For example, question statements like “Do you use and like this feature?” can confuse respondents because they are actually two separate questions: do you use the feature, and do you like the feature? You should separate out questions like this into more than one question.

You should use visual aids that are relevant whenever possible to enhance the clarity of the questions. For example, if you are asking questions about an application’s “Dashboard” screen, you might want to provide a screenshot of that page so survey takers have a clear understanding of what you are referencing. You should also avoid the use of jargon if you are surveying a non-technical population and explain any terminology that might be unclear to participants taking the survey.

The Transactional Model suggests active participation in communication is necessary for effective communication. Participants can become distracted or take a survey without intending to provide thoughtful answers. You should consider adding a question somewhere in the middle of the survey to check that participants are paying attention and responding appropriately, particularly for longer surveys.

This is often done using a simple math problem such as “What is the answer to 1+1?” Anyone not responding with the answer of “2” might not be adequately paying attention to the responses they are providing and you’d want to look closer at their responses, eliminating them from your analysis if deemed appropriate.

Encouraging Feedback

While descriptive feedback questions are one way of promoting dialogue, you can also include areas where respondents can express any additional thoughts or questions they have outside of the set question list. This is especially useful in online surveys, where researchers can’t immediately address participants’ questions or clarify doubts.

You should be mindful that too many open-ended questions can cause fatigue, so you should limit the number of open-ended questions. I recommend two to three open-ended questions depending on the length of your overall survey.

Post-Survey Actions

After collecting and analyzing the data, you can send follow-up communications to the respondents. Let them know the changes made based on their feedback, thank them for their participation, or even share a summary of the survey results. This fulfills the Transactional Model’s feedback loop and communicates to the respondent that their input was received, valued, and acted upon.

You can also meet this suggestion by providing an email address for participants to follow up if they desire more information post-survey. You are allowing them to complete the loop themselves if they desire.

Applying the Transactional Model to surveys can breathe new life into the way surveys are conducted in UX research. It encourages active participation from respondents, making the process more interactive and engaging while enhancing the quality of the data collected. You can experiment with applying some or all of the steps listed above. You will likely find you are already doing much of what’s mentioned; however, being explicit can allow you to make sure you are thoughtfully applying these principles from the field of communication.

Usability Testing

Usability testing is another clear example of a research method highlighting components of the Transactional Model. In the context of usability testing, the Transactional Model of Communication’s application opens a pathway for a richer understanding of the user experience by positioning both the user and the researcher as sender and receiver of communication simultaneously.

Here are some ways a researcher can use elements of the Transactional Model during usability testing:

Task Assignment as Message Sending

When a researcher assigns tasks to a user during usability testing, they act as the sender in the communication process. To ensure the user accurately receives the message, these tasks need to be clear and well-articulated. For example, a task like “Register a new account on the app” sends a clear message to the user about what they need to do.

You don’t need to tell them how to do the task, as usually, that’s what we are trying to determine from our testing, but if you are not clear on what you want them to do, your message will not resonate in the way it is intended. This is another area where a dry run in advance of the testing is an optimal solution for making sure tasks are worded clearly.

Observing and Listening as Message Receiving

As the participant interacts with the application, concept, or design, the researcher, as the receiver, picks up on verbal and nonverbal cues. For instance, if a user is clicking around aimlessly or murmuring in confusion, the researcher can take these as feedback about certain elements of the design that are unclear or hard to use. You can also ask the user to explain why they are giving these cues you note as a way to provide them with feedback on their communication.

Real-time Interaction

The transactional nature of the model recognizes the importance of real-time interaction. For example, if during testing, the user is unsure of what a task means or how to proceed, the researcher can provide clarification without offering solutions or influencing the user’s action. This interaction follows the communication flow prescribed by the transactional model. We lose the ability to do this during unmoderated testing; however, many design elements are forms of communication that can serve to direct users or clarify the purpose of an experience (to be covered more in article two).

Noise

In usability testing, noise could mean unclear tasks, users’ preconceived notions, or even issues like slow software response. Acknowledging noise can help researchers plan and conduct tests better. Again, carrying out a pilot test can help identify any noise in the main test scenarios, allowing for necessary tweaks before actual testing. Other forms of noise can be less obvious but equally intrusive. For example, if you are conducting a test using a MacBook laptop and your participant is used to a PC, there is noise you need to account for, given their unfamiliarity with the laptop you’ve provided.

The fidelity of the design artifact being tested might introduce another form of noise. I’ve always advocated testing at any level of fidelity, but you should note that if you are using “Lorem Ipsum” or black and white designs, this potentially adds noise.

One of my favorite examples of this was a time when I was testing a financial services application, and the designers had put different balances on the screen; however, the total for all balances had not been added up to the correct total. Virtually every person tested noted this discrepancy, although it had nothing to do with the tasks at hand. I had to acknowledge we’d introduced noise to the testing. As at least one participant noted, they wouldn’t trust a tool that wasn’t able to total balances correctly.

Encouraging Feedback

Under the Transactional Model’s guidance, feedback isn’t just final thoughts after testing; it should be facilitated at each step of the process. Encouraging ‘think aloud’ protocols, where the user verbalizes their thoughts, reactions, and feelings during testing, ensures a constant flow of useful feedback.

You are receiving feedback throughout the process of usability testing, and the model provides guidance on how you should use that feedback to create a shared meaning with the participants. You will ultimately summarize this meaning in your report. You’ll later end up uncovering if this shared meaning was correctly interpreted when you design or redesign the product based on your findings.

We’ve now covered how to apply the Transactional Model of Communication to three common UX Research methods. All research with humans involves communication. You can break down other UX methods using the Model’s factors to make sure you engage in high-quality research.

Analyzing and Reporting UX Research Data Through the Lens of the Transactional Model

The Transactional Model of Communication doesn’t only apply to the data collection phase (interviews, surveys, or usability testing) of UX research. Its principles can provide valuable insights during the data analysis process.

The Transactional Model instructs us to view any communication as an interactive, multi-layered dialogue — a concept that is particularly useful when unpacking user responses. Consider the ‘message’ components: In the context of data analysis, the messages are the users’ responses. As researchers, thinking critically about how respondents may have internally processed the survey questions, interview discussion, or usability tasks can yield richer insights into user motivations.

Understanding Context

Just as the Transactional Model emphasizes the simultaneous interchange of communication, UX researchers should consider the user’s context while interpreting data. Decoding the meaning behind a user’s words or actions involves understanding their background, experiences, and the situation when they provide responses.

Deciphering Noise

In the Transactional Model, noise presents a potential barrier to effective communication. Similarly, researchers must be aware of snowballing themes or frequently highlighted issues during analysis. Noise, in this context, could involve patterns of confusion, misunderstandings, or consistently highlighted problems by users. You need to account for this, e.g., the example I provided where participants constantly referred to the incorrect math on static wireframes.

Considering Sender-Receiver Dynamics

Remember that as a UX researcher, your interpretation of user responses will be influenced by your understandings, biases, or preconceptions, just as the responses were influenced by the user’s perceptions. By acknowledging this, researchers can strive to neutralize any subjective influence and ensure the analysis remains centered on the user’s perspective. You can ask other researchers to double-check your work to attempt to account for bias.

For example, if you come up with a clear theme that users need better guidance in the application you are testing, another researcher from outside of the project should come to a similar conclusion if they view the data; if not, you should have a conversation with them to determine what different perspectives you are each bringing to the data analysis.

Reporting Results

Understanding your audience is crucial for delivering a persuasive UX research presentation. Tailoring your communication to resonate with the specific concerns and interests of your stakeholders can significantly enhance the impact of your findings. Here are some more details:

  • Identify Stakeholder Groups
    Identify the different groups of stakeholders who will be present in your audience. This could include designers, developers, product managers, and executives.
  • Prioritize Information
    Prioritize the information based on what matters most to each stakeholder group. For example, designers might be more interested in usability issues, while executives may prioritize business impact.
  • Adapt Communication Style
    Adjust your communication style to align with the communication preferences of each group. Provide technical details for developers and emphasize user experience benefits for executives.

Acknowledging Feedback

Respecting this Transactional Model’s feedback loop, remember to revisit user insights after implementing design changes. This ensures you stay user-focused, continuously validating or adjusting your interpretations based on users’ evolving feedback. You can do this in a number of ways. You can reconnect with users to show them updated designs and ask questions to see if the issues you attempted to resolve were resolved.

Another way to address this without having to reconnect with the users is to create a spreadsheet or other document to track all the recommendations that were made and reconcile the changes with what is then updated in the design. You should be able to map the changes users requested to updates or additions to the product roadmap for future updates. This acknowledges that users were heard and that an attempt to address their pain points will be documented.

Crucially, the Transactional Model teaches us that communication is rarely simple or one-dimensional. It encourages UX researchers to take a more nuanced, context-aware approach to data analysis, resulting in deeper user understanding and more accurate, user-validated results.

By maintaining an ongoing feedback loop with users and continually refining interpretations, researchers can ensure that their work remains grounded in real user experiences and needs.

Tracking Your Application of the Transactional Model to Your Practice

You might find it useful to track how you align your research planning and execution to the framework of the Transactional Model. I’ve created a spreadsheet to outline key factors of the model and used this for some of my work. Demonstrated below is an example derived from a study conducted for a banking client that included interviews and usability testing. I completed this spreadsheet during the process of planning and conducting interviews. Anonymized data from our study has been furnished to show an example of how you might populate a similar spreadsheet with your information.

You can customize the spreadsheet structure to fit your specific research topic and interview approach. By documenting your application of the transactional model, you can gain valuable insights into the dynamic nature of communication and improve your interview skills for future research.

| Stage | Columns | Description | Example |
|---|---|---|---|
| Pre-Interview Planning | Topic/Question (Aligned with research goals) | Identify the research question and design questions that encourage open-ended responses and co-construction of meaning. | Testing mobile banking app’s bill payment feature. How do you set up a new payee? How would you make a payment? What are your overall impressions? |
| | Participant Context | Note relevant demographic and personal information to tailor questions and avoid biased assumptions. | 35-year-old working professional, frequent user of the online banking and mobile application but unfamiliar with using the app for bill pay. |
| | Engagement Strategies | Outline planned strategies for active listening, open-ended questions, clarification prompts, and building rapport. | Open-ended follow-up questions (“Can you elaborate on XYZ?” or “Please explain more to me what you mean by XYZ.”), active listening cues, positive reinforcement (“Thank you for sharing those details”). |
| | Shared Understanding | List potential challenges to understanding participant’s perspectives and strategies for ensuring shared meaning. | Initially, the participant expressed some confusion about the financial jargon I used. I clarified and provided simpler [non-jargon] explanations, ensuring we were on the same page. |
| During Interview | Verbal Cues | Track participant’s language choices, including metaphors, pauses, and emotional expressions. | Participant used a hesitant tone when describing negative experiences with the bill payment feature. When questioned, they stated it was “likely their fault” for not understanding the flow [it isn’t their fault]. |
| | Nonverbal Cues | Note participant’s nonverbal communication like body language, facial expressions, and eye contact. | Frowning and crossed arms when discussing specific pain points. |
| | Researcher Reflexivity | Record moments where your own biases or assumptions might influence the interview and potential mitigation strategies. | Recognized my own familiarity with the app might bias my interpretation of users’ understanding [e.g., going slower than I would have when entering information]. Asked clarifying questions to avoid imposing my assumptions. |
| | Power Dynamics | Identify instances where power differentials emerge and actions taken to address them. | Participant expressed trust in the research but admitted feeling hesitant to criticize the app directly. I emphasized anonymity and encouraged open feedback. |
| | Unplanned Questions | List unplanned questions prompted by the participant’s responses that deepen understanding. | What alternative [non-bank app] methods for paying bills do you use? (Prompted by participant’s frustration with app bill pay.) |
| Post-Interview Reflection | Meaning Co-construction | Analyze how both parties contributed to building shared meaning and insights. | Through dialogue, we collaboratively identified specific design flaws in the bill payment interface and explored additional pain points and areas that worked well. |
| | Openness and Flexibility | Evaluate how well you adapted to unexpected responses and maintained an open conversation. | Adapted questioning based on participant’s emotional cues and adjusted language to minimize technical jargon when that issue was raised. |
| | Participant Feedback | Record any feedback received from participants regarding the interview process and areas for improvement. | Thank you for the opportunity to be in the study. I’m glad my comments might help improve the app for others. I’d be happy to participate in future studies. |
| | Ethical Considerations | Reflect on whether the interview aligned with principles of transparency, reciprocity, and acknowledging power dynamics. | Maintained anonymity throughout the interview and ensured informed consent was obtained. Data will be stored and secured as outlined in the research protocol. |
| | Key Themes/Quotes | Use this column to identify emerging themes or save quotes you might refer to later when creating the report. | Frustration with a confusing interface, lack of intuitive navigation, and desire for more customization options. |
| | Analysis Notes | Use as many lines as needed to add notes for consideration during analysis. | Add notes here. |

You can use the suggested columns from this table as you see fit, adding or subtracting as needed, particularly if you use a method other than interviews. I usually add the following additional columns for logistical purposes:

  • Date of Interview,
  • Participant ID,
  • Interview Format (e.g., in person, remote, video, phone).

Conclusion

By incorporating aspects of communication theory into UX research, UX researchers and those who work with UX researchers can enhance the effectiveness of their communication strategies, gather more accurate insights, and create better user experiences. Communication theory provides a framework for understanding the dynamics of communication, and its application to UX research enables researchers to tailor their approaches to specific audiences, employ effective interviewing techniques, design surveys and questionnaires, establish seamless communication channels during usability testing, and interpret data more effectively.

As the field of UX research continues to evolve, integrating communication theory into research practices will become increasingly essential for bridging the gap between users and design teams, ultimately leading to more successful products that resonate with target audiences.

As a UX professional, it is important to continually explore and integrate new theories and methodologies to enhance your practice. By leveraging communication theory principles, you can better understand user needs, improve the user experience, and drive successful outcomes for digital products and services.

Integrating communication theory into UX research is an ongoing journey of learning and implementing best practices. Embracing this approach empowers researchers to effectively communicate their findings to stakeholders and foster collaborative decision-making, ultimately driving positive user experiences and successful design outcomes.

References and Further Reading


Exciting New Tools for Designers, April 2024

April 8th, 2024

Welcome to our April tools collection. There are no practical jokes here, just practical gadgets, services, and apps to make life that little bit easier and keep you working smarter.


Managing User Focus with :focus-visible

April 5th, 2024

This is going to be the second post in a small series we are doing on form accessibility. If you missed the first post, check out Accessible Forms with Pseudo Classes. In this post, we are going to look at :focus-visible and how to use it on your websites!

Focus Touchpoint

Before we move forward with :focus-visible, let’s revisit how :focus works in your CSS. Focus is the visual indicator that an element is being interacted with via keyboard, mouse, trackpad, or assistive technology. Certain elements are naturally interactive, like links, buttons, and form elements. We want to make sure that our users know where they are and the interactions they are making.

Remember don’t do this in your CSS!

:focus {
  outline: 0;
}

/*** OR ***/

:focus {
  outline: none;
}

When you remove focus, you remove it for EVERYONE! We want to make sure that we are preserving the focus.

If for any reason you do need to remove the focus, make sure there are also fallback :focus styles for your users. That fallback can match your branding colors, but make sure those colors are also accessible. If marketing, design, or branding doesn’t like the default focus ring styles, then it is time to start having conversations and collaborate with them on the best way of adding it back in.

What is focus-visible?

The :focus-visible pseudo-class works much like the default :focus pseudo-class: it gives the user an indicator that something on the page has focus. The way you write :focus-visible is cut and dried:

:focus-visible {
  /* ... */
}

When using :focus-visible with a specific element, the syntax looks something like this:

.your-element:focus-visible {
  /*...*/
}

The great thing about :focus-visible is that you can make your element stand out, bright and bold, without worrying about the indicator showing when the element is clicked or tapped. If you choose not to style the pseudo-class, the default user agent focus ring will be used, which some find undesirable.

Backstory of focus-visible

Before we had :focus-visible, user agent styling would apply :focus to most focusable elements on the page (buttons, links, and so on), drawing an outline or “focus ring” around them. This was widely considered ugly; most people didn’t like the default focus ring the browser provided, and as a result most authors removed it… without a fallback. Remember: when you remove :focus styles, you decrease usability and make the experience inaccessible for keyboard users.

In the current state of the web, the browser no longer shows a visible focus indicator on every element that receives focus. Instead, it uses heuristics to determine when showing one would help the user and provides a focus ring in those cases. According to Khan Academy, a heuristic is “a technique that guides an algorithm to find good choices.”

What this means is that the browser can detect whether or not the user is interacting with the experience from a keyboard, mouse, or trackpad and based on that input type, it adds or removes the focus ring. The example in this post highlights the input interaction.

In the early days of :focus-visible, we used a polyfill created by Alice Boxhall and Brian Kardell to handle the focus ring. Mozilla also shipped its own pseudo-class, :-moz-focusring, before the official specification. If you want to learn more about the early days of the focus ring, check out A11y Casts with Rob Dodson.

Focus Importance

There are plenty of reasons why focus is important in your application. For one, like I stated above, we as ambassadors of the web have to make sure we are providing the best, accessible experience we can. We don’t want any of our users guessing where they are while they are navigating through the experience.

One example that always comes to mind is the Two Blind Brothers website. If you go to the website and click/tap the closed eye in the bottom left corner (this works on mobile, too), you will see the eye open and a simulation begin. Both of the brothers, Bradford and Bryan Manning, were diagnosed at a young age with Stargardt’s disease, a form of macular degeneration of the eye. Over time, both brothers will be completely blind. Visit the site and click the eye to see how they see.

If you were in their shoes and you had to navigate through a page, you would want to make sure you knew exactly where you were throughout the whole experience. A focus ring gives you that power.

Demo

The demo below shows how :focus-visible works when added to your CSS. The first part of the video shows the experience when navigating with a mouse; the second shows navigating with just the keyboard. I recorded myself as well to show that I did switch from using my mouse to my keyboard.

Video showing how the browser’s heuristics, based on input type, trigger the :focus-visible pseudo-class.

The browser is predicting what to do with the focus ring based on my input (keyboard/mouse), and then adding a focus ring to those elements. In this case, when I am navigating through this example with the keyboard, everything receives focus. When using the mouse, only the input gets focus and the buttons don’t. If you remove :focus-visible, the browser will apply the default focus ring.

The code below is applying :focus-visible to the focusable elements.

:focus-visible {
  outline-color: black;
  font-size: 1.2em;
  font-family: serif;
  font-weight: bold;
}

If you want only the input or the button to receive :focus-visible, prefix the pseudo-class with the element selector, input or button respectively.

button:focus-visible {
  outline-color: black;
  font-size: 1.2em;
  font-family: serif;
  font-weight: bold;
}

/*** OR ***/

input:focus-visible {
  outline-color: black;
  font-size: 1.2em;
  font-family: serif;
  font-weight: bold;
}

Support

If the browser does not support :focus-visible, you can have a fallback in place to handle the interaction. The code below is from the MDN Playground. You can use the @supports at-rule, or “feature query,” to check support. One thing to keep in mind: the rule must be placed at the top level of your code or nested inside another conditional group at-rule.

<button class="button with-fallback" type="button">Button with fallback</button>
<button class="button without-fallback" type="button">Button without fallback</button>
.button {
  margin: 10px;
  border: 2px solid darkgray;
  border-radius: 4px;
}

.button:focus-visible {
  /* Draw the focus when :focus-visible is supported */
  outline: 3px solid deepskyblue;
  outline-offset: 3px;
}

@supports not selector(:focus-visible) {
  .button.with-fallback:focus {
    /* Fallback for browsers without :focus-visible support */
    outline: 3px solid deepskyblue;
    outline-offset: 3px;
  }
}

Further Accessibility Concerns

Accessibility concerns to keep in mind when building out your experience:

  • Make sure the colors you choose for your focus indicator, if you customize it at all, are still accessible according to WCAG 2.2 Non-text Contrast (Level AA); a rough sketch follows this list.
  • Cognitive overload can cause a user distress. Keep the styles of your various interactive elements consistent.
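As a non-authoritative sketch of that first point, a custom indicator can pair a thick outline with an offset so it remains distinguishable from both the component and the page background. The colors below are placeholders; you still need to verify a 3:1 contrast ratio against the adjacent colors yourself.

:focus-visible {
  outline: 3px solid #1a1a1a;
  outline-offset: 2px;
  /* A second, lighter ring helps the indicator stay visible on dark surfaces. */
  box-shadow: 0 0 0 6px #ffffff;
}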

Browser Support

This browser support data is from Caniuse, which has more detail. A number indicates that browser supports the feature at that version and up.

Desktop

  • Chrome: 86
  • Firefox: 4*
  • IE: No
  • Edge: 86
  • Safari: 15.4

Mobile / Tablet

  • Android Chrome: 123
  • Android Firefox: 124
  • Android: 123
  • iOS Safari: 15.4

Managing User Focus with :focus-visible originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.


How Web Designers Can Stay Relevant in the Age of AI

April 5th, 2024 No comments

The digital landscape is evolving rapidly. With the advent of AI, every sector is witnessing a revolution, including web design.


The Things Users Would Appreciate In Mobile Apps

April 5th, 2024 No comments

Remember the “mobile first” mantra? The idea was born out of the early days of responsive web design. Rather than design and build for the “desktop” up front, a “mobile-first” approach treats small screens as first-class citizens. There’s a reduced amount of real estate, certainly less than the number of pixels we get from the viewport of Firefox expanded fullscreen on a 27-inch studio monitor.

The constraint is a challenge to make sure that whatever is sent to mobile devices is directly relevant to what users should need; nothing more, nothing less. Anything more additive to the UI can be reserved for wider screens where we’re allowed to stretch out and make more liberal use of space.

/* A sample CSS snippet for a responsive main content */

/* Base Styles */
.main-content {
  container: main / inline-size;
}

.gallery {
  display: grid;
  gap: 1rem;
}

/* Container is wider than or equal to 700px */
@container main (width >= 700px) {
  .gallery {
    grid-template-columns: 1fr 1fr;
  }
}

/* Container is wider than or equal to 1200px */
@container main (width >= 1200px) {
  .gallery {
    grid-template-columns: repeat(4, 1fr);
  }
}

Now, I’m not here to admonish anyone who isn’t using a mobile-first approach when designing and building web interfaces. If anything, the last five or so years have shown us just how unpredictable of a medium the web is, including what sort of device is displaying our work all the way down to a user’s individual preferences.

Even so, there are things that any designer and developer should consider when working with mobile interfaces. Now that we’re nearly 15 years into responsive web design as a practice and craft, users are beginning to form opinions about what to expect when trying to do certain things in a mobile interface. I know that I have expectations. I am sure you have them as well.

I’ve been keeping track of the mobile interfaces I use and have started finding common patterns that feel mobile in nature but are more desktop-focused in practice: patterns that are unsuitable for small screens and could use some updates. Here are some reworked features that are worth considering for mobile interfaces.

Economically-Sized Forms

There are myriad problems that come up while completing mobile forms — e.g., small tap targets, lack of offline support, and incorrect virtual keyboards, to name a few — but it’s how a mobile form interacts with the device’s virtual keyboard that draws my ire the most.

The keyboard obscures the form more often than not. You tap a form field, and the keyboard slides up, and — poof! — it’s as though half the form is chopped out of view. If you’re thinking, Meh, all that means is a little extra scrolling, consider that scrolling isn’t always a choice. If the page is a short one with only the form on it, it’s highly possible what you see on the screen is what you get.

A more delightful user experience for mobile forms is to take a “less is more” approach. Display one form field at a time for an economical layout that allows the field and virtual keyboard to co-exist in harmony without any visual obstructions. Focusing the design on the top half of the viewport with room for navigation controls and microcopy creates a seamless flow from one form input to the next.
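Here is a minimal sketch of that idea. The class names and the 50dvh figure are assumptions for illustration; the point is simply that only the active step is rendered, leaving the lower half of the screen free for the virtual keyboard.

/* Show one form step at a time so the active field and the virtual keyboard can coexist. */
.form-step {
  display: none;
}

.form-step.is-active {
  display: block;
  /* Keep the field and its microcopy in the top half of the viewport. */
  min-height: 50dvh;
  padding-block-start: 1.5rem;
}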

More Room For Searching

Search presents a dichotomy: It is incredibly useful, yet is treated as an afterthought, likely tucked in the upper-right corner of a global header, out of view and often further buried by hiding the form input until the user clicks some icon, typically a magnifying glass. (It’s ironic we minimize the UI with a magnifying glass, isn’t it?)

The problem with burying search in a mobile context is two-fold:

  1. The feature is less apparent, and
  2. The space to enter a search query, add any filters, and display results is minimized.

That may very well be acceptable if the site has only a handful of pages to navigate. However, if the search allows a user to surface relevant content and freely move about an app, then you’re going to want to give it higher prominence.

Any service-oriented mobile app can improve user experience by providing a search box that’s immediately recognizable and with enough breathing room for tapping a virtual keyboard.

Some sites even have search forms that occupy the full screen without surrounding components, offering a “distraction-free” interface for typing queries.
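Here is a hedged sketch of what that breathing room can mean in CSS. The selectors and sizes are illustrative, not taken from any particular app.

/* A search field that is always visible and easy to tap, rather than an icon that toggles a hidden input. */
.search-form {
  display: flex;
  gap: 0.5rem;
  padding: 1rem;
}

.search-form input[type="search"] {
  flex: 1;
  min-height: 48px; /* a comfortable touch target */
  font-size: 1rem;  /* 16px text also avoids zoom-on-focus in iOS Safari */
}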

No Drop-Downs, If Possible

The <select> element can negatively impact mobile UX in two glaring ways:

  1. An expanding <select> with so many options that it produces excessive scrolling.
  2. Scrolling excessively, particularly on long pages, produces an awkward situation where the page continues scrolling after you have already scrolled through the list of <option>s.

I’ll come across these situations, stare at my phone, and ask:

Do I really need a <select> populated with one hundred years to submit my birthdate?

Seems like an awful lot of trouble to sort through one hundred years compared to manually typing in a four-digit value myself.

A better pattern for making list selections in a space-constrained mobile layout, if a list is absolutely needed, is something closer to the native wheel pickers iOS and Android use by default. Sure, there’s scrolling involved, but within a confined space that leaves the rest of the UI alone.
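And when typing the value really is simpler, a plain text input with a numeric inputmode brings up the right keyboard without any hundred-item list. This is only a sketch; the field name and label are hypothetical.

<!-- Let the user type a four-digit birth year instead of scrolling a long <select>.
     inputmode="numeric" requests a numeric keypad; the pattern adds basic validation. -->
<label for="birth-year">Birth year</label>
<input id="birth-year" name="birth-year" type="text" inputmode="numeric" pattern="[0-9]{4}" autocomplete="bday-year">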

A Restrained Overview

A dashboard overview in a mobile app is something that displays key data to users right off the bat, surfacing common information that the user would otherwise have to seek out. There are great use cases for dashboards, many of which you likely interact with daily, like your banking app, a project management system, or even a page showing today’s baseball scores. The WordPress dashboard is a perfect example, showing site activity, security checks, traffic, and more.

Dashboards are just fine. But the sensitive information they might contain is what worries me when I’m viewing a dashboard on a mobile device.

Let’s say I’m having dinner with friends at a restaurant. We split the check. To pay my fair share, I take out my phone and log into my banking app, and… the home screen displays my bank balance in big, bold numbers to the friends sitting right next to me, one of whom is the gossipiest of the bunch. There goes a bit of my pride.

That’s an extreme illustration because not all apps convey that level of sensitive information. But many do, and the care we put into protecting a user’s information from peeping eyeballs is only becoming more important as entire industries, like health care and education, lean more heavily into online experiences.

My advice: Hide sensitive information until prompted by the user to display it.

It’s generally a good practice to obscure sensitive information and have a liberal definition of what constitutes sensitive information.
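One low-tech way to sketch this idea is to keep the value collapsed until the user asks for it. The markup and the balance figure below are purely illustrative; a real app might also re-hide the value after a timeout.

<!-- The balance stays hidden until the user explicitly reveals it. -->
<details>
  <summary>Show account balance</summary>
  <p>$1,234.56</p>
</details>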

Shortcuts Provided In The Login Flow

There’s a natural order to things, particularly when logging into a web app. We log in, see a welcome message, and are taken to a dashboard before we tap on some item that triggers some sort of action. In other words, it takes a few steps, and walking through a couple of doors, to accomplish what we came to do.

What if we put actions earlier in the login flow? As in, they are displayed right along with the login form. This is what we call a shortcut.

Let’s take the same restaurant example from earlier, where I’m back at dinner and ready to pay. This time, however, I will open a different bank app. This one has shortcuts next to the login form, one of which is a “Scan to Pay” option. I tap it, log in, and am taken straight to the scanning feature that opens the camera on my device, completely bypassing any superfluous welcome messaging or dashboard. This spares the user both time and effort (and prevents sensitive information from leaking into view to boot).

Mobile operating systems also provide options for shortcuts when long-pressing an app’s icon on the home screen, which also can be used to an app’s advantage.

The Right Keyboard Configuration

All modern mobile operating systems are smart enough to tailor the virtual keyboard for specialized form inputs. A form field marked up with type="email", for instance, triggers an onscreen keyboard that shows the “@” key in the primary view, preventing users from having to switch to a secondary key layout to reveal it. The same is true with URLs, where a “.com” key surfaces for inputs with type="url".

The right keyboard saves the effort of hunting down relevant special characters and numbers for specific fields. All we need to do is use the right attributes and semantics for those fields, like type="email", type="url", type="tel", and so on.

<!-- Input Types for Virtual Keyboards -->
<input type="text">   <!-- default -->
<input type="tel">    <!-- numeric keyboard -->
<input type="number"> <!-- numeric keyboard -->
<input type="email">  <!-- displays @ key -->
<input type="url">    <!-- displays .com key -->
<input type="search"> <!-- displays search button -->
<input type="date">   <!-- displays date picker or wheel controls -->

Bigger Fonts With Higher Contrast

This may have been one of the first things that came to your mind when reading the article title. That’s because small text is prevalent in mobile interfaces. It’s not wrong to scale text in response to smaller screens, but where you set the lower end of the range may be too small for many users, even those with great vision.

The default size of body text is 16px on the web. That’s thanks to user agent styling that’s consistent across all browsers, including those on mobile platforms. But what exactly is the ideal size for mobile? The answer is not entirely clear. For example, Apple’s Human Interface Guidelines do not specify exact font sizes but rather focus on the use of Dynamic Text that adjusts the size of content to the user’s device-level preferences. Google’s Material Design guidelines are more concrete but are based on three scales: small, medium, and large. The following table shows the minimum font sizes for each scale based on the system’s typography tokens.

  • Small: 12pt body text (16px)
  • Medium: 14pt body text (18.66px)
  • Large: 16pt body text (21.33px)

The real standard we ought to be looking at is the current WCAG 2.2, and here’s what it says on the topic:

“When using text without specifying the font size, the smallest font size used on major browsers for unspecified text would be a reasonable size to assume for the font.”

So, the bottom line is that the lower end of a font’s scale should match the web’s default of 16px if we accept Material Design’s “Small” defaults. But even then, there are caveats because WCAG is more focused on contrast than size: if the font in use is thin by default, WCAG suggests bumping up the font size to produce a higher contrast ratio between the text and the background it sits against.
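If you want a fluid size that still respects that floor, a sketch like the following keeps body text from ever dropping below the 16px default. The exact clamp() values are placeholders to tune for your own type scale.

/* Fluid body text with a hard floor at the browser default of 1rem (16px). */
body {
  font-size: clamp(1rem, 0.9rem + 0.5vw, 1.25rem);
  line-height: 1.5;
}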

There are many, many articles that can give you summaries of various typographical guidelines and how to adhere to them for optimal font sizing. The best advice I’ve seen, however, is Eric Bailey rallying us to beat the “Reader Mode” button. Eric is talking more specifically about preventing clutter in an interface, but the same holds for font sizing. If the text is tiny or thin, I’m going to bash that button on your site with no hesitation.

Wrapping Up

Everything we’ve covered in this article comes from personal irritations I feel when interacting with different mobile interfaces. I’m sure you have your own set of annoyances, and if you do, I’d love to read them in the comments if you’re willing to share. And someone else is likely to have even more examples.

The point is that we’re in some kind of “post-responsive” era of web design, one that looks beyond how elements stack in response to the viewport to consider user preferences, privacy, and providing optimal experiences at any breakpoint regardless of whatever device is used.


Why TikTok Has Become the Dream Platform for Marketers in 2024?

April 5th, 2024 No comments

With its ability to reach diverse audiences at the same time, TikTok can propel your branding game using various TikTok audience engagement strategies.

It’s a savior for many small businesses, creators, and daily vloggers. According to a report, it has around one billion active users. In the US, a large share of users are 18-19 years of age, and middle-aged adults constitute 56%.

In fact, it’s becoming the marketer’s dream platform in 2024. But why? Let’s find out in this blog.

Key Features Making TikTok Ideal for Marketers

If you are wondering why TikTok is becoming the ideal platform for marketers, take a look at some of its features:

Short-form Video Content

The rise of short-form video content has changed the way a brand connects with its audience. It makes it very convenient to create and share your art and talent with the world.

As life has become busier and more frantic, individuals’ attention spans have diminished, and they tend to prefer short videos because they are quick to consume. Various social media networks have adopted this approach as well; for example, Instagram introduced Reels, and YouTube added Shorts.

User Engagement

TikTok has one of the most engaged user bases of any social media platform. Android users spend an average of 95 minutes per day on TikTok, followed by YouTube at 74 minutes and Instagram at 51 minutes.

With such a huge user base, it’s not difficult to market your brand and find the right people to connect with. At the same time, the freedom to post and create trend-driven marketing content makes TikTok a wonderful choice for marketing. 

Creative Tools

“If it doesn’t sell, it isn’t creative” – David Ogilvy 

The beauty of TikTok is that you do not require many resources to build your content; it already has a lot of video creation tools. By pairing emotionally engaging content with music, marketers can produce interactive media to a high standard.

Ultimately, it helps to attain your goal of boosting your brand’s visibility by fostering community participation and uniquely engaging with the audience. 

Hashtag Challenges

Hashtag challenges on TikTok let you promote your brand conveniently. As a creator, all you need to do is set up a particular hashtag and ask others to post videos using it. This works as a trendsetter: other TikTok users generate their own content and endorse your brand in their videos.

Doing so can provide a rapid boost to brand awareness and visibility. This is a brilliant TikTok audience targeting strategy for driving user engagement and sparking interest in your brand. Another positive effect of a successful hashtag challenge is the potential to go viral, which is also a great way to get your brand noticed by prospects.

Influencer Marketing 

There are plenty of influencers on TikTok who can promote your brand for a nominal charge. You can either target multiple nano-influencers working in the same niche as you or find a trustworthy, popular one, but be prepared to pay a large sum of money for the latter.

Brand Effects

Branded effects are customized effects sponsored by brands that incorporate various brand elements to meet your particular needs. This feature serves brand awareness and pushes promotional activities on TikTok.

Beyond the Buzz: Why TikTok is a Marketer’s Dream

For a marketer, choosing the right branding strategy and tool is just as important as having the appropriate content. That’s where TikTok comes into play!

It serves as a strong foundation for marketers while saving them time and resources. Forget about spending hours in separate video-maker tools to create content; TikTok can do it for you within a few minutes by blending creativity and putting the right elements in place.

Positive ROI for marketers

Marketing is a gamble: you never know if your potential customers will love your products or services.

Take the 2023 data: 78% of small businesses advertising on TikTok got a positive ROI, and 75% of them saw the return within six months. So don’t take TikTok lightly; it can serve as a go-to resource for your marketing needs.

Get deep Insights with TikTok Analytics

TikTok simplifies the process of measuring your brand’s performance without putting any additional burden on your pocket. It provides significant information about your audience, such as demographics like age and gender, and a lot more. Knowing these metrics aids in personalizing your content.

Once you become aware of who is following you or what kind of content they like and hate, it becomes easier to come up with data-driven engaging content. This gives a push to your TikTok page and allows you to expand your reach to a wider audience. 

Advertise Yourself or Your Brand

TikTok for Business has eased the task of promoting and growing your brand. You can create ads and put them in front of 885 million people. Yes, you read that right: TikTok ads are capable of reaching that many people in a very short period.

If your content is powerful and captivating to audiences, you can expect TikTok ads to reach as many as 1.9 billion users, as claimed by the Global Digital Report from Kepios.

The trend in TikTok’s ad revenue over the past five years also shows how businesses have kept relying on TikTok ads to grow their brands.


Conclusion

It’s no wonder that marketers around the world are fond of TikTok and its capabilities. It can help your brand accomplish its goals and attain a positive ROI.

Considering people’s current preference for short video content and TikTok’s user base, choosing it for your branding needs can pay off in the long run.

In the end, you can use various viral video tactics to promote your content on TikTok. Remember, anything can go viral there, so be consistent and keep tracking the performance of your page. If growth slows down, start experimenting a little with your content.

Let us know your views about TikTok in the comments or share your experiences. Feel free to drop your queries too as we would love to assist you further.

Featured image by Solen Feyissa on Unsplash

The post Why TikTok Has Become the Dream Platform for Marketers in 2024? appeared first on noupe.


How AI is Transforming Business

April 4th, 2024 No comments

Everyone knows that the continuous advancement of Artificial Intelligence has reached numerous fields of human life, and business is one of them.

With its advanced capabilities, AI has not only streamlined routine tasks for employees but also given companies the opportunity to grow quickly online.

In this blog, we are going to explain some of the major ways through which Artificial Intelligence is transforming modern businesses. Along with this, we will also discuss some useful AI-powered tools that brands can consider using.   

Ways in Which Artificial Intelligence Is Transforming Businesses

Below we have discussed several ways through which AI is transforming businesses all around the globe. 

By Automating Routine Tasks

Whether it is a small or large business, it will have some routine tasks.

For instance, extracting data from invoices or other documents and then updating the information in the company’s database is a routine task. Creating valuable content to promote the business online is another regular task.

Performing those tasks manually not only requires a good amount of time but also carries a chance of error.

Fortunately, AI has solved this problem. Several AI-based data extraction and content creation tools will automatically get the job done within seconds without compromising on accuracy and quality. 

By Providing Assistance in Decision Making

Decision-making is crucial for any business; a single decision can result in significant growth and profit or in a sudden loss. For this reason, organizations spend a good amount of time on decision-making, and some even hire expensive professionals to make critical decisions.

Thankfully, after the recent advancements in Artificial Intelligence, that’s not the case anymore. AI-based tools and solutions allow organizations to make informed decisions quickly and with ease. Let us explain how AI contributes to decision-making.

Artificial Intelligence tools have the ability to quickly scan large amounts of data to predict future trends and provide a plan of action. This will eliminate the need for businesses to hire special professionals or spend time and effort to make accurate decisions. 

24/7 Customer Support

Providing top-notch customer service is one of the main priorities of every business out there. To fulfill this requirement, companies have to be available 24/7 to address customers’ queries, questions, or problems. That means hiring representatives for both day and night shifts, which increases the overall cost.

Apart from this, representatives may sometimes be on leave, which affects the overall customer service flow.

The good news is that AI can also help in this regard. There are numerous AI-powered chatbots available online that businesses can integrate with their website or any other platform. The chatbots respond to customers’ queries in a human-like manner, 24/7.

So, now there is no need for businesses to hire additional customer service representatives, saving overall costs and allowing them to focus on other necessary tasks.    

Increasing the Overall Productivity & Efficiency

Productivity and efficiency are essential for an organization’s employees; both play a key role in the success of a company.

When most routine tasks are performed automatically by AI, it not only saves a significant amount of time and effort but also improves accuracy. The time and effort saved allow employees to work at their full potential and concentration.

This, in turn, results in a better working environment as well as stronger company growth, which are the main objectives of every business out there.

Interview Scheduling

Finally, organizations often have to hire employees to scale up. During the recruitment process, they have to both schedule and keep track of applicants’ interviews.

Performing both of these tasks manually not only requires a lot of time but also increases the chances of mismanagement. However, this issue can easily be avoided by utilizing Artificial Intelligence tools.

There are numerous AI-powered scheduling tools available that will automatically schedule and monitor the applications of candidates, streamlining the overall recruitment process. 

After going through all these ways, you should have a good idea of how AI is transforming modern businesses.

Best AI-powered Tools That Businesses Should Use

Below are some of the best AI-based tools businesses can consider using.

  1. ChatGPT

ChatGPT is a widely known AI-based tool. This tool can assist online brands in several ways. For instance, it can help in creating personalized marketing strategies for the targeted audience and in the creation of any type of content (blogs, emails, newsletters, e-books, etc.) in no time. 

This will eliminate the need for businesses to spend valuable money on hiring professional content writers. Apart from these, there are several other ways as well.

To use this AI tool, companies just need to provide instructions about the task they want the tool to perform. To demonstrate this, check out the attachment below.

Additionally, it has the ability to understand and respond in almost any language, which makes it highly suitable for companies that operate worldwide.

  2. Bing AI Image Generator

We all know that online brands often have to create high-quality visuals, whether for their products or for the content they are creating. In the old days, organizations had to hire professional graphic designers to create those visuals for them.

But that’s not the case now, all thanks to Bing AI Image Generator. The tool is a product of Microsoft and operates on advanced technologies that allow it to create high-resolution images within a matter of seconds. 

To demonstrate, we asked it to create an image of a leather jacket. The output we got can be seen in the attachment below.

As you can see, the tool has generated high-resolution pictures according to our input. Unlike ChatGPT, this AI image generator is completely free to use, which means companies can use it an unlimited number of times without facing any usage restrictions.

  3. AI Summarizer

Creating short product descriptions, emails, or summaries of marketing content is also a routine task, and this AI-powered summarizing tool helps businesses get the job done quickly and accurately.

The tool operates on advanced Natural Language Processing (NLP) and Machine Learning (ML) algorithms. These first efficiently understand and identify the main points or sentences in the given text and then generate a concise version.  

One notable feature is an adjustable summary length, through which companies can create summaries or descriptions of their desired length. The lower the percentage you select, the more concise the output the tool will generate.

To demonstrate this, we condensed some marketing content with the tool; the results can be seen in the attachment below.

So, the tool automatically shortens the given content without causing any harm to its original meaning and quality. Moreover, this AI summarizer has the ability to perform summarization in seven different languages. 

  4. Authentic AI

This is an AI-based decision-making tool. It quickly analyzes large amounts of data using Natural Language Processing (NLP) and provides useful insights within seconds. It also forecasts future trends and recommends approaches or strategies for companies to follow, all of which help organizations make informed decisions.

It offers solutions for product-based analysis, marketing analytics, ERP reporting, and WooCommerce analytics. 

Moreover, this tool supports integration with a variety of data sources including WooCommerce plugins, SQL databases, files & sheets, data warehouses, and many more. 

The good thing is that it is available in both free and paid versions. The paid version includes multiple plans (Starter, Pro, & Enterprise) to serve users with different budget requirements.

Final Words

The advancements in Artificial Intelligence have completely changed the field of business in many ways, such as the automation of routine tasks and 24/7 customer support. In this article, we have explained the major ways in detail and also discussed some useful AI-based tools that businesses can consider using.

The post How AI is Transforming Business appeared first on noupe.
