15 Best New Fonts, July 2023

July 17th, 2023

Choosing a font for your project is one of the most important steps you’ll take in the process; it’s the point at which you finalize the tone of your content. And so, to give you a fresh voice for each new client, each month we write this roundup of the best new fonts we’ve found. Enjoy!


Google’s Bard Raises the Stakes with Exciting New Update

July 14th, 2023

Amidst the battle for AI chatbot supremacy, the team behind Google’s Bard has released a new update detailing some monumental changes. Top of the list is Bard’s long-anticipated entry into Europe. With the AI language model now supporting 40 languages – including Portuguese, Arabic, Chinese, Hindi and Spanish – Bard seems set to capture the market and rival ChatGPT during the remainder of 2023.


5 Tips for a Successful Typographic Logo

July 14th, 2023

Every designer knows that a strong brand identity is essential for a company that wants to embed itself in the minds of its customers. And when it comes to impact, there’s nothing stronger than a typographic logo.


How Will AI Help You Better Understand Your Customers

July 14th, 2023

Society is growing to rely on Artificial Intelligence more and more. Over the years, AI has become embedded in everyday life, from our smartphones to the devices that control smart technology in our homes. One of AI’s major components is its ability to continuously learn, so it is constantly getting better and becoming more efficient. With so many capabilities, it’s hard to imagine the limits of this technology. One such capability is helping businesses better understand their customers.

With customer expectations higher than they have ever been, it is important for companies to adapt and continue to be aware of those expectations. One report shows that about 80% of customers would prefer not to do business with a brand after just one bad experience. This shows that the bar is extremely high for companies to get it right from the beginning. That’s where AI can help.

There are plenty of people who have already come to embrace the idea of AI-enhanced CX (AI/CX). It has been found that 70% of executives believe their industry is ready to adopt AI/CX and three out of four predict AI will have a vital role in the future of their organizations.

When used in a contact center, artificial intelligence can also save a business time and money, which is always a plus. However, being able to tap into the knowledge behind what drives customer engagement is key to success. The great thing is that AI does the work for you while you get all the benefits. So, here is what AI can do for you.

Improving Customer Service

Having a responsive and helpful customer service department is essential to any business that wants to be successful. 90% of consumers say “immediate” response time is very important when they have a question. Customers need to be able to have direct access to businesses to file complaints or resolve any issues they may have, so having different channels where people can express those things is necessary for companies to retain customers. Multiple kinds of AI-powered software have been developed to enhance the customer experience.

Chatbots are one way that artificial intelligence can improve customer service. They are very relevant in today’s world, as 67% of people expect to use messaging apps to talk to businesses. They can easily answer frequently asked questions or simpler questions that do not require intervention from a customer service representative. They’re also available 24/7 so people have access at all times, without the need for shift workers. 

Receiving feedback from customers is one of the best ways for companies to know how they are doing and what they need to improve. AI is a tool that can exceed what the standard survey provides. For example, AI can facilitate online focus groups in real time, opening up a two-way channel of communication where you can have a conversation with people rather than collecting one-sided responses.

Predicting Behavior

Being able to predict customer behavior has a major impact on the way companies conduct their business. By predicting behavior, brands can offer a more personalized experience and drive sales. Artificial intelligence can capture and analyze customer data to a far greater extent than ever before.

Programs like Google Analytics, Facebook Ad Insights, and AdWords, along with CRM data, are designed to track customers across platforms and generate reports about people’s online habits. They track things such as which websites people visit and specifically which products they are looking at. They can get very detailed, even recording how long someone stays on a page and how often they return.

By knowing ahead of time how people shop and what products or services they are looking for, brands can plan accordingly. They are able to design their websites in a way that draws people’s attention to exactly what they want, making the sites more user-friendly. They can also put their time and effort into designing products that they know people want.

Targeted Marketing

Marketing has drastically changed over the years. Traditional marketing no longer makes the cut; people now listen more to ‘influencers’ and spend much of their time on social media, where ads reach them in a whole new way.

It’s no coincidence when someone logs onto Instagram and sees ads for products they specifically like. Those ads are tailored for and targeted at each individual, and AI is to thank for that. Algorithms built into these platforms use AI technologies and machine learning to make content recommendations for users. The ads users see can feel almost eerie, especially if someone has just been searching for the very product being advertised.

Having AI keep up with customer data allows brands to know exactly how to customize their marketing tactics for each person. This keeps the customer drawn in and clicking that ‘add to cart’ button. It also helps prevent churn and retain the best customers: by detecting when customers are starting to lose interest, brands can launch new campaigns to keep their attention.

Artificial intelligence is only going to continue improving, and it holds enormous potential for businesses. Brands are in a unique position, compared to those in the past, in the way they can use technology to tap directly into customer insights. By having a better understanding of customers’ expectations and habits, a company can completely transform itself to cater to its customers. The customer experience is everything, and AI is a powerful tool that can revolutionize the way customers interact with a brand. So, to get started with integrating AI to create a better customer experience, check out your options with LiveVox.



Apple’s New OLED iPad Pro Models Could Arrive as Early as Spring 2024

July 13th, 2023

Rumors are circulating that the newest iPad Pro models could feature OLED screens. These new displays would be a significant upgrade on the standard LCD displays used on all but one iPad model (strangely, the 12.9-inch iPad Pro uses a Mini-LED screen instead).


The Engineering Behind Recommendation Systems

July 13th, 2023

When you want to buy something, you seldom opt directly for an unknown brand or a new shopping portal you’ve only just heard of. This is because you are driven to trust the brands, e-commerce portals, or particular offline stores you know from previous experience. You feel connected because, somehow, you find items that align with your interests and preferences.

In offline stores, one can say this largely depends on the relationship between the owners and the shoppers. When it comes to online shopping, there is no face-to-face meeting. Yet your favorite brands know what you like and recommend products you will likely purchase sooner or later.

This is no magic. Say hello to recommendation systems, one of the most frequently used machine learning techniques. 

What are recommendation systems?

Recommendation systems are advanced, AI-powered data filtering systems that recommend the most relevant items to a particular customer or user. They use behavioral data and machine learning algorithms to predict the most relevant content or services for customers.

For instance, when you watch a web series or a movie on Netflix, you will see recommendations for shows and movies of similar genres. Netflix uses recommendation systems to analyze your watching preferences and browsing activities to suggest relevant content. 

Doing this reduces the time users spend browsing through thousands of titles to find a new show or movie to watch.

Recommendation Systems: The Science Behind It

A recommendation system is an algorithm that uses big data to suggest relevant products and services to users. Since big data fuels recommendations, the inputs required for model training are crucial. 

It can work on various types of data depending on your business goals. For example, it can be based on user demographic data, past behavior, purchase habits, interactions with your product or website, and even search history. Recommendation systems analyze the data to identify patterns and then make suggestions for products most likely to appeal to the users.

For instance, on Amazon, you will discover related products on the homepage, when adding items to your cart, when viewing a particular product, or after completing a purchase. There is always a recommended list of items that aligns with your past purchases and overall browsing patterns.

[Image: related-product recommendations on Amazon. Source: amazon.in]

Recommendation systems can offer personalized experiences to customers by providing relevant recommendations on various topics. They also help businesses increase customer engagement and loyalty by recommending related items and services.

Why use Recommendation Systems?

An essential component of recommendation engine algorithms is filtering based on different aspects. The recommender function gathers information about the user and predicts the most relevant product; for example, it predicts user ratings and preferences based on recurring actions.

Recommendation system models are crucial in providing a hassle-free and seamless user experience. They also help expose more inventory or content that we might not otherwise discover amidst the enormous volumes of data. There are many uses for recommendation engines:

  • Personalized content: Improves the on-site experience by creating dynamic recommendations for a diverse audience.
  • Better search experience: Categorizes products based on their features.
  • Increased sales: Recommends products to customers based on their purchase history, allowing them to make more informed decisions. 
  • Re-engagement strategies: Helps bring users back by suggesting relevant items or services they may be interested in.
  • Cross-selling and up-selling opportunities: Identifies complementary items that customers may not yet have considered or been aware of. 
  • Collecting valuable customer data: Generates useful insights into user behavior and preferences. 
  • Optimizing marketing campaigns: Analyzes customer data to refine targeted promotions and campaigns to drive higher conversion rates.

Product recommendations on online shopping apps, social media news feeds, and Google ads are examples of recommendation systems in our day-to-day life. 

Engineering Behind Recommendation Systems

Recommender Systems (RSs) are advanced software tools and techniques that suggest items a user will find useful. The recommendations support different decision-making processes, like what to buy, which music to listen to, or which online news to read.

The four clearly defined, logical steps of a recommendation system are:

1. Data Collection

The first and most crucial stage in developing a recommendation engine is obtaining the appropriate data for each user. There are two kinds of data:

  • Explicit data is information gathered from user inputs like product ratings, reviews, likes, and dislikes.
  • Implicit data comprises details obtained from user behaviors such as web searches, clicks, cart actions, search logs, and order histories.

Since each user’s data profile will change over time and become more distinctive, it is also essential to gather consumer characteristic information like:

  • Demographics (age, gender): User demographics refer to the various characteristics of individuals, such as age, gender, income, education level, location, and cultural background. In recommendation systems, user demographics can be used to personalize recommendations and improve the user experience. In addition, this encourages engagement and loyalty to drive more business.
  • Psychographics (interests, values): Psychographics describe users’ mental processes and behavior and can be used to develop more accurate user recommendations. By tailoring recommendations based on personality, interests, and behavior, companies can increase users’ likelihood of engaging with products or services.
  • Feature information (genre, object type): Feature information such as genre, object type, and keywords can also be used to personalize recommendations. By leveraging these characteristics, companies can create more targeted and relevant content for their users. For example, an online clothing retailer could use feature information to suggest items based on a user’s style preferences or the season.
  • Contextual information (time of day, location): Contextual information such as time of day, weather conditions, and location can be used to determine what content is most appropriate for each individual at any given moment. For instance, a user located in a city with hot summers and cold winters may be prompted with seasonal items or activities suitable for the local climate.
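To make this concrete, here is a minimal sketch of what a single collected record might look like once explicit signals, implicit signals, and consumer characteristics are combined. Every field name and value is hypothetical, chosen only to illustrate the categories above:

```python
# A hypothetical interaction record combining the data types above.
# All field names and values are illustrative, not from any
# particular product.
interaction_event = {
    "user_id": "u_123",
    "item_id": "p_456",
    # Explicit data: supplied directly by the user.
    "rating": 4,
    "review": "Fits well, fast delivery.",
    # Implicit data: inferred from behavior.
    "clicked": True,
    "added_to_cart": True,
    "search_query": "running shoes",
    # Consumer characteristics.
    "demographics": {"age": 29, "location": "Lisbon"},
    "psychographics": {"interests": ["fitness", "travel"]},
    "context": {"time_of_day": "evening", "device": "mobile"},
}
```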

2. Data Storage

Data storage is essential for any business, as it allows the business to store and access customer information. By monitoring user data, a business can gain insights into its target audience, including purchasing habits and preferences.

There needs to be enough scalable storage as you gather more data. Depending on the data you gather, you have various storage options, including NoSQL databases such as MongoDB, regular SQL databases, and cloud storage on AWS.

Considerations for selecting the best storage alternatives for Big Data include portability, integration, data storage space, and ease of implementation.
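As an illustration, here is a minimal sketch of persisting such interaction records in MongoDB, one of the options mentioned above. The connection string, database, and collection names are assumptions made for the example:

```python
# A minimal sketch of storing interaction events in MongoDB.
# The connection string, database, and collection names are
# hypothetical.
from datetime import datetime, timezone
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
events = client["recsys"]["interaction_events"]

# Store one implicit-feedback event per user action.
events.insert_one({
    "user_id": "u_123",
    "item_id": "p_456",
    "event": "add_to_cart",
    "ts": datetime.now(timezone.utc),
})

# An index on user_id keeps per-user history lookups fast as the
# collection grows.
events.create_index("user_id")
```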

3. Data Analysis

Data analysis is required to provide prompt recommendations, so the data must be probed and examined. The most frequently used data analysis techniques include:

  • Real-time analysis involves the system assessing and analyzing events as they happen. This strategy is typically used when we wish to make prompt recommendations.
  • Batch analysis is a method for routinely processing and evaluating data. This strategy is typically used when we wish to send emails with recommendations.
  • Near-real-time analysis is practical when you do not require the data immediately; it allows you to process and analyze the data in minutes rather than seconds. The strategy’s main application is offering suggestions to users while they are still on the website.
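To illustrate the difference in cadence, here is a minimal sketch that reuses the hypothetical events collection from the storage example: a batch job that precomputes popular items for recommendation emails, and a near-real-time variant that recomputes the same aggregation over a recent window while users are still on the site.

```python
# A minimal sketch contrasting batch and near-real-time analysis.
# `events` is the hypothetical MongoDB collection from the storage
# example above.
from collections import Counter
from datetime import datetime, timedelta, timezone

def nightly_top_items(events, n=10):
    # Batch: run on a schedule (e.g., nightly) to precompute the
    # most-purchased items for tomorrow's recommendation emails.
    counts = Counter(e["item_id"] for e in events.find({"event": "purchase"}))
    return [item for item, _ in counts.most_common(n)]

def recent_top_items(events, minutes=15, n=10):
    # Near real time: the same aggregation over a recent window,
    # recomputed every few minutes while users browse the site.
    since = datetime.now(timezone.utc) - timedelta(minutes=minutes)
    counts = Counter(
        e["item_id"] for e in events.find({"ts": {"$gte": since}})
    )
    return [item for item, _ in counts.most_common(n)]
```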

4. Data Filtering

The data must be appropriately filtered after analysis to produce insightful recommendations. During filtration, Big Data is subjected to various matrices, mathematical algorithms, and rules for the best suggestion. The recommendations that result from this filtering depend on the algorithm you choose.

Recommendation engines rely on understanding relationships. Relationships give recommender systems a wealth of information and a thorough knowledge of their clients. Three different kinds typically exist:

  • User to product: When specific users have a preference or affinity for particular goods they need, there is a user-product relationship.
  • User to user: Users with comparable preferences for a given good or service form user-user relationships.
  • Product to product: Product-product relationships develop when two products are similar, whether through description or appearance. These are then shared with an interested user.

How do Recommendation Engines Gather Data?

Recommendation engines gather data based on the following:

  • User Behavior: Information regarding user interaction with a product can be gleaned from user behavior data. Ratings, clicks, and purchase history can all be used to gather it.
  • User Demographics: Demographic data covers user attributes such as age, education, income, and location.
  • Product Attributes: Information about a product’s attributes covers the product itself, such as the genre of a book, a movie’s actors, or the cuisine of a dish.

Challenges and Advantages of Using Recommendation Systems

Challenges: 

Implementing a recommendation system is rather complicated. The key challenges are: 

  • Data Collection and Integration: The first challenge is collecting data from various sources, such as customer profiles, product catalogs, purchase histories, social media accounts, etc., and then integrating the data into a unified format for the recommendation engine. The process can be time-consuming and labor-intensive.
  • Algorithms: Choosing the right algorithms for your recommendation engine is also challenging since you need to build an algorithm that understands user preferences and makes accurate predictions based on those preferences. Additionally, new algorithms must be implemented as customer preferences change over time.
  • Accuracy: Once you have chosen the best algorithms for your purposes, it can still be difficult to ensure accuracy in the system’s output. This becomes even more difficult as the amount of data increases.
  • Scalability: As your business grows, so does the volume of data, and it can be difficult to ensure that your recommendation engine can handle these large volumes of data without a performance decrease.
  • Data Storage: Storing all of the necessary data requires considerable space and resources. Additionally, the data must be kept up-to-date to ensure the system’s output accuracy.
  • Privacy and Security: Customers may be concerned about privacy when they share their personal information with a recommendation engine. It is important to have strong security measures that protect customer data and prevent unauthorized access or use for malicious purposes.

Despite the steep implementation curve, companies continue to rely heavily on recommendation systems because user recommendations strongly influence purchase decisions. The two major advantages of using a recommendation system will always outweigh the challenges.

Advantages: 

  • Deliver Personalized and Relevant Content

Recommendation systems enable brands to customize the customer experience by identifying and suggesting items that may pique the customer’s interest. Additionally, the recommendation engine allows you to analyze the customer’s previous browsing history and current website usage. With the help of this real-time data, brands can deliver the most relevant product recommendations.

For instance, Instagram often suggests pages or brands you might want to start following based on your preferences. Similarly, based on the kind of reels you watch, Instagram keeps suggesting new ones. This keeps you hooked as you continue scrolling through new recommendations you can’t help liking, boosting engagement via personalized content recommendations.

  • Increase Sales and Average Order Value

You can significantly enhance your revenue and average order value (AOV) by adding recommended products for the visitors you bring to your website.

A recommendation system enables businesses to drive more conversions and deliver a high level of relevance that will boost sales. It exposes customers to more products they are likely to purchase.

You can add multiple data sets to your recommendation algorithm using a recommendation engine. The data sets will help provide recommendations in real time.

For instance, when looking at a product on Amazon, you will always find two sections back-to-back: ‘customers who viewed this also viewed’ and ‘customers who bought this also bought.’ These two sections push you to consider different products that go well with the product you are trying to buy.

Let’s say you are looking for a channel file on Amazon and have zeroed in on one of the options. As you scroll down to read more about its features and look at a comparison table (which Amazon always adds to propel better decision-making, ideally for the higher-priced product), you will come across two sections like this:

Now, offline stores would call this push selling, and that is exactly what Amazon is doing here. Recommending similar or related products often nudges people to explore those products and perhaps buy them or add them to the cart. By the end of the purchase cycle, the order value has automatically risen from where it started.

Filtering Algorithms Marketers Must Understand

To put a recommendation engine into proper functioning, marketers must also understand the algorithms that make up this system. Three major types of filtering algorithms are at play: collaborative filtering, content-based filtering, and hybrid systems.

Collaborative Filtering

Collaborative filtering gathers information regarding customers’ activities or preferences to predict user interest, usually based on their similarities with other users who have the same preferences. Similar customers are found using customer characteristics like demographics and psychographics.

E-commerce platforms like Amazon and Myntra are pioneers in effectively implementing collaborative filtering. 

It works by gathering preferences from each user to build a customer × product matrix.

A common implementation is the matrix factorization method, which can be used to determine how similar user evaluations or interactions are.

For example, according to the straightforward user-item matrix, Ted and Carol bought products B and C. Bob also searched for product B. Matrix factorization determines that users who enjoyed B also like C, making C a potential recommendation for Bob.
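Here is a minimal, self-contained sketch of that example. For simplicity, it scores candidate items by cosine similarity between the columns of the interaction matrix (an item-item collaborative approach) rather than performing full matrix factorization; the users and products come from the example above, and everything else is illustrative:

```python
# Item-item collaborative filtering on the Ted/Carol/Bob example.
import numpy as np

users = ["Ted", "Carol", "Bob"]
items = ["A", "B", "C"]

# Rows = users, columns = items; 1 = bought or searched for.
interactions = np.array([
    [0, 1, 1],  # Ted bought B and C
    [0, 1, 1],  # Carol bought B and C
    [0, 1, 0],  # Bob searched for B
])

def cosine_sim(a, b):
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return a @ b / denom if denom else 0.0

# Bob interacted with B, so score every item he has NOT seen by its
# similarity to B's interaction column.
b_col = interactions[:, items.index("B")]
bob = users.index("Bob")
scores = {
    item: cosine_sim(b_col, interactions[:, i])
    for i, item in enumerate(items)
    if interactions[bob, i] == 0  # unseen items only
}
print(max(scores, key=scores.get))  # -> "C"
```

Users who liked B also liked C, so C scores highest among the items Bob has not yet seen, matching the recommendation described above.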

Collaborative filtering is divided as follows: 

  • User-item collaborative filtering: Like-minded customers are spotted based on similar rating patterns. It is a method for recommending products based on ratings from other users.
  • Item-item collaborative filtering: Similarities between multiple items are calculated. This approach looks for comparable products based on the things consumers have already shown interest in or interacted with.

Content-based Filtering

Predictions made by content-based engines are based on end-user interest in a particular product. The engine uses metadata to find and suggest related content items once a user interacts with a piece of content. 

You can spot this recommender system on news websites with prompts like “You may also be interested in reading” or “You may like.”

For instance, on Medium, you will always find recommendations for articles similar to the ones you are reading or have read. 
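As a minimal sketch of this idea, the snippet below builds TF-IDF vectors over short item descriptions and recommends the most similar item to the one just read. It assumes scikit-learn is installed; the articles themselves are made up for illustration:

```python
# Content-based filtering: recommend the item whose description is
# most similar to the one the user just consumed. Titles and texts
# are invented for the example.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

articles = {
    "Intro to CSS Grid": "css grid layout tutorial for designers",
    "Flexbox Basics": "css flexbox layout guide and examples",
    "Sourdough at Home": "baking bread starter recipe for the kitchen",
}

titles = list(articles)
tfidf = TfidfVectorizer().fit_transform(articles.values())
sim = cosine_similarity(tfidf)

just_read = titles.index("Intro to CSS Grid")
ranked = sim[just_read].argsort()[::-1]  # most similar first
next_up = [titles[i] for i in ranked if i != just_read][0]
print(next_up)  # -> "Flexbox Basics"
```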

The recommendation engine algorithms filter content based on the following:

  • User ratings: User ratings are of two types: explicit and implicit. User profiles and star ratings are common sources of explicit data for recommendation systems, though such data is difficult to obtain. Implicit data is based on users’ engagement with an item and is simple to collect, including purchases, clicks, and views.
  • Product similarity: Product similarity recommends products based on how closely they resemble items a customer already likes. Users who browse or search for a specific item may be shown comparable products. Here’s how Amazon does it.
  • User similarity: This type of recommendation algorithm suggests products based on the preferences of similar users. Using data from previous user-product interactions, the algorithm generates suggestions on the assumption that individuals with similar interests will favor related goods. This is how Myntra does it: 

In Conclusion:

The debate on privacy versus personalization has been hot since the rise of recommendation systems. While such technologies have enabled organizations to provide tailor-made customer experiences through big data analytics, there is an inherent tension between these two concepts.

Companies must collect and store large amounts of data to generate personalized customer recommendations. Reports show that Big Data analytics has helped companies increase revenue by 30%.

On the other hand, companies must ensure that all collected data is securely stored and not misused. Survey results show 97% of consumers are concerned about their data being misused by companies and the government.

Ultimately, striking a balance between privacy and personalization should be the primary goal to ensure that customer data remains secure while providing an enjoyable customer experience. 

With this in mind, businesses can leverage big data-powered recommendation engines to offer a customized experience that will keep their customers returning for more.

Images from analytixlabs.co.in



How To Create A Rapid Research Program To Support Insights At Scale

July 12th, 2023

The User Experience practice has been expanding and will continue to balloon in the coming years, and so have its sub-disciplines, such as content strategy, operations, and user research. As the practice of UX Research matures, scalability will continue to be important in order to meet the rapid needs of iterative product development.

While there are several effective ways to scale user research, such as increasing researcher-to-designer ratios, leveraging big data and real-time analytics, or research democratization, one of the most effective methods is developing a Rapid Research program. In a Rapid Research program, teams are provided quick insight into key problems at an unprecedented operational speed.

Rapid Research-type support has been around for a while and has taken different shapes across different organizations. What remains true, however, is the goal to provide actionable insights from end-users at a quick pace that fits within product sprints and maintains pace with agile development practices.

In this article, I’m going to unpack what a Rapid Research program is, how to build one in your organization, and underscore the unique benefits that a program like this can provide to your team. Given that there is no singular ‘right way’ to scale insights or mature a user research practice, this outline is intended to provide building blocks and considerations that you may take in the context of the culture, opportunities, and challenges of your organization.

What Is Rapid Research?

Rapid research is a relatively recent program where typical user research practices and operations are standardized and templatized to provide a consistent, repeatable cadence of insights. As the name suggests, a core requirement of a rapid research program is that it delivers quicker-than-average insights. In many teams, this means delivering research on a weekly cadence where a confluence of guardrails, templates, and requirements work to ensure a smooth and consistent process.

Programs like Rapid Research may be created out of a necessity to keep up with the pace of development while freeing the bandwidth of expert researchers’ time for more complex discovery work that often takes longer. A rapid research program can be a crucial component of any team’s insight ecosystem, balanced against solving different business problems with flexible levels of support.

Scope

Research Methods

In order to make research more rapid, teams may consider some research methodologies out of the question in their Rapid Research program. Methods such as longitudinal diary studies, surveys, or long-form interviews might suffer from lower quality if done too quickly. When determining the scope of your rapid research program, ask yourself what methods you can easily templatize and, most importantly, which best support the needs of your experience teams.

For example, if your experience teams work on 2-week sprints and need insights in that time, then you will need to consider which research methods can reliably be conducted in 1–2 week increments.

Sample Size And Research Duration

Methods alone won’t ensure a successful implementation of a rapid research program. You will also need to consider sample size and session duration. Even if you decide usability tests are a reasonable methodology for your rapid research framework, you may be introducing too much complexity to run them with 15+ users within 60-min sessions and analyze all that data efficiently. This may require you to narrow your focus to fewer sessions with shorter duration.

Participant Recruitment

While there may be fewer and shorter sessions for each study, you also need to consider your participant pool. Recruitment is one of the most difficult aspects of conducting any user research, and this effort must be considered when determining the scope of the program. Recruitment can jeopardize the pace of your program if you source highly specific participants or if they are harder to reach due to internal bureaucracy or compliance constraints.

In order to simplify recruitment, consider what types of participants are both the easiest to reach and who account for the most use cases or products you expect to be researching. Be careful with this, though, as you don’t want to broaden your customer profiles too much for fear of not getting the helpful feedback you need, as UserZoom says:

“Why is sourcing participants such a challenge? Well, you could probably find as many users as you like by spreading the net as wide as possible and offering generous incentives, but you won’t necessarily find the ‘right’ participants.”

— UserZoom, “Four top challenges UX teams face in 2020 and how to solve them”

Timing

Why Timing Matters

Coupled tightly with scope, the timing of your rapid research end-to-end process will be paramount to the program’s success. Even if you have narrowed the scope to only a handful of research methods with limited sessions at shorter durations and with specific participant profiles, it won’t be ‘rapid’ if your end-to-end project timeline is as long as your average traditional study. Care must be taken to ensure that the project timelines of your rapid research studies are notably quicker than your average studies so that this program feels differentiating and adds value on top of the work your team is already doing.

Reconsidering Scope

If your timelines are about the same, or your rapid cadence is less than 50% more efficient than your average study, consider whether or not you’re being judicious enough in your scope above. Always monitor your timelines and identify where you can speed things up or limit the scope in order to reach an acceptably quick turnaround. One way to support shorter project timelines is through compartmentalization.

Compartmentalization

About Compartmentalization

One way to balance scope, timing, and consistency is by breaking up pieces of your average study process into smaller, separate efforts. Consider what your program would look like if you separated project intake from the study kick-off or if discussion guides were not dependent on recruitment or participant types. Splitting out your workflow into separate parts and templating them may eliminate typical dependencies and streamline your processes.

Ways To Compartmentalize

Once you’ve determined the set of research methods and ideal participants to include in your program, you may:

  • Templatize the discussion guides to provide a quick starting point for researchers and cut down on upfront preparation time.
  • Create a consistent recruitment schedule independent of the study method to start before study intake or kick-off to save upfront time.
  • Pre-schedule recurring kick-off and readout sessions to set expectations for all studies while limiting timeline risk when at the mercy of others’ calendars.

There is a myriad of opportunities to do things differently than your typical research study when you reconsider the relationships and interdependencies in the process.

Consistency

Expectability

While a quality rapid research program takes into consideration scope, timing, and compartmentalization, it also needs to consider consistency. It would be difficult to discern whether or not the program was ‘rapid’ if a study takes one week one time and 2.5 weeks another. Both may be below your current study average; however, project stakeholders may blur the lines between your rapid studies and your typical studies due to the variability in approach. In addition, it may be difficult to operationalize compartmentalization or rapid recruitment without some form of expected cadence.

More Agility

As you and your team get used to operating within your rapid cadence, you may identify additional opportunities to templatize, compartmentalize or focus scope. If the program is inconsistent from study to study, it may be more difficult to notice these opportunities for increased agility, hindering your program from becoming even more rapid over time.

A Rapid Research Case Study

While working at one of the largest telecommunications companies in the US, I had the privilege of witnessing the growth of the UX Research team from just four practitioners to over 25 by the time I left. During this time, the company had matured its user experience practice, including the standards, processes, and discipline of user research.

Identifying The Need

As we grew, human insight became a central part of the product development process, which meant an exponential increase in its demand. While this was a great thing and allowed our team to grow, the work we were doing was not sustainable — we were constantly trying to keep pace with product teams who brought us in too late in the process simply to validate their ideas. Not only did we always feel rushed, but we were stuck doing only evaluative work, which not only stifled innovation but also did not satisfy our more senior researchers who wished to do more generative research.

How It Fits In

After diagnosing this issue, our leadership initiated several new processes to build a more well-rounded research portfolio that supported iterative research while enabling generative research. This included a democratization program, quarterly planning, and my initiative: Rapid Research. We determined that we needed a program that would allow us to take on mid-sized projects at the pace of product development, while providing a new opportunity to hire junior researchers who would be a great talent pool for our team and giving those new to the field a meaningful way to grow their skills.

Getting Started

In order to build the rapid research program, I audited the previous year’s worth of research to determine our average timelines and the most common methodologies used for iterative and mid-sized projects, and to identify the primary customers we did research with most often. My findings would be the bedrock of the program:

  • Most iterative research consisted of light interviews and brief usability tests.
  • Many objectives could be covered in 30-minute sessions.
  • Mid-sized projects were often with just a handful of current customers.
  • Our average study time was 2–3 weeks, so we’d need to cut this down.
  • Given the above constraints, study goals should be highly focused.

Building The Program

At first, we did not have the budget for hiring new junior researchers to staff the program team. What we did have, however, was a contract with a research vendor who we’ve worked with for years, so we decided to partner with researchers from their team to run our rapid research program.

  • We created specific templates for ‘rapid’ usability tests and interviews.
  • Studies were capped at two objectives and only a handful of questions in order to fit into 30-min sessions.
  • Study intake was governed via a simple intake form, required to be filled out by EOD every Wednesday.
  • We scheduled standing kick-off and readout sessions every Friday and shared these invites with product teams for visibility.
  • To further establish our senior researchers as Portfolio Research Leads and to protect against scope creep, we required teams to formally request ‘rapid’ studies through them first.
  • We started our rapid cadence at two weeks and were able to cut it down to just one week after piloting the program for a month.

Strong Results

We saw the incredible value and strong results from building our rapid research program, especially alongside the other processes our team was standing up to support varying insights needs.

  • Speed
    We were able to eventually run three research studies simultaneously, enabling us to deliver more research at twice the pace of a traditional study.
  • Scale
    Through this enablement of speed, consistent recruitment, and templatized process, we ran over 100 studies & 650+ moderated interviews.
  • Impact
    Because we outsourced rapid research to a vendor, our team was freed up to deliver foundational research, which doubled our work capacity.
  • Growth
    Eventually, we hired junior researchers and transitioned the program from the vendor, increasing subject matter expertise & operational efficiency.

How To Build A Rapid Research Program

The following steps outline a process for getting started with building your own rapid research program in your organization. Exactly which steps you choose to follow, or if you decide to add more or less to your process, will be entirely up to you and the unique needs of your team. Follow the proceeding steps while considering the above guidelines regarding scope, timing, compartmentalization, and consistency.

Determine If You Even Need A Rapid Research Program

While seemingly counter-intuitive, the first step in building a rapid research program is considering whether you even need one in the first place. Every new initiative or tactic intended to mature user research practice should consider the available talent and capabilities of the team and the needs or opportunities of the organization it sits within. It would be unfortunate to invest time to build a robust, rapid research program only to find that nobody uses or needs it.

Reflection On Current Needs

Start by documenting the needs of your experience teams or the organization you support by the different types of requests you receive.

  • Are you often asked to deliver research faster?
  • What are the types of research which are most often requested?
  • Does your team have the capability or operational rigor required to deliver at a faster pace?
  • Are you staffed enough to support a more rapid pace, even if you could deliver one?
  • Is delivering faster, rigidly-scoped research in service to your long-term goals as a research team, or might it sacrifice them?

Gather More Information

Answering these questions should be your first step before any meaningful work is done to build a rapid research program. In addition, you might consider the following information-gathering activities:

  • Audit previous research you or your team have done to determine their average scope, timeline, and method.
  • Conduct a series of internal stakeholder interviews to identify what potential value a rapid research program might hold.
  • Look for signals for where the organization is going. Whether leadership is hiring, training teams on agile methods, or demanding that teams take a step back to focus on discovery can help you decide when and where to invest your time.

These additional inputs will either help you refine your approach to building a program or to steer away from doing so.

Limitations Of Rapid Research

Finally, when considering if you should build a rapid research program in the first place, you should consider what the program cannot do.

  • What a rapid research program might save on time, it cannot necessarily save on effort. You will still need researchers to deliver this work, which means you may need to restructure your team or hire more people.
  • If you decide to make your rapid research program self-service, you likely will still need ResOps support for recruitment and managing the intake process effectively.
  • It is also possible to hire a research vendor partner to lead this program, though that will require a budget that not every team may have.
  • As mentioned above, a good rapid research program is tight and focused in its scope, which limits the type of projects it can accommodate.

Identify Your Starting Scope, Timing & Cadence

Once you’ve decided to pursue a rapid research program, you’ll need to understand what form your program should take in order to deliver the highest value to your team and those you support. As mentioned above, a right-sized scope should consider the research methods, requirements, session quantity & duration, and participant profiles, which you can confidently accommodate. And you will need to determine the end-to-end timing and program cadence that differentiates from current work while providing just enough time to still deliver sustainable quality.

Determine Participant Profiles

Start building your scope backwards from the needs gaps you’re filling within your team based on the answers to the discovery questions above. You’ll want to identify the primary type(s) of end-users this program will research.

  1. Audit the past 6–12 months of research you or your team has done, looking at the most common customer type with whom you do research.
  2. Then, couple that with any knowledge you may have of where the business or your experience teams will be focused for the following 6–12 months.

For example, if your audit revealed that your team had focused most frequently on current customers over the past year, and you also know that your business will soon focus on the acquisition of new customers, consider including both current customers and prospective customers in your rapid research scope.

Remember the important note about consistency above? Once you’ve identified potential participant profiles, make sure you can consistently recruit them. For example, if you use a research panel to source participants for research studies, test the incidence of your participant profiles. If you find they don’t have many panelists with the attributes you need, you might spend too much time in recruitment and jeopardize the speed of the program.

A balance should be struck between participant profiles that are specific enough to be useful for most projects and those broad enough to reach easily.

Determine Research Methods

You can conduct the same audit and rough forecasting when determining the research methods your program ought to support but with two additional considerations:

  1. Team strategy,
  2. Individual career development.

User researchers tend to want to focus their work further upstream, where they’re driving product roadmaps or influencing business strategy. This can bode well for your rapid research program if it is focused on evaluative research projects, which are often quicker and cheaper to conduct.

The ultimate goal is for the rapid research program to be a complement to what your team provides or as an enabler for freeing up their bandwidth so that they can focus on the type of work they want to do more of.

Right-size Research Methods

Once you’ve determined which research methods you want to include in your rapid research program, consider the level of rigor you need to balance effort and complexity.

Determining Timelines

Project timelines within a rapid research cadence are directly affected by the above scope decisions for participant profiles and research methodology. Timelines can also compound in highly regulated industries such as healthcare or banking, where you may be required to gather legal & compliance approval on every moderation guide. In order to call this a rapid research program, the end-to-end project timelines need to be shorter than a typical project of a similar scope, or at least feel that way.

  1. Scope current minimum effort
    Start by jotting down the minimum amount of time it takes a researcher on your team to do each sub-step in your current non-rapid research process. Do this for the same participant profiles and methods you want to include in your rapid research program.
  2. Dependencies
    Now, identify which sub-steps are dependent on others and think of ways to program them in order to build efficiency. For example, if you need legal approval on every moderation guide before data collection, which takes 2–3 days, see if Legal will commit to a change to a 24-hour SLA for rapid research-specific projects. Another example is if you typically give stakeholders a few days to provide feedback on moderation guides, change this for rapid research projects to cut down dependency time.
  3. Identify compartmentalization
    In addition to programming project dependencies, consider the above guidance for compartmentalizing some of the programs in order to remove dependencies entirely, such as with recruitment. Identify what parts of the process don’t have the same dependencies in your rapid research program and can be started earlier. By removing dependencies entirely, you may be able to do several things simultaneously to speed up project timelines.

Once you’ve documented your current research process (steps, dependencies, timing) and the changes you need to make to build efficiencies or remove dependencies, document what ‘must be true’ in order to consistently deliver identified changes. Create a table to document all of these details, then sum up the total timelines to compare your typical end-to-end research project timeline with your potential new ‘rapid’ timeline.

Ask yourself if this seems ‘rapid’ when stacked against your average study duration.

  • If not, look back at the guidance above. Ask yourself if there are other customer types that may be easier to get in front of that you haven’t considered. Consider whether you need to create a new process, expedite existing processes, or create new relationships in order to make your timelines even more rapid.
  • If so, congratulations! You might have just landed on the right scope for your rapid research program. Consider whether this new rapid timeline is something that you can deliver consistently and reliably over time and whether or not you have enough access to participants, and enough budget, to carry out this cadence long-term.

Build Infrastructure, Standards & Rules

It’s time to set the foundation. Return to the tables you made above and create an action plan, with concrete steps and a timeline, to build the infrastructure required to bring your program to life. As part of this, you’ll need to establish the rules and standards for communicating with partners. You might consider a playbook and a formal scope document to inform others of the ins and outs of the program.

Gather Buy-in

Prioritize any work that requires buy-in, generating understanding, or acquiring budget first before spending your time and energy building templates or documentation. You wouldn’t want to create a 20-page scope document outlining the bandwidth for two researchers, a limit to 1 round of stakeholder feedback, and a 24hr SLA for legal approval, only to find out others cannot commit to that.

Create Templates

You’ll need plenty of templates, tools, and processes specific to the scope of your program.

  • If you’re limiting moderation guides to a maximum of 10 questions, then create a specific discussion guide template reflecting that.
  • If your data analysis will be sped up by using structured note-taking templates, create those.
  • If you’ve determined that all rapid research projects only require an executive summary one-pager, make that too.

Staffing

As mentioned above, even a drastically reduced version of your typical research processes still requires effort to support. You’ll need to determine, based on the expected scope and cadence of each rapid research project, how many researchers and/or research operations coordinators you’ll need to support the program. While all rapid research programs will require dedicated effort, there are creative ways of staffing the program, such as:

  • A dedicated team of 1–2 researchers and 1–2 Ops coordinators to deliver projects with the greatest efficiency and quality.
  • A dedicated team of 1–2 researchers who also handle the operations of running the program itself.
  • A self-service program, with 1–2 Ops coordinators for supporting anyone doing the research work.
  • Outsourcing the entire program to a vendor.

Work with your leadership, HR, and TA professionals on securing approval for any team restructure, needed headcount budget, or to onboard a new vendor. Then, take the appropriate steps to hire your next researcher or secure the staffing help you need to support your program.

Coaching And Guidance

Consider training, coaching, and check-in meetings as part of your infrastructure.

  • If you are staffing new researchers to this rapid research program, make sure they understand the expectations and have what they need to succeed.
  • If you’re implementing a self-service model, provide brown-bag sessions to partners to explain the program do’s and don’ts.
  • Schedule quarterly check-ins with partners and leadership to discuss the program accomplishments and any needed adjustments to ensure it stays relevant.

Pilot, Get Feedback, And Iterate Over Time

No matter how much preparation you do or how much time and effort you spend building the alliances, infrastructure, training, and support required to run your rapid research program effectively, you will learn that there are improvements you should make once you put it into practice.

There are many benefits to piloting a new program in an organization. One benefit is that it can mitigate risks and allow teams to learn quickly and early enough to make positive enhancements.

“Piloting offers a realistic preview experience for users at the earliest stages of development. It allows the organization and design team to gather real-time insights that can be used to shape and refine the product and prepare it for commercialization.”

— Entrepreneur, “Tasting As You Go: The 5 Benefits of ‘Piloting’”

This means setting expectations early. Consider your first few projects as pilots and expect them to be rocky and imperfect. Use this to your advantage by asking stakeholders you’re closest with to be your trial projects and let them know how important their honest feedback is throughout the process. Ensure that you have clear mechanisms to gather feedback at each project milestone so that you can track progress. It is especially important to capture what might be slowing you down along the way or putting your ‘rapid’ timelines at risk.

Program Evolutions, Impacts & Considerations

Potential Evolutions & Variations

While I’ve outlined a process for getting started, there are many ways in which your rapid research program may evolve over time to meet the needs of your organization better.

  • After a few periods, you might identify volume isn’t as high as you anticipated, so you extend the 1-week timeline to every two weeks.
  • After a few months, your business might launch a new product line, requiring you to consider a new set of customer profiles in recruitment.
  • You may decide to leverage your rapid cadence for individual segments of a longitudinal diary study to accommodate new methods.
  • You might use rapid research projects to exclusively evaluate in-market products while others on the team focus on in-progress / new products.
  • Rapid research projects could be a stage-gate for larger projects — proving a customer need before larger time investments are made.

However your rapid research program takes shape, revisit its goals, scope, and operations often in relation to your organizational needs and context so that it remains relevant and delivers the highest impact.

Solid Impacts From Rapid Research

Building a rapid research program can have a big impact and can contribute positively toward your team’s long-term strategy. One impact of instituting a rapid research program could be that now your team is freed up to focus on more generative research, which unlocks your ability to deliver deep customer insights that pave the way for innovation or strategy. And due to your new rapid pace, you may be able to keep pace with agile development and conduct end-to-end research within 2-week sprints. Another impact is that you may catch more usability issues further upstream, saving you over 100x in overhead business cost. A final impact of a rapid research program is that it can double your team’s throughput, allowing your team to deliver more research, more frequently, to accommodate more organizational needs.

Be sure to track these impacts over time so that you not only get credit for the hard work you put into building the program but so that you can sustain and grow the program over time.

Considerations When Building A Rapid Research Program

As mentioned in this article, there are many benefits to building a rapid research program. That being said, rapid research has limitations in regard to its pros and cons, when it should be used, and whether you have the available time to stand up a program yourself.

Pros And Cons

As with building any new program, one should consider both its benefits as well as drawbacks. Here are a few for rapid research programs:

Pros:

  • Can free time for foundational work;
  • Rapid studies may keep a better pace with development cycles;
  • Can create meaningful opportunities for junior staff;
  • Can double project throughput, increasing output volume.

Cons:

  • Still requires work and dedicated bandwidth;
  • Another thing to diligently track and manage;
  • Not great for all types of research studies;
  • May cost more money or resources you don’t have.

Guidance For Using The Program

Rapid Research programs are best for specific types of research which do not take a long time to complete or require rigorous expertise. You may want to educate your partners on when they should expect to use a rapid research program and when they should not.

  • Use rapid research when:

    • Agility or quick turnaround is needed;
    • You need simple iterative research;
    • Stakeholder groups are easier to rally;
    • Participants are easy to reach.
  • Do not use rapid research when:

    • The study method cannot be done quickly without risking quality;
    • A highly complex or mixed-methods study is needed;
    • A project requires high visibility or stakeholder alignment;
    • You have specific, hard-to-reach participants.

Ramp Up Time

While the exact timeline of building a rapid research program varies from team to team, it does take time to do it right. Make sure to plan out enough time to do the upfront work of identifying the appropriate scope, timing, and cadence, as well as gathering consensus from leadership and appropriate stakeholder groups. Standing up a Rapid Research program can take anywhere from 3 months to 1 year, depending on the following:

  • Legal and compliance limitations or requirements.
  • The number of stakeholder groups you need buy-in from.
  • Approval of budget for outside vendors or for hiring an in-house team.
  • Time it takes to build templates, guidelines, and materials.
  • Onboarding, training, and iteration when starting out.

Conclusion

A rapid research program can be a fundamental part of your team’s UX Research strategy, enabling your team to take on new insight challenges and deliver efficient research at an unprecedented pace. Building a rapid research program with high intention by determining the goals, appropriate scope, and necessary infrastructure will set your team up for success and enable you to deliver more value for your organization as you scale your user research practice.

Don’t be afraid to try a rapid research program today!

Categories: Others Tags:

How AI is Making Web Design More Efficient

July 12th, 2023 No comments

As the world continues to move towards automation and advanced technologies, leveraging Artificial Intelligence (AI) in web design is becoming increasingly important. With AI, designers can now create websites faster, more accurately, and more efficiently than ever before.

Categories: Designing, Others Tags:

Smashing Podcast Episode 63 With Chris Ferdinandi: What Is The Transitional Web?

July 11th, 2023 No comments

In this episode of The Smashing Podcast, we’re talking about The Transitional Web. What is it, and how does it describe the technologies we’re using? Drew McLellan talks to Chris Ferdinandi to find out.

Transcript

Drew: He’s the author of the Vanilla JS Pocket Guide series, creator of the Vanilla JS Academy Training Program and host of the Vanilla JS Podcast. We last talked to him in late 2021 where we asked if the web is dead, and I know that because I looked it up on the web. So, we know he is still an expert in Vanilla JS but did you know he invented fish and chips? My smashing friends, please welcome back Chris Ferdinandi. Hi, Chris, how are you?

Chris: I’m smashing, thank you so much. How are you today, Drew?

Drew: I’m also smashing, thank you for asking. It’s always great to have you back on the podcast, the two of us like to chat about maybe some of the bigger picture issues surrounding the web. I think it’s easy to spend time thinking about the minutiae of techniques or day-to-day implementation or what type of CSS we should be using or these things but sometimes it is nice to take a bit of a step back and look at the wider landscape.
Late last year, you wrote an article on your Go Make Things website called The Transitional Web. What you were talking about there is the idea that the web is always changing and always in flux. After, I don’t know how long I’ve been doing this, 25 years or so working on the web, I guess change is pretty much the only constant, isn’t it?

Chris: It sure is. Although, to be fair, it feels like a lot of what we do is cyclical and so we’ll learn something and then we’ll unlearn it to learn something new and then we’ll relearn it again just in maybe a slightly different package which is, in many ways, I think the core thesis of the article that you just mentioned.

Drew: And is that just human nature? Is that particular to the web? I always think of, when I was a kid in the ’80s, the 1980s, okay, so we’re talking a long while back-

Chris: It was a wild time.

Drew: One of the pinnacles, if you had a bit of spending power, one of the things you'd have in your living room, was hi-fi separates. So, you'd have a tape deck, maybe a CD deck, an amplifier and I always remember as a kid, they'd all be silver starting off and those were the really cool ones. And then after a while, a manufacturer would come out with one that wasn't silver, it was black and suddenly black looked really cool and all the silver stuff looked really old. And so, then you'd have five years of everything being in black and then somebody would say, "Oh, black's so boring. Here's our new model, it's silver," and everyone would get really excited about that again.
I feel like somehow the web is slightly like that and, as I say, maybe it's a human nature thing, perhaps we're all just magpies and want to go to something that looks a bit different and a bit exciting and claim that's the latest, greatest thing. Do you think there's an element of that?

Chris: Yeah, I think that’s actually probably a really good analogy for what it’s like on what our industry has a tendency to do. I think it’s probably bigger than just that. I had a really, I don’t want to say a really good thought, that sounds arrogant. I had a thought, I don’t know if it was good or not, I forgot what it was.

Drew: Oh, it’ll come back.

Chris: So, I can’t tell you but it was related as you were talking.

Drew: I bamboozled you with talk of hi-fi separates.

Chris: Yeah, no, that’s great. It’s great.

Drew: We last talked about this concept of the lean web where we were seeing a bit of a swing back away from these big frameworks where everything is JavaScript and even our CSS was in JavaScript. And we were beginning to see at that time a launch of things like Petite Vue, Alpine.js, Preact, these smaller, more focused libraries that try and reduce the weight of JavaScript and be a little bit more targeted. Is that a trend that has continued?

Chris: Yeah, and it’s continued in a good way. So, you still see projects like that pop up, I’ve seen since then a few more tiny libraries. But I think one of the other big trends that I’m particularly both excited about and then maybe also a little bit disheartened about is the shift beyond smaller client side libraries into backend compiler tools. So, you have things like Svelte and SvelteKit and Astro which are designed to let people continue to author things with a state-based UI, JavaScripty approach but then compile all of that code that would normally have to exist in the browser and run at runtime into mostly HTML with just the sprinklings of JavaScript that you need to do the specific things you’re trying to do.

Chris: And so, the output looks a lot more like traditional DOM manipulation but the input looks a lot more like something you might write in React or Vue. So, I think that's pretty cool. It's not without, in my opinion, maybe some holes that people can fall into, and I'm starting to see some of those tools do the "hey, we solved this cool thing in an innovative way, so let's go and repeat some of the mistakes of our past, but differently" trap, and we can talk about that, that didn't make it into the article that you referenced. I recently wrote an update looking back on how things are changing that talks about where they're headed.

Chris: But I think one of the big things in my article, The Transitional Web, was this musing about whether these tools are the future or just a transitional thing that gets us from where we are to where we’re headed. So, for example, if you’ve been on the web for a while, you may remember that there was a time where jQuery was the client side library.

Drew: It absolutely was, yeah.

Chris: If you were going to do JavaScript in the web, you were going to use jQuery.

Drew: jQuery was everywhere.

Chris: Yeah. And not that you couldn't get by without it but doing something like getting all of the elements that have a class was incredibly difficult back in the IE 6 through 8 era. And jQuery made it a lot easier, it smoothed things out across browsers, it was great. But eventually browsers caught up, we got things like querySelector and querySelectorAll, the classList API, cool methods for moving elements around like append and before and after and remove. And suddenly, a lot of the stuff that was jQuery's bread and butter, you could just do across any browser with minimal effort. But not everything, there were still some gaps or some areas where you might need polyfills.
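Editor's note: for readers who want to see the shift Chris describes, here is a rough sketch, with the jQuery idiom first and the native equivalent that browsers eventually shipped below it. The class names are illustrative.

    // The jQuery era: select by class and manipulate, one consistent API.
    $('.menu-item').addClass('is-active');

    // The native equivalents that eventually caught up.
    document.querySelectorAll('.menu-item').forEach(function (item) {
      item.classList.add('is-active');
    });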

Chris: And so, you started to see these smaller tools that were … jQuery, they did some of the things but they didn't do everything so the ones that immediately come to mind for me are tools like Umbrella JS or Shoestring from the folks over at Filament Group. And the thing with those tools is they were really popular for a hot minute, everyone's like, "You don't need jQuery, use these," and then the browsers really caught up and they went away entirely. And actually, even before that fully happened, you started to see tools like React and Vue and Angular start to dominate and just, really, people either use jQuery or these other tools, they don't touch Umbrella or Shoestring at all.

Chris: So, I think the thing I often wonder is are tools like Preact and Solid and Svelte and Astro, are those more like what React did for the industry or more like Umbrella and Shoestring where they're just getting us to whatever's next. At the time that I wrote the article, I suspected that they were transitional. Now, I think my thoughts have shifted a little bit and I feel like tools like Preact and Solid are probably a little bit more and Petite Vue who are … You called it something weird because you're British, I forget, Petty Vue or something but-

Drew: Petite Vue, yeah.

Chris: No, I’m just teasing, I’m sorry. I love you, Drew.

Drew: I was attempting to go for the French so …

Chris: Right? Sorry, I have the way you guys say "herb" stuck in my head now instead of "erb" and I just can't. So, yeah, I feel like those tools are potentially transitional and the what's next, just as an industry, is, in my opinion, and a lot could change in the next year or three, the way I'm feeling now, it seems like tools like Astro and Svelte are going to be that next big wave, at least until browsers catch up a lot. So, in my opinion, the thing that browsers really need to have to make a lot of these tools not particularly necessary is some native DOM diffing method that works as easily as innerHTML does for replacing the DOM but in a way that doesn't just destroy everything.

Chris: I want to be able to pass in an HTML string and say make the stuff in this element look like this with as little messing up of things as possible. And so, until we have that, I think there's always going to be some tooling. There's a lot of other things that these tools do, like you can animate transitions between pages like you would in an SPA. We've got a new API that will hopefully be hitting the browser in the near future, it works in Chrome Canary now but nowhere else, the View Transitions API. There's an API in the works for sanitizing HTML strings so that you don't do terrible cross-site scripting stuff, hasn't really shipped anywhere yet but it's in the works.

Chris: So, there’s a lot of library-like things in the works but DOM diffing, I think, is really the big thing. So much of how we build for the web now is grab some data from an API or a database and then dynamically update the UI based on things the user does. And you can do that with DOM manipulation, I absolutely have but, man, it is so much harder to do. So, really, I get the appeal of state-based UI. The flip side is we also use state-based UI for a lot of stuff where it’s not appropriate and it ends up being harder to manage and maintain in the long run. So, I’m rambling, I’m sorry. Drew, stop me, ask [inaudible 00:09:57].

Drew: Yeah, I don’t want to gloss over the importance of jQuery as an example for this overall trend because, as you say, at the time, it was really difficult to just find things in the DOM to target something. You could give things an ID and then you had get element by ID and you could target it that way. But say you wanted to get everything with a certain class, that was incredibly difficult to do because there was no way of accessing the class list, you could just get the attribute value and then you would have to dissect that yourself. It’s incredibly inefficient to try and get something by class and what jQuery did was it took an API that we were already familiar with, the CSS selector API essentially, and implemented that in JavaScript.

Drew: And, all of a sudden, it was trivially easy to target things on the page which then made it … It very quickly just became the de facto way that any JavaScript library was allowing you to address elements in the DOM. And because of that trend, because that's how everybody was wanting to do it with quite a heavy JavaScript implementation, let's not forget this was not a cheap thing to do, the web platform adapted and we got querySelector, which does the same thing, and querySelectorAll. And of course, then what jQuery did, or its selector engine, which was called Sizzle, I think, under the hood: Sizzle then adopted querySelectorAll as part of its implementation.

Drew: So, if a selector could be resolved using the native one, it would. So, actually, the web platform was inspired by jQuery and then improved jQuery in this whole cycle. So, I think the way the web has always progressed is observing what people are doing, looking at the problems that they’re trying to solve and the messes of JavaScript that we’re using to try and do it and then to provide a native way to do that which just makes everything so much easier. Is that the ultimate trend? Is that what we’re looking at here?

Chris: Yeah, for sure. I often describe jQuery as paving the cow paths. So many of the methods that I love and use in the browser, I owe entirely to jQuery and I think recognizing that helped me get less angry at some of the damage that modern frameworks do or modern libraries because the reality is they are … I think the thing is a lot of them are experiments that show alternate ways to do things and then we have a tendency as an industry to be like, "If it's good for this, it's good for everything." And so, React is very good at doing a specific set of things in a specific use case and, through some really good marketing from Facebook, it became the de facto library of the web.

Chris: I think tools like Astro and Svelte are similarly showing a different way we can approach things that involves authoring and adding a compile step. And they are, by no means, original there, static site generators have existed for a while, they just layer in this "and we'll also spit out some reactive, interactive bits and you don't have to figure out how to do that or write your own JavaScript for it, just write the stuff, we'll figure out the rest." So, yeah, I do think that's the nature of the web platform is libraries are experiments that extend what the platform can already do or abstract away some of the tough stuff so that people can focus on building and then, eventually, hopefully, the best stuff gets absorbed back into the platform.

Chris: The potential problem with that model is that, usually, by the time that happens, the tooling has both gotten incredibly heavy to the detriment of end users and has become really entrenched. So, even though the idea … Think of jQuery, we talk about it in the past tense but it’s still all over the web because these sites that were built with it aren’t just going to rip it out, it’s a lot of work to do that. And there’s a lot of developers even today who, when they start a new project, they reach for jQuery because that’s what they learned on and that’s what they know and it’s easiest for them.

Chris: So, these tools just really have persistence for better or for worse. It’s great if you invested a lot of time in learning them, it’s great job security, you’re not wasted time. But a lot of these tools are very heavy, very labor-intensive for the browser and ultimately result in a more fragile end user experience which is not always the best thing.

Drew: I remember, at one point, there was a movement calling for React to actually be shipped with the browser as a way of offsetting the penalty of downloading and parsing all that script. It is frustrating because it's like, okay, you're on the right path here, this functionality should be native to the browser, but then, crucially, at the last moment, you swerve and miss and it's like, no, we don't want to embed React, what we want to do is look at the problems that React is helping people solve, look at the functionality that it's providing and say, okay, how can we take the best version of that and work that into the web platform. Would you agree?

Chris: Yeah. Yeah, exactly. React will eventually be in the browser, just not the way everybody … I think a lot of people talk about it as in literally, the same way jQuery is in the browser now, too. We absorb the best bits, put some different names on them, arguably more verbose, clunky, difficult to use names in many cases and so I think that’s how it’ll eventually play out. The other thing that libraries do that I wish the web platform was better at, since we’re on this path, is just API consistency.

Chris: So, it’s one thing that jQuery got, really, is the API is very consistent in terms of how methods are authored and how they work. And just, as a counterpoint, in JavaScript proper, just native JavaScript, you could make a strong argument that querySelector and querySelectorAll shouldn’t be separate methods, it should just be one method that has a much shorter name that always returns … Hell, I’d even argue an array, not a node list because there are so many more methods that you can use to loop over arrays and manipulate them to nodes or node lists.

Chris: Why is the classList API a set of methods on a property instead of just a set of methods you call directly on the element? So, why is it classList add, classList remove instead of add class, remove class, toggle class, et cetera. It’s just lots of little things like that, this death by a thousand cuts, I think, exacerbates this problem that, even when native methods do the thing, you still get a lot of developers who reach for tooling just because it smooths over those rough edges, it often has good documentation. MDN fills the gap but it’s not perfect and, yeah.
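Editor's note: the rough edges Chris mentions are why many vanilla JavaScript developers keep a couple of tiny helpers around. A common hand-rolled sketch:

    // Short names, and always a real array back, so every array method works.
    const $ = (selector, context = document) => context.querySelector(selector);
    const $$ = (selector, context = document) => [...context.querySelectorAll(selector)];

    // Usage: no NodeList caveats to remember.
    $$('.menu-item').filter((el) => el.classList.contains('is-active'));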

Drew: Yes, using a well-designed framework, the methods tend to be guessable. If you've seen documentation that includes a remove method, you could probably guess that an add method is the opposite of that because that's how anybody would logically name it. But it's not always that way with native code, I guess, because of reasons, I don't know, design by committee, historical problems. I know that, at one point, there were, was it MooTools or Prototype or some of these old frameworks, that would add their own methods and basically meant that those names couldn't be reused for compatibility reasons.

Chris: Yeah, I remember there was that whole SmooshGate thing that happened where they were trying to figure out how to … I think it was the flat method or whatever that originally was supposed to be called. MooTools had an array method of the same name attached to the array prototype. If the web standards committee implemented it the way they wanted to, it would break any website using MooTools, a whole thing.

Drew: In some ways, it seems laughable, any website using MooTools. If your website’s using MooTools, good luck at this point. But it is a fundamental attribute of the web that we try not to break things that, once it’s deployed, it should keep running and a browser update isn’t going to make your use of HTML or CSS or whatever invalid, we’re going to keep supporting it for as long as possible. Even if it’s been deprecated, the browsers will keep supporting it.

Chris: Yeah. I was just going to say, the marquee element was deprecated ages ago and it still works in every major browser just for legacy reasons. It’s that core ethos of the web which is the thing I love. I think it’s a good thing, yeah.

Drew: It is but, yes, it is not without its problems as we've seen. It's, yeah, yeah, very difficult. You mentioned the View Transitions API, which I think now may be more broadly supported. I don't know if I saw it in one of the web.dev posts, but as of this month it has better support. It's like an SPA-style transition between one state and another, but you can do it with multi-page apps.
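Editor's note: at the time of recording, the same-document version of the View Transitions API looked roughly like this; support was limited to Chromium browsers, so feature detection was essential. updateDOM is a stand-in for whatever swaps your content.

    function navigate(updateDOM) {
      // Fall back gracefully where the API doesn't exist.
      if (!document.startViewTransition) {
        updateDOM();
        return;
      }
      // The browser snapshots the old state, runs the callback,
      // then animates between the old and new states.
      document.startViewTransition(() => updateDOM());
    }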

Chris: Yeah. A Discord I'm in, just quick shout out to the Frontend Horse Discord, Adam Argyle was in there today talking about this, because he built this slide demo thing where every slide is its own HTML file and it uses view transitions to make it look like it's just one single page app. He was saying that it still does require Chrome Canary with a flag turned on but things change quickly, very slowly and then all at once, that's the-

Drew: Well, that’s pretty up to date and an authoritative statement there, yeah. But we saw, it was Google IO recently, we saw loads of announcements from them back to things they’re working on. Things like the popover API, which is really interesting, which make use of this top layer concept where you don’t have to futz around with Z index to make sure, if something needs to be on the top, it can be on the top. It’s these sorts of solutions that you get from the web platform that are always going to be a bit of a hack if they’re implemented by a library in JavaScript.

Drew: It’s the fact that you can have a popover that you can always guarantee is going to be on top of everything else and has baked into its behavior so that it can be accessibly dismissed to all those really important subtleties that it’s so easy to get wrong with a JavaScript implementation that the web platform just gets right. And I guess that means that the web platform is always going to move more slowly than a big framework like React or what have you but it does it for a reason because every change is considered for, I don’t know, robustness and performance and accessibility and backward compatibility. So, you end up with, ultimately, a better solution even if it has weird method names.

Chris: For sure. Yeah, no, that’s totally fair.

Drew: I think-

Chris: Yeah.

Drew: … we had-

Chris: Oh, sorry. Go ahead, Rich. Drew rather.

Drew: I was about to mention Rachel, we had Rachel Andrew on the show a few episodes ago talking about Google Baseline, which is their initiative to say which features are broadly supported, replacing the browser support matrix idea. And if you look at the posts that Rachel writes, "What's new on the web", on web.dev every month, she does a roundup of what's now stable and what you can use, and there's just a vast amount being added to the web platform all the time. It could be a change log from a major framework because it is a major framework, it's the native web platform, but there's just things being added all the time.
Is there anything in particular that you’ve seen that you think would make a big difference or are you just hanging out for that DOM diffing after all those things that are yet to come?

Chris: Yeah. So, things like transitions between pages and stuff, I’m going to be honest, those don’t excite me as much as I think they excite a lot of other people. I know that’s a big part of the reason why a lot of developers that I know really like SPAs and, I don’t know, markers, get really excited about that thing, I’ve just never really understood that. I am really holding out for a DOM diff method. I think the API I’m honestly most excited for is the Temporal API which is still in, I think, stage three so it’s not coming anytime soon. But working with dates in JavaScript sucks and the Temporal API is hopefully going to fix a lot of those issues, probably introduce some new ones but fix most of them.

Drew: This is new to me. Give us a top level explanation of what’s going on with that one.

Chris: Oh, yeah, sure. So, one of the big things that’s tough to do with the date object in JavaScript … For me, there’s two big things that are really particularly painful. One of them is time zones. So, trying to specify a time in a particular time zone or get a time zone from a date object. So, based on when it was created or how it was created, no, this is the time zone. Figuring out the time zone the person is in is really difficult. And then the other aspect that’s difficult is relative time. So, if you’ve got two different dates and you want to just quickly figure out how much time is between them, you can do it but it involves doing a bunch of math and then making some assumptions especially once you get past days or, I guess, weeks.

Chris: So, I could easily look at two date objects, grab timestamps from them and be like, “Okay, this was two weeks ago or several days.” But then, once you start getting into months, the amount of days in a month varies. So, if I don’t want to say 37 weeks, I want to say, however many months that ends up being, it’s going to vary based on how long the months were. And so, the Temporal API addresses a lot of those issues. It’s going to have first class support for time zones, it’s going to have specific methods for getting relative time between two temporal objects and, in particular, one where you hopefully won’t have to …

Chris: It’s been a while since I’ve read the spec but I’m pretty sure it allows you to not have to worry about, if it’s more than seven days, show in weeks, if it’s more than four weeks, use months. You can just get a time string that says this was X amount of time ago or is happening N amount of time in the future or whatever. So, there’s certain things the Date API can do relatively or the date object can do relatively well but then there’s a couple of you’re trying to do appy stuff with it.

Chris: For example, I once tried to build a time zone calculator so I could quickly figure out when some of my colleagues in other parts of the world, when it was for them. And it was just really hard to account for things like, oh, most of Australia shifted daylight savings time this month but this one state there doesn’t, they actually do it a different month or not at all and so it was a huge pain.

Drew: Yeah, anything involving time zones is difficult.

Chris: Yeah. It’s one of the biggest problems in computer science. That and, obviously, naming things. But yeah, it will smooth over a lot of those issues with a nicer, more modern API. If you go over to tc39.es/proposaltemporal, they have the docs of the work in progress or the spec in progress. It’s authored a lot more nicely than what you might normally see on, say, the W3C website in terms of just human readability but you can tell they borrowed a lot of the way the API works from libraries like Moment.js and date-fns and things like that. Which again gets back to this idea that libraries really pave those cow paths and show what a good API might look like and then the best ones usually win out and eventually become part of the browser.

Drew: And again, back to my point about the web platform getting the important details right, if you've got native date objects, you're going to be able to represent those as localized strings, which is a whole other headache. I've used libraries that will tell you, "Oh, this blog post was posted two weeks ago," but it'll give you the string "two weeks ago" and there's no way to translate that or, yeah. So, all those details, having it baked into the platform, that's going to be super good [inaudible 00:27:24].

Chris: Smashing, one might even say.
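Editor's note: the formatting half of what Drew describes does already exist in browsers as Intl.RelativeTimeFormat; what it cannot do is the calendar math to pick the value and unit, which is the part Temporal is meant to cover.

    // Localized relative-time strings without a library.
    const rtfEn = new Intl.RelativeTimeFormat('en', { numeric: 'auto' });
    console.log(rtfEn.format(-2, 'week')); // "2 weeks ago"

    const rtfFr = new Intl.RelativeTimeFormat('fr', { numeric: 'auto' });
    console.log(rtfFr.format(-2, 'week')); // "il y a 2 semaines"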

Drew: Smashing, yeah. It raises a question because the standards process takes time and paving the cow paths, there has to be a cow path before you pave it. So, does that approach always leave us a step behind what can be done in big frameworks?

Chris: Yeah, theoretically. I think we can look at an example where this didn’t work with the Toast API that Google tried to make happen a few years back. That was done relatively quickly, it was done without consensus across browsers, I don’t think it really leaned heavily on … I think it was just doing what you described, the paving before the cow paths were there and so it was just met with a lot of resistance. But yeah, I think the platform will always be a bit behind, I think libraries are always going to be a part of the web. Even as the Vanilla JS guy, I use libraries all the time for certain things that are particularly difficult.

Chris: For me, that tends to be media stuff. So, if I need to display really nice photo galleries that expand and shrink back down and you can slide through them, I always grab a library for that, I’m not coding that myself. I probably could, I just don’t want to, it’s a lot of little details to manage.

Drew: It’s a lot of work, yeah.

Chris: Yeah. So, I do, I think the platform will always be behind, I don’t think that’s necessarily a bad thing. I think, for me, the big thing I’ve wished for years is that we run through this cycle as an industry where a little tool comes out, does a thing well, throws in more and more features, gets bigger and bigger, becomes a black hole and just sucks up the whole industry. I keep picking on React but React is the library right now. And then eventually people are like, “Oh, this is big, maybe we should not use something as big,” and then you start to see little alternatives pop up.
And I really wish that we stopped doing that whole bigger black hole thing and the tools just stayed little and people got okay with the idea that you would pull together a bunch of little tools instead of just always grabbing the behemoth that does all the things. I often liken it to people always go for the Swiss Army knife when they really just need a toothpick or a spoon or a pair of scissors. Just grab the tool you need, not the giant multi-tool that has all this stuff you don’t.

Drew: It almost comes back to the classic Unix philosophy of lots of small tools that do specific things that have a common interface between them so that you can change stuff together.

Chris: And that’s probably where … Now that you’re saying it, I hadn’t really considered this but that’s probably where the behavior or the tendency arises is, if you have a bunch of small libraries from the same author, they often play together very nicely. If you don’t, they don’t always, yeah, it’s tougher to chain them together or connect those dots. And I really wish there was some mechanism in place that incentivized that a little bit more, I don’t know. I got nothing but I hadn’t really considered that until you just said it.

Drew: Maybe it needs to be a web platform feature to be able to plug in functionality.

Chris: Yeah. Remember in jQuery, I think it was called the extend method, they had some hook where, if you were writing a plugin, basically you would attach to the jQuery prototype and add your own things in a non-destructive way. I wish there was some really lightweight core that we could bolt a bunch of stuff onto, that would be nice.
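Editor's note: the hook Chris is recalling is jQuery's plugin convention: anything attached to jQuery.fn becomes chainable alongside the built-in methods. A minimal sketch; the highlight plugin is invented for illustration.

    // Adding to jQuery.fn extends every jQuery object non-destructively.
    jQuery.fn.highlight = function () {
      return this.each(function () {
        this.classList.add('is-highlighted');
      });
    };

    // Usage, chainable like any built-in method:
    // $('.menu-item').highlight().fadeIn();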

Drew: Yes, and I think that would need to come from the platform rather than from any third party because otherwise the interface would never be agreed upon.

Chris: Very true.

Drew: You talk a lot about Vanilla JavaScript as a concept, I think it helps to give things names. I feel like this approach that we’re talking about here is being web platform native. Do you think that describes it accurately?

Chris: Yes. Yeah, definitely.

Drew: Yeah. So, you’ve talked about still reaching for libraries and things where necessary. Would you say that, if it is our approach to pave the cow paths that, really, the ecosystem needs these frameworks to be innovating and pushing the boundaries and finding the requirements that are going to stick, are they just an essential part of the ecosystem and maybe not so [inaudible 00:32:08]-

Chris: Yeah, probably more than I-

Drew: … to paint.

Chris: Yeah. I think, more than I'd like to admit, they are an essential part of the ecosystem. And I think what it comes back to for me is I wish that they did the one thing well and stayed a relatively manageable size. Preact, for example, has done a really great job of adding more features and still keeping themselves around three kilobytes or so, minified [inaudible 00:32:33], which is pretty impressive considering how much like React the API is, and they have fewer abstractions internally, so a lot of the dynamic updates, you, the user, Drew, interact with the page, some state changes, and a render happens, end up happening orders of magnitude faster than they do in React as well.
Now, to be fair, a lot of the reason why is that Preact is newer and benefits from a lot of modern JavaScript methods that didn't exist when React was created. So, under React's hood, there's a lot more abstraction happening, but it'd probably require a relatively big rewrite of React to fix that.

Drew: And we know those are always popular.

Chris: Yeah. They are dangerous, I understand why people don't like to do them. I've done it multiple times, I always end up shooting myself in the foot, it's not great.

Drew: So, say that I’m a React developer and I’m currently, day-to-day, building client side SPAs but I really like the sound of this more platform native approach and I want to give it a try for my next project. Where should I start? How do I dip a toe into this world?

Chris: Oh, it depends. So, the easiest way, and I hate myself for saying this, but the easiest way, honestly, you got a few options. One of them, you rip out React, you drop in Preact, there's a second smaller thing you need to smooth over some compatibility between the two, but that's going to give you just an instant performance boost, a reduction in file size, and you can keep doing what you were doing. The way that I think is a little bit more future-proof and interesting, you grab a tool like Astro, which allows you to literally use React to author your code and then it's going to compile that out … Excuse me, into mostly HTML, some JavaScript, it's going to strip out React proper and just add the little interactivity bits that you need.
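Editor's note: the "second smaller thing" Chris mentions is most likely preact/compat, Preact's React compatibility layer. The usual setup is a bundler alias, sketched here for webpack; other bundlers have equivalent options.

    // webpack.config.js
    module.exports = {
      resolve: {
        alias: {
          react: 'preact/compat',
          'react-dom': 'preact/compat',
        },
      },
    };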

Chris: I saw a tweet a year or two ago from Jason Lengstorf from the Netlify developer relations team about how he took a Next.js app that he had built, kept 90% of the code, he just made a few changes to make it fit into the way Astro hooks into things, ran the Astro compiler, and he ended up having the same exact site with almost all of the same code, but the shipped JavaScript was 90% smaller than what he had put in. And you get all the performance and resilience wins that come with that just automatically, just by slapping a compiler on top of what you already have.

Chris: So, I’m really excited about a tool like Astro for that reason. I’m also a little bit worried that a tool like Astro becomes a band-aid that stops us from addressing some of the real systemic issues of always reaching for these tools. Because you can just keep doing what you’re doing and not really make any meaningful changes and temporarily reduce the impact of them, I don’t know that it really puts us in a better place as an industry in the long run. Especially since tools like Svelte and Astro are now working towards this idea that, rather than shipping multi-page apps, they’re going to ship multi-page apps that just progressively enhance themselves into single page apps with hydration and now we’re right back to we’ve got an SPA.

Chris: So, I mentioned some stuff has changed, I recently saw a talk from Rich Harris, who's the creator of Svelte and SvelteKit, about this very thing and he's very strongly of the belief that SPAs are better for users because you're not fetching and rerunning all of the JavaScript every time the page loads. And I get that argument and SvelteKit does it in a really cool way where, rather than having a Link component like you might get in Next.js or something like that, React Router or whatever, they just intercept traditional hyperlinks and do some checking to see if they point to your current page or an external site and behave accordingly.

Chris: The thing that nobody ever talks about when they say SPAs are better is all of the accessibility stuff that they tend to break that you then need to bolt back in. So, even if you're like, "Okay, this library is going to handle intercepting the links and finding the page and doing all the rendering and figuring out what needs to change and what stays the same," there's this often missed piece around how do you let someone who's using a screen reader know that the UI has changed and how do you do it in a way that's not absolutely obnoxious. You don't want to read the entire contents of the page so you can't just slap an aria-live attribute on there.

Chris: Do you shift focus to the H1 element on the page? What happens if the user didn’t put an H1 element on the page? Do you have some visually hidden element that you drop some text in saying page loaded so that they know? Do you make sure you shift focus back to the top so they’re not stranded halfway down the page if they’re a keyboard user? It’s one of those things where how you handle is very it depends, contextual. And I think it’s really tough for a library to implement a solution that works for all use cases. I think it’s optimistic to assume the developers will always do the right thing.
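Editor's note: one common pattern for the screen reader questions Chris raises; as he says, the right answer is contextual, so treat this as a sketch rather than a universal fix. The #route-announcer element is assumed to be a visually hidden container with aria-live="polite" already in the page.

    function afterRouteChange(pageTitle, mainEl) {
      // Announce the change without reading out the whole page.
      document.querySelector('#route-announcer').textContent = pageTitle + ' loaded';
      // Move keyboard focus to the top of the new content.
      mainEl.setAttribute('tabindex', '-1');
      mainEl.focus();
    }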

Chris: I mentioned at the very start that I’m excited about these tools but I also see them doing that let’s repeat the same mistakes all over again and this feels like that to me. I absolutely understand why, on certain very heavy sites, you might want to shift to an SPA model but there are also just so many places you can really do real harm to yourself or your users when going down that path. And so, I worry that these tools came up to solve a bunch of UI or UX and performance related issues with state-based UI just to then re-implement them in a different way eventually. That’s my soapbox on that. If you have any questions or comments, I’m happy to hear them.

Drew: So, as often happens when we talk, we get all the way to the end and conclude that we’re doomed.

Chris: We’re not. I think it’s mostly we’re headed in a right direction, Drew. I’m a little less doom and gloom than I was a few years ago. And as much as I just ragged on tools like Astro and Svelte, I think they’re going to do a lot of good for the industry. I just love the move to mostly HTML, sprinkle in some JavaScript, progressively enhance some things, that’s a beautiful thing. And even though I was just ragging on the whole SPA thing that these tools are doing, one of the things they also do that’s great is, if that JavaScript to enhance it into an SPA doesn’t load or fails for some reason, Astro and SvelteKit fall back to a multi-page app with server side HTML.
So, I think that promise of, what was it, isomorphic apps they used to call them a while ago, it may be closer to that vision being realized than we’ve ever gotten before. I still personally think that just building multi-page apps is often better but I’m probably in the minority here, I often feel like I’m the old man shouting at the cloud.

Drew: And yes, as often happens, it all comes round to progressive enhancement being a really great solution to all of our problems. Maybe not all of our problems but some of them around the web.

Chris: It’s going to cure global hunger, you watch.

Drew: So, I’ve been learning all about being web platform native. What have you been learning about lately, Chris?

Chris: I’ve been trying to finally dig into ESBuild, the build tool/compiler I’ve been using, Rollup, and a separate NPM SaaS compiler thing and my own cobbled together build tool for years. And then Rollup V3 came out and broke a lot of my old stuff if I upgrade to it so I’m still on Rollup two and this was the motivation for me to finally start looking at ESBuild which also has the ability, I learned, to not just compile JavaScript but also CSS and will take nasty CSS imports and concatenate them all into one file for you just like ES modules would.

Chris: So, now I’m over here thinking like, “Oh, is it finally time to drop SaaS for native CSS?” and, “Oh, all these old SaaS variables I have, I should probably convert those over to CSS variables.” And so, it’s created this whole daisy chain of rabbit hole for me in a very good way because this is the kind of thing that keeps what we do professionally interesting is learning new things.

Drew: You’ll be the Vanilla CSS guy before we know it.

Chris: That’s Steph Eckles. She is much better at that than I am. I reach for her stuff all the time but, yeah, maybe a little bit.

Drew: If you, dear Listener, would like to hear more from Chris, you can find his social links, blog posts, developer tips newsletter and more at gomakethings.com. And you can check out his podcast at vanillajspodcast.com or wherever you’re listening to this. Thanks for joining us today, Chris. Did you have any parting words?

Chris: No, Drew, just thank you so much for having me. I always enjoy our chat so it was great to be here.

Categories: Others Tags:

How To Become A Better Speaker At Conferences

July 11th, 2023 No comments

During my time curating the UX London and Leading Design events, I used to watch a few hundred presentations each year. I'd be looking at a range of things, including the speaker's domain experience and credibility, their stage presence and ability to tell a good story, and whether their topic resonated with the current industry zeitgeist. When you watch that many presentations, you start to notice patterns that can either contribute to an absolutely amazing talk or leave an audience feeling restless and disengaged. But before you even start worrying about delivery, you need to secure yourself a spot on the stage. How? Follow along, and let's find out!

Choosing What To Speak About

I think one of the biggest misunderstandings people have about public speaking is the belief that you need to come up with a totally new and unique concept — one that nobody has spoken about before. As such, potentially amazing speakers will self-limit because they don't have "something new to share." While discovering a brand new concept at a conference is always great, I can literally count the number of times this has happened to me on one hand. This isn't because people aren't constantly exploring new approaches.

However, in our heavily connected world, ideas tend to spread faster than a typical conference planning cycle, and the type of people who attend conferences are likely to be tapped into the industry zeitgeist already. So even if the curator does find somebody with a groundbreaking new idea, by the time they finally get on stage, they've likely already tweeted about it, blogged about it, and potentially spoken about it at several other events.

I think the need to create something unique comes from an understandable sense of insecurity.

“Why would anybody want to listen to me unless I have something groundbreaking to share?”

The answer is actually more mundane than you might think. It's the personal filter you bring to the topic that counts. Let's say you want to do a talk about OKRs (Objectives and Key Results) or Usability Testing — two topics which you might imagine have been "talked to death" over the years. However, people don't know the specific way you tackled these subjects, the challenges you personally faced, and the roadblocks you overcame. There's a good chance that people in your audience will already have some awareness of these techniques. Still, there's also a good chance that they've been facing their own challenges and want nothing more than to hear how you navigated your way around them, hopefully in an interesting and engaging manner.

Also, let’s not forget that there are new people entering our industry every day. There are so many techniques I’ve made the mistake of taking for granted, only to realize that the people I’m talking to have not only never practiced them before but might not have even come across them; or if they have, they might have only the scantest knowledge about them, gleaned from social media and a couple of poorly written opinion pieces.

In fact, I think our industry is starting to atrophy as techniques we thought were core ten years ago barely get a mention these days. So just because you think a subject is obvious doesn't mean everybody feels the same, or that there isn't room for new voices or perspectives on it.

Another easy way to break into public speaking is to do some kind of case study. So think about an interesting project you did recently. What techniques did you use, what approach did you take, what problems did you encounter, and how did you go about solving them? The main benefit of a case study type talk is that you’ll know the subject extremely well, which also helps with the nerves (more on this later).

Over the past few years, Smashing Magazine has published many excellent, very detailed case studies — take a look at this list for some inspiration.

Invest The Right Amount Of Time Doing Prep

Another thing people get wrong about public speaking is feeling the need to write a new talk every time. This also comes from insecurity (and maybe a little bit of ego as well). We feel like once our talk is out in the world, everybody will have seen it. However, the sad reality is that the vast majority of people won’t be rushing to view your talk when the video comes online, and even if they do, there’s a good chance they’ll only have taken in a fraction of what you said if they remember any of it at all.

It’s also worth noting that talks are super context-sensitive. I remember watching a talk from former Adaptive Path founder Jeff Veen at least five times. I enjoyed every single outing because while the talk hadn’t changed, I had. I was in a different place in my career, having different conversations and struggling with different things. As such, the talk sparked whole new trains of thought, as well as reminded me of things I knew but had forgotten.

It should be mentioned that, like music or stand-up comedy, talks get better with practice. I generally find that it takes me three or four outings before a talk I'm giving really hits its stride. Only then have I learnt which parts resonate with the audience and which parts need more work, how to improve the structure and cadence, moving sections around for a better flow; I've learnt which bits people find funny (some intentional and some not), and how to use pacing and space to make the key ideas land. If you only give a talk once, you'll be missing out on all this useful feedback and delivering something that's, at best, 60% of what it could potentially be.

On a practical level, a 45-minute talk can take a surprisingly long time to put together. I reckon it takes me at least an hour of preparation for every minute of content. That’s at least a week’s worth of work, so throwing that away after a single outing is a huge waste. Of course, that’s not what people do. If the talk is largely disposable, they’ll put a lot less effort in, often writing their talk “the night before the event.” Unless you’re some sort of wunderkind, this will result in a mediocre talk, a mediocre performance, and a low chance of being asked to come back and speak again. Sadly this is one of the reasons we see a lot of the same faces on the speaker circuit. They’re the ones who put the effort in, deliver a good performance, and are rewarded with more invites. Fortunately, the quality bar at most conferences is so low that putting a little extra time into your prep can pay plenty of dividends.

Nailing The Delivery

As well as 45 minutes being a lot of content to create, it’s also a lot of content to sit through. No matter how interesting the subject is, a monotone delivery will make it very hard for your audience to stay engaged. As such, nailing the delivery is key. One way to do this is to see public speaking for what it is — a performance — and as the performer, you have a number of tools at your disposal. First of all, you can use your voice as an instrument and try varying things like speed, pitch, and volume. Want to get people excited? Use a fast and excitable tone. Want people to lean in and pay attention? Slow down and speak quietly. Varying the way you speak gives your talks texture and can help you hold people’s attention for longer.

Another thing you can use is the physical space. While most people (including myself) feel safe and comfortable behind the lectern, the best speakers use the entire stage to good effect, walking to the front of the stage to address the audience in a more human way or using different sides of the stage to indicate different timelines or parts of a conversation.

Storytelling is an art, so consider starting your talk in a way that grabs your audience’s attention.

This generally isn’t a 20-minute bio of who you are and why you deserve to be on the stage. One of the most interesting talks I ever saw started with one such lengthy bio causing a third of the audience to get up and walk out. I felt really bad for the speaker — who was visibly knocked — so I stuck with it, and I’m so glad I did! The talk turned out to be amazing once all the necessary cruft was removed.

“When presenting at work or [at a conference or] anywhere else, never assume the audience has pledged their undivided attention. They have pledged maybe 60 seconds and will divide their attention as they see fit after that. Open accordingly.”

Mike Davidson

A little trick I like to use is to start my talk in the middle of the story: "So all of a sudden, my air cut out. I was in a cave, underwater, in the pitch black, and with less than 20 seconds of air in my lungs." Suddenly the whole audience will stop looking at their phones. "Wait, what?" they'll think. What's happening? Who is this person? Where are they? How did he get there? And what the hell does this have to do with design? You've suddenly created a whole series of open questions which the audience desperately wants closed, and you've just bought yourself five minutes of their undivided attention in which to start your delivery.

Taking too long to get to the meat of the talk is a common problem. In fact, I regularly see speakers who have spent so long on the preamble that they end up rushing the truly helpful bits. One of the reasons folks get stuck like this is that they feel the need to bring everybody up to the same base level of knowledge before they jump into the good stuff. Instead, it’s much better to assume a base level of knowledge. If the talk stretches your audience’s knowledge, that’s fine. If it goes over some people’s heads, it might encourage them to look stuff up after the event. However, if you find yourself teaching people the absolute basics, there’s a good chance the more experienced members of the audience will zone out, and capturing their interest will become that much harder later on.

When speakers don’t give themselves enough time to prepare a good narrative, it’s easy to fall back on tried and tested patterns. One of these is the “listicle talk” where the speaker explains, “Here are twelve things I think are important, and I’m going to go through them one by one.” It’s a handy formula, but it makes people super-conscious of the time. (“Crikey, they’re still only at number five! I‘m not sure I’m going to make it through another seven of these points.”)

In a similar vein are the talks that are little more than a series of bullet points the speaker reads through. The problem is that the audience is likely to read through them much quicker than the speaker, so people basically know what you're going to say in advance. As such, keep these sorts of lists in your speaker notes and pick some sort of title or image that illustrates the points you're about to make instead.

Tame The Nerves

Public speaking is unnatural for us, so everybody feels some level of stress. I have one friend who is an absolutely amazing speaker on stage — funny, charming, and confident — but an absolute wreck moments before. In fact, it’s fairly common before going on stage to think, “Why the hell am I doing this to myself?” only to come off the stage 45 or 60 minutes later thinking, “That was great. When can I do it again!”.

One way to minimize these nerves is to memorize the first five minutes of your talk. If you can go on the stage with the first five minutes in the bag, the nerves will quickly subside, and you’ll be able to ease into your presentation some more. This is another reason why starting with a story can be helpful, as they’re easy to remember and will give you a reasonable amount of creative license.

A sure way to tame the nerves is to feel super-prepared and practiced; as such, it’s worth reading your talk out loud a bunch of times before you deliver it to an audience. It’s amazing how often something sounds logical when you say it in your head, but it doesn’t quite flow properly when said out loud. Practicing in front of people is very helpful, so consider asking friends or colleagues if you could practice on them. Also, consider doing a few dry runs with a local group before getting on a bigger stage. The more you know your material, the less nervous you’ll feel.

While some speakers like to brag about how little prep they’ve done or how little sleep they’ve got the night before, don’t get tricked into thinking that this is the standard approach. Often these folks are actually very nervous and are saying things like this in an attempt to preempt or excuse potential poor performance. It reminds me of the kids at school who used to claim they didn’t study and revise the material and ended up getting a B-. They almost certainly did some revision, albeit probably not enough. But this posturing was actually a way for them to manage their own shortcomings. “I bet I could have gotten an A if I’d put some extra work in.” Instead, make sure you’re well prepared, well rested, and set yourself up for success.

It’s worth mentioning that most people get nervous during public speaking, even if they like to tell you otherwise. As such, nerves are something you just need to get better at managing. One way to do this is to re-frame “nerves” (which have negative connotations) to something more positive like “excitement.” That feeling of excitement you get before giving a talk can actually be a positive thing if you don’t let it get out of hand. It’s basically your body’s way of getting you ready to perform.

However, this excess energy can bleed out in some less helpful ways, such as the “speaker square dance.” This is where speakers either shift their weight from one foot to the other or take a step forwards, a step to the side, and a step back, like some sort of a caged zoo animal. Unfortunately, this constant shifting can feel very unsettling and distracting for the audience, so if you can, try to plant your feet firmly and just move with deliberate intent when you want to make a point.

It’s also great if you can try to minimize the “ums” and “ahs.” People generally do this to give themselves pause while they’re thinking about the next thing they want to say. However, it can come across as if you are a little unprepared. Instead, do take actual breaks between concepts and sentences. At first, it can feel a bit weird doing this on stage, but think of it as an aural whitespace, making it easier for your audience to take in one concept before transitioning on to the next.

Note: I feel comfortable calling these behaviors out as they’re both things I personally do, and I’m working on fixing them — with mixed results so far.

Avoid The Clichés

At least once during every conference I attend, a speaker will say something jokingly along the lines of “I’m the only one standing between you and tea/lunch/beer.” It’s meant as a wry apology, and the first time I heard it, I gave a gentle chuckle. However, I’ve been at some conferences where three speakers in a row had made the same joke. Apart from a lack of originality, this also shows that the speakers haven’t actually been listening to the other talks, probably because they’ve been in their room or backstage, tweaking their slides. Sometimes this is necessary, but I always appreciate speakers who have been engaged with the content, make references to earlier talks, and don’t trot out the same old clichés as the previous speakers.

Note: I should also add that personally, I find the joke (“the only thing between you and beer”) by the last speaker of the day somewhat problematic, as it implies that people are here for the drink rather than the conference content, and because it also somewhat normalizes overconsumption.

One thing some speakers do in order to calm their nerves (and also to increase people's engagement as a side-effect) is audience participation. If you get people from the audience interacting with each other for five minutes, it takes some of the pressure and focus off of you. It's also five fewer minutes of content you need to prepare.

However, I see audience participation go wrong far more often than it goes right. This happens especially in front of Brits and Northern Europeans, who would rather curl up into a ball and die than risk the social awkwardness of talking to their neighbors.

I remember seeing one American speaker walk up the aisle at a European conference encouraging the audience to whoop and cheer while they high-fived everybody or at least attempted to high-five everybody. Although this sort of hype-building might have worked well in Vegas, the assembled audience of Northern Europeans found the whole episode deeply embarrassing, and the speaker never truly recovered for the rest of the talk.

And, on the opposite side of things, if you do get your audience interacting, it can be quite hard to get them to stop a few minutes later! I have seen far too many speakers ask people to introduce themselves to their neighbors, only to cut them off 30 or so seconds later. So if you have such an activity planned, make sure you leave enough time for it to become a meaningful connection, and have a strategy for bringing people’s focus back to you.

Another negative thing I see a lot of speakers do is joke about how they didn’t write their talk until the night before or didn’t get to bed until late because they were out drinking. While it’s good to appear vulnerable and human, if played wrong, the message you’re actually sending is that you don’t really care about the audience, so be careful.

How To Get Invited Back

Organizing conferences can be stressful work. You’re trying to coordinate with a bunch of different people with widely different workloads, communication styles, and response times. People are super quick to say “Yes!” to a speaking gig, but then they might go dark for months on end. This is really difficult if you’re trying to get enough information to launch your event site and start selling tickets. It’s even harder if you’re trying to organize things like travel and accommodation while the price of everything keeps going up.

As such, you can make conference organizers’ lives a lot easier by responding to their emails in a timely manner, sending them your talk descriptions, bio information, and headshots when asked, and confirming or booking your travel far enough in advance that the prices don’t double.

Speakers who are a pleasure to work with get recommended and invited back. Speakers who don’t respond to emails, send in overly short descriptions, or leave booking travel to the last minute often don’t. In fact, I’m a member of several conference organizer Slack groups where this sort of behavior is regularly discussed, causing invites for certain speakers to dry up quickly. This is, sadly, another reason why you see the same speakers talking at events time and again: not necessarily because they’re the best speakers, but because they’re reliable and don’t give the organizers a heart attack.

Conclusion

I know this article has covered a lot of public speaking dos and don’ts. But before wrapping things up, it’s worth mentioning that speaking is also a lot of fun. It’s a great way to attend conferences you might not otherwise have been able to afford; you get to meet speakers whose work you might have been following for years, and you learn a ton of new things. It also provides a great sense of personal accomplishment, being able to share what you’ve learned with others and “pay it forward.”

And while it’s easy to assume that all speakers are extroverts, speaking is actually surprisingly good for introverts, too. A lot of people find it quite awkward to navigate conferences, go up to strangers, and make small talk. Speaking pretty much solves that problem, as people immediately have something they can talk to you about, so it’s super fun walking around after your talk and chatting with people.

All that being said, please don’t feel pressured into becoming a speaker. A lot of people think they need to add “conference speaker” to their LinkedIn bio in order to advance their careers. However, some of the best, most successful people I know in our industry don’t speak at public events at all, so staying off the stage is definitely not an impediment.

But if you do want to start sharing your experience with others, now is a good time. Sure, the number of in-person conferences has dropped since the start of the pandemic, but the ones that are still around are desperate to find new, interesting voices from a diverse set of backgrounds. So if it’s something you’re keen to explore, why not put yourself out there? You’ll have nothing to lose and potentially a lot to gain.

Further Reading

Here are a few additional resources on the topic of speaking at conferences. I hope you will find something useful there, too.

  • “Breaking into the speaker circuit,” by Andy Budd
    Here are some more thoughts on breaking into public speaking by yours truly. As somebody who both organizes and often speaks at events, I’ve got a good insight into the workings of the conference circuit. This is probably why I regularly get emails from people looking for advice on breaking into the speaking circuit. So rather than repeating the same advice via email, I thought I’d write a quick article I could point people to.
  • “Confessions of a Public Speaker,” by Scott Berkun
    In this highly practical book, author and professional speaker Scott Berkun reveals the techniques behind what great communicators do and shows how anyone can learn to use them well. For managers, teachers, and anyone else who talks and expects someone to listen, Confessions of a Public Speaker provides an insider’s perspective on how to effectively present ideas to anyone. It’s a unique and instructional romp through the embarrassments and triumphs Scott has experienced over fifteen years of speaking to audiences of all sizes.
  • “Demystifying Public Speaking,” by Lara Hogan (A Book Apart), with a foreword by Eric Meyer
    Whether you’re bracing for a conference talk or a team meeting, Lara Hogan helps you identify your fears and face them so that you can make your way to the stage, big or small.
  • “Slide:ology: The Art and Science of Presentation Design,” by Nancy Duarte
    No matter where you are on the organizational ladder, the odds are high that you’ve delivered a high-stakes presentation to your peers, your boss, your customers, or the general public. Presentation software is one of the few tools that requires professionals to think visually on an almost daily basis. But unlike verbal skills, effective visual expression is not easy, natural, or actively taught in schools or business training programs. This book fills that void.
  • “Presentation Zen: Simple Ideas on Presentation Design and Delivery,” by Garr Reynolds
    In this newly revised edition of his classic book, best-selling author and popular speaker Garr Reynolds shows readers that there is a better way to reach an audience, through simplicity and storytelling, and gives them the tools to confidently design and deliver successful presentations.
  • “How To — Public Speaking,” a video talk by Ze Frank
    You may benefit a lot from this video that Ze Frank made several years ago.
  • “Getting Started In Public Speaking: Global Diversity CFP Day,” by Rachel Andrew (Smashing Magazine)
    The Global Diversity CFP Day (Call For Proposals, sometimes also known as a Call For Papers) aims to help more people submit their ideas to conferences and get into public speaking. In this article, Rachel Andrew rounds up some of the best takeaways, along with other useful resources for new speakers.
  • “Getting The Most Out Of Your Web Conference Experience,” by Jeremy Girard (Smashing Magazine)
    To be a web professional is to be a lifelong learner, and the ever-changing landscape of our industry requires us to continually update and expand our knowledge so that our skills do not become outdated. One of the ways we can continue learning is by attending professional web conferences. But with so many seemingly excellent events to choose from, how do you decide which is right for you?
  • “Don’t Pay To Speak At Commercial Events,” by Vitaly Friedman (Smashing Magazine)
    The state of commercial web conferences is utterly broken. Despite hefty ticket prices, a widely spread, toxic culture lurks behind the scenes of such events. And more often than not, speakers bear all of their conference-related expenses, flights, and accommodation out of their own pockets. This isn’t right and shouldn’t be acceptable in our industry.
  • “How to make meaningful connections at in-person conferences,” by Grace Ling (Figma Config)
    This is an excellent Twitter post about how to make meaningful connections at in-person conferences — a few concise, valuable, and practical tips.