
Impact of AI and Cloud Computing on the Future of Finance

May 6th, 2024

Have you ever wondered whether your money will be managed by AI rather than a bank? What if your bank doesn’t exist in any physical place, just on some massive supercomputer thousands of kilometers away? This might happen someday, so let’s see how!

In this article, we will examine the meaning of AI and cloud computing and how they currently influence and will transform the future of finance. 

We will investigate probable challenges and explore detailed case studies, such as JP Morgan, Goldman Sachs, and Citigroup, illustrating how AI and cloud computing, growing at CAGRs of 16.40% (2024-2029) and 28.46% (2024-2030) respectively, will drive innovation and the possibility of a dazzling global financial future.

Overview of AI and Cloud Computing:

Before looking at the future, let’s look at what AI and cloud computing are and how they relate to finance. AI stands for Artificial Intelligence; in a nutshell, it means “teaching computers to think and learn on their own.” Instead of just following a set of fixed instructions, AI helps computers analyze data, understand patterns, and make decisions based on that information. The AI market is projected to reach a volume of US$826.70bn by 2030, indicating that its extensive reach into finance is inevitable.

Cloud computing, on the other hand, means the on-demand delivery of computing services, such as servers, storage, and databases, over the internet. It offers these services at high speed, at low cost, and with time flexibility. With a projected market of 1.44 trillion USD by 2029, cloud computing is poised to permeate the finance world.

In finance, AI and cloud computing are interdependent: cloud computing provides the infrastructure on which AI functions, while AI enhances cloud services with advanced analytics and decision-making support.

Demystifying the Impacts of AI and Cloud Computing on Finance

In this section, we discuss the influence of AI and cloud services on finance and how they will affect its future. With insight into concepts like predictive analytics, fraud detection, and algorithmic trading, we will see how AI and cloud computing contribute to each.

  1. Personalized Financial Services And Cost Regulation 

Personalization in finance means delivering financial services and products that meet individual customers’ distinctive needs and choices. The customer data involved is immense, requiring cost-efficient, high-capacity storage, and tailoring services effectively at that scale is only practical with AI models.

AI:

AI-powered chatbots automate the query-and-response process of many finance apps and websites, saving time and money for organizations and thereby cutting operational costs. AI also answers queries, guides customers through financial processes, and offers recommendations based on the user’s history and patterns.

CC:

DeFi (Decentralized Finance) is a great example of personalization in finance. It eliminates intermediaries and utilizes decentralized networks, keeping costs low and distributed. Moreover, cloud computing assists in storing and processing enormous amounts of customer data. Personalized financial services include digital financial advising, investment and expenditure planning, savings frameworks, and many more, all striving for better customer satisfaction.

  2. Self-Operation of Financial Processes

Integrating AI and cloud computing has revolutionized traditional financial processes by self-operating repetitive tasks. Automation has been the backbone of AI, and with the help of cloud services, it aims to achieve greater heights in finance.

AI: 

The global artificial intelligence market was valued at $136.55 billion in 2022. AI techniques such as Robotic Process Automation (RPA), together with models like linear and logistic regression, can be trained to deliver outstanding results, reducing the need for manual human intervention and automating data entry, transaction reconciliation, financial reporting, and compliance documentation.

CC:

Cloud computing provides and maintains the infrastructure needed to deploy and scale AI-powered automation, enabling financial institutions to streamline operations and reduce costs.

  3. Fraud Detection and Security

Fraud in finance is frequent and prevalent, as the infamous WorldCom scandal, Ponzi schemes, and many other cases show. Security has been a long-standing issue since the start of finance around 3000 BC. Progressive technology has brought many revolutionary safeguards, but advanced technology has also increased the risk of breaches.

AI:  

AI-powered fraud detection systems analyze patterns, irregularities, and suspicious behaviors in users’ financial data to investigate potential fraud cases. “AI makes fraud detection faster, more reliable, and more efficient where traditional fraud-detection models fail.” AI’s contribution to cybersecurity has been growing rapidly, with applications like threat detection, vulnerability assessment, and risk management.

CC: 

Cloud services protect stored confidential and sensitive financial data from unauthorized access and cyber-attacks. Furthermore, the latest improvements in the cloud allow AI fraud detection systems to function more efficiently and effectively.

  4. Predictive Analytics and Decision Making

In finance, prediction is everything: which stocks will go up or down, how much a trade might gain or lose, or which company will crash. Organizations have recently integrated AI into cloud services to make these predictions, finding future trends for their customers and clients from historical and real-time data. We will see how these two technologies help in predictive analytics and decision-making.

AI: 

AI analyzes customer data to predict future behaviors. Various financial institutions use AI-driven predictive databases, which power applications like portfolio management, credit risk assessment, loan underwriting, and customer segmentation according to demographics and behavior.

CC: 

Cloud servers store huge volumes of data, and quick access to that information helps institutions decide faster. The decision-making process is accelerated by real-time data analysis, on-demand scalability, and accessibility. With heavy investment, cloud computing will amplify these factors in the coming years, enabling more data-driven decisions.

  5. Shaping Future Banking Services and Customer Experience

Whether it is AI’s highly effective systems or the cloud’s vast storage, users want ease of access and comfort in the app or program. Services play a pivotal role in shaping the future of finance, and the collaboration of cloud offerings and AI automation helps improve banking services and customer experience.

AI: 

Customer experience in finance includes all interactions between the company and the customer. AI markedly improves these interactions with its smooth, fast-learning intelligence, deploying chatbots, virtual assistants, and recommendation engines. The automation and data extraction done by AI models help shape the future of financial services.

CC:

Cloud computing’s scalability facilitates deploying AI-driven solutions for organizations and users. Its speed delivers secure, consistent cloud services to AI algorithms for better customer engagement, and the cloud can help financial institutions foster loyalty and drive business success.

  6. Algorithmic Trading and Risk Management

The global algorithmic trading market is projected to grow from $2.19 billion in 2023 to $3.56 billion by 2030. With such potential, uncertainties and threats also become imminent. Unifying these technologies thus provides a seamless experience for algorithmic trading and for managing the associated risks.

AI: 

AI algorithms and models analyze market data, spot trading opportunities, and gauge market sentiment for algorithmic trading. Machine learning techniques learn from data and adapt to changing conditions at high speed and frequency. AI also enhances risk management through real-time analytics, predictive modeling, and scenario forecasting. Various risks, such as market, credit, and operational risks, are identified, assessed, and mitigated in a timely manner.

CC: 

Cloud computing provides global infrastructure, compliance, and security. It also streamlines complex trading algorithms, making trading and risk management more effective and scalable. Many risks in finance, such as data security, disaster recovery, and global accessibility, can be neutralized by cloud computing, which also provides the storage that risk management processes require.

Current Implications of AI and Cloud Services in Finance

JP Morgan Chase 

Numerous companies use AI in finance for various applications, including fraud detection and risk management; one of them is JPMorgan Chase. AI and machine learning help JPMorgan assist employees, speed up responses, and serve clients. OmniAI is its in-house innovation; it extracts insights from huge piles of data and creates data-driven value for clients and customers. CEO Jamie Dimon has said that AI will improve employees’ quality of life, potentially cutting the work week to three and a half days for some.

Goldman Sachs

Goldman Sachs says that generative AI “could raise global GDP by 7%”. GS is using AI with a different approach: it utilizes AI to generate and test code, making developers’ work easier, and uses cloud infrastructure for quantitative trading, investment management, and enhancing operational efficiency.

Citigroup

Meanwhile, Citigroup uses AI for predictive analytics on large volumes of data, and the cloud lets it run algorithmic trading (a program that follows a set of instructions to place trades all by itself). The company plans to modernize its systems using AI, at a cost of millions of dollars, according to Stuart Riley (Citi’s CCIO).
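As a purely illustrative aside, the “set of instructions” behind such a program can be sketched in a few lines of Python. The moving-average crossover rule below is a textbook toy example, not how Citigroup or any bank actually trades; the function name, window sizes, and signal format are invented for this sketch.

```python
def crossover_signals(prices, short=3, long=5):
    """Toy trading rule: emit "buy" when the short moving average
    crosses above the long one, and "sell" on the opposite cross.
    Purely illustrative; real systems weigh many more signals."""
    def sma(i, n):
        # Simple moving average of the n prices ending at index i
        return sum(prices[i - n + 1 : i + 1]) / n

    signals = []
    for i in range(long, len(prices)):
        prev = sma(i - 1, short) - sma(i - 1, long)
        curr = sma(i, short) - sma(i, long)
        if prev <= 0 < curr:
            signals.append((i, "buy"))
        elif prev >= 0 > curr:
            signals.append((i, "sell"))
    return signals
```

Fed a flat price series that starts rising, the rule emits a single “buy” at the first crossover and stays silent afterwards.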

Other financial giants like Ant Group and HSBC use AI and On-Demand Computing to provide anti-money laundering and wealth management services.

Exploring Probable Challenges and Adaptive Strategies

AI and cloud computing have a bright future, but bright light can sometimes be harmful. In this section, we will look at probable challenges that can arise with the onset of AI and cloud services, and at tactical solutions to them.

  1. Data Privacy and Security Issues

Data breaches are a significant security concern. With ever-evolving technology, new ways of hacking and breaching have also come into existence. Storing Personally Identifiable Information (PII) and confidential data in the cloud exposes it to unauthorized access, data breaches, and cyber-attacks. Better security and accountability can mitigate these concerns.

  2. Ethical Risks and Social Issues

AI usage raises ethical concerns, including biases caused by biased input data. AI may also replace jobs with computers, servers, and algorithms, which can create socio-economic disparities. Senior management should take accountability for AI algorithms, as algorithmic mistakes can wreak wide-ranging havoc.

  3. Cost Management and ROI

While AI and cloud computing services offer potential cost savings and operational efficiencies, managing infrastructure, licensing fees, and talent acquisition costs can be challenging. As the finance industry booms, heavy investment in managing cloud server infrastructure is a prerequisite. Financial institutions that use AI must assess the return on investment to keep a clear and concise track of expenditures and revenues.

  4. Connectivity

Connectivity is a necessity for the effective use of cloud computing. Without a proper internet connection, the services (Infrastructure-as-a-Service, Platform-as-a-Service, and Software-as-a-Service) will be compromised, resulting in massive outages of cloud functions. Ensuring consistent internet connectivity throughout the system is essential for the smooth running of AI algorithms in finance.

Addressing these challenges requires an excellent technical team, risk management professionals, and strong top-level leadership. With progressive technology and improved security measures, financial institutions can utilize AI and cloud computing to their fullest, ensuring low costs and high reliability while earning clients’ and customers’ trust.

Conclusion

In conclusion, we stand on the verge of a new era of finance powered by AI and cloud computing, one arriving at astounding speed. By harnessing the potential of these ubiquitous and transformative technologies, we can lead the way in creating a better and more economically driven world.

Financial institutions’ involvement and collaboration will grow as the leaders blaze a trail for other organizations to follow. Continuous learning and innovation will give early adopters a competitive advantage.

These technologies will show us the new face of finance through cautious growth, responsible accountability, and rectifying probable challenges.



Exciting New Tools for Designers, May 2024

May 6th, 2024

This year, we’ve seen a wave of groundbreaking apps and tools. AI is reshaping the industry, enhancing productivity, and helping us work smarter, not harder.


How To Harness Mouse Interaction Data For Practical Machine Learning Solutions

May 6th, 2024

Mouse data is a subcategory of interaction data, a broad family of data about users generated as the immediate result of human interaction with computers. Its siblings from the same data family include logs of key presses or page visits. Businesses commonly rely on interaction data, including the mouse, to gather insights about their target audience. Unlike data that you could obtain more explicitly, let’s say via a survey, the advantage of interaction data is that it describes the actual behavior of actual people.

Collecting interaction data is completely unobtrusive since it can be obtained even as users go about their daily lives as usual, meaning it is a quantitative data source that scales very well. Once you start collecting it continuously as part of regular operation, you do not even need to do anything, and you’ll still have fresh, up-to-date data about users at your fingertips — potentially from your entire user base, without them even needing to know about it. Having data on specific users means that you can cater to their needs more accurately.

Of course, mouse data has its limitations. It simply cannot be obtained from people using touchscreens or those who rely on assistive tech. But if anything, that should not discourage us from using mouse data. It just illustrates that we should look for alternative methods that cater to the different ways that people interact with software. Among these, the mouse just happens to be very common.

When using the mouse, the mouse pointer is the de facto conduit for the user’s intent in a visual user interface. The mouse pointer is basically an extension of your arm that lets you interact with things in a virtual space that you cannot directly touch. Because of this, mouse interactions tend to be data-intensive. Even the simple mouse action of moving the pointer to an area and clicking it can yield a significant amount of data.

Mouse data is granular, even when compared with other sources of interaction data, such as the history of visited pages. However, with machine learning, it is possible to investigate jumbles of complicated data and uncover a variety of complex behavioral patterns. It can reveal more about the user holding the mouse without needing to provide any more information explicitly than normal.

For starters, let us venture into what kind of information can be obtained by processing mouse interaction data.

What Are Mouse Dynamics?

Mouse dynamics refer to the features that can be extracted from raw mouse data to describe the user’s operation of a mouse. Mouse data by itself corresponds with the simple mechanics of mouse controls. It consists of mouse events: the X and Y coordinates of the cursor on the screen, mouse button presses, and scrolling, each dated with a timestamp. Despite the innate simplicity of the mouse events themselves, the mouse dynamics using them as building blocks can capture user’s behavior from a diverse and emergently complex variety of perspectives.

If you are concerned about user privacy, as well you should be, mouse dynamics are also your friend. For the calculation of mouse dynamics to work, raw mouse data does not need to inherently contain any details about the actual meaning of the interaction. Without the context of what the user saw as they moved their pointer around and clicked, the data is quite safe and harmless.

Some examples of mouse dynamics include measuring the velocity and the acceleration at which the mouse cursor is moving or describing how direct or jittery the mouse trajectories are. Another example is whether the user presses and lets go of the primary mouse button quickly or whether there is a longer pause before they release their press. Four categories of over twenty base measures can be identified: temporal, spatial, spatial-temporal, and performance. Features do not need to be just metrics either, with other approaches using a time series of mouse events.

Temporal mouse dynamics:

  • Movement duration: The time between two clicks;
  • Response time: The time it takes to click something in response to a stimulus (e.g., from the moment when a page is displayed);
  • Initiation time: The time it takes from an initial stimulus for the cursor to start moving;
  • Pause time: The time measuring the cursor’s period of idleness.

Spatial mouse dynamics:

  • Distance: Length of the path traversed on the screen;
  • Straightness: The ratio between the traversed path and the optimal direct path;
  • Path deviation: Perpendicular distance of the traversed path from the optimal path;
  • Path crossing: Counted instances of the traversed and optimal path intersecting;
  • Jitter: The ratio of the traversed path length to its smoothed version;
  • Angle: The direction of movement;
  • Flips: Counted instances of change in direction;
  • Curvature: Change in angle over distance;
  • Inflection points: Counted instances of change in curvature.

Spatial-temporal mouse dynamics:

  • Velocity: Change of distance over time;
  • Acceleration: Change of velocity over time;
  • Jerk: Change of acceleration over time;
  • Snap: Change in jerk over time;
  • Angular velocity: Change in angle over time.

Performance mouse dynamics:

  • Clicks: The number of mouse button events pressing down or up;
  • Hold time: Time between mouse down and up events;
  • Click error: Length of the distance between the clicked point and the correct user task solution;
  • Time to click: Time between the hover event on the clicked point and the click event;
  • Scroll: Distance scrolled on the screen.

Note: For detailed coverage of varied mouse dynamics and their extraction, see the paper “Is mouse dynamics information credible for user behavior research? An empirical investigation.”
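To make the extraction of such features concrete, here is a minimal, stdlib-only Python sketch that computes a few of the dynamics listed above (distance, straightness, movement duration, and mean velocity) from one raw trajectory. The function name and the (timestamp, x, y) event format are our own assumptions for illustration, not from any particular library.

```python
import math

def mouse_dynamics(events):
    """Derive a few mouse dynamics from one cursor trajectory.

    `events` is a time-ordered list of (timestamp_seconds, x, y)
    tuples; this format is an assumption made for this sketch.
    """
    (t0, x0, y0), (tn, xn, yn) = events[0], events[-1]
    # Spatial: length of the path traversed on the screen
    path = sum(math.dist(events[i][1:], events[i + 1][1:])
               for i in range(len(events) - 1))
    # Straightness: traversed path over the optimal direct path (>= 1)
    direct = math.dist((x0, y0), (xn, yn))
    straightness = path / direct if direct else float("inf")
    # Temporal: movement duration; spatial-temporal: mean velocity
    duration = tn - t0
    velocity = path / duration if duration else 0.0
    return {"distance": path, "straightness": straightness,
            "duration": duration, "velocity": velocity}
```

In practice, a full feature vector would compute many such measures per trajectory segment and feed them all to the model.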

The spatial angular measures cited above are a good example of how the calculation of specific mouse dynamics can work. The direction angle of the movements between points A and B is the angle between the vector AB and the horizontal X axis. Then, the curvature angle in a sequence of points ABC is the angle between vectors AB and BC. Curvature distance can be defined as the ratio of the distance between points A and C and the perpendicular distance between point B and line AC. (Definitions sourced from the paper “An efficient user verification system via mouse movements.”)
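Translated into code, those three definitions might look as follows. This is a small stdlib-only Python sketch with points given as (x, y) tuples; the function names are ours, chosen for illustration.

```python
import math

def direction_angle(a, b):
    """Angle of the vector AB relative to the horizontal X axis."""
    return math.atan2(b[1] - a[1], b[0] - a[0])

def curvature_angle(a, b, c):
    """Angle between vectors AB and BC in a sequence of points ABC."""
    ab = (b[0] - a[0], b[1] - a[1])
    bc = (c[0] - b[0], c[1] - b[1])
    dot = ab[0] * bc[0] + ab[1] * bc[1]
    norm = math.hypot(*ab) * math.hypot(*bc)
    # Clamp to [-1, 1] to guard against floating-point drift
    return math.acos(max(-1.0, min(1.0, dot / norm)))

def curvature_distance(a, b, c):
    """Ratio of |AC| to the perpendicular distance of B from line AC."""
    ac = math.dist(a, c)
    # Perpendicular distance = twice the triangle area divided by |AC|
    area2 = abs((c[0] - a[0]) * (b[1] - a[1])
                - (c[1] - a[1]) * (b[0] - a[0]))
    perp = area2 / ac
    return ac / perp if perp else float("inf")
```

For the right-angle sequence A=(0,0), B=(1,0), C=(1,1), the curvature angle is 90 degrees, as you would expect.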

Even individual features (e.g., mouse velocity by itself) can be delved into deeper. For example, on pages with a lot of scrolling, horizontal mouse velocity along the X-axis may be more indicative of something capturing the user’s attention than velocity calculated from direct point-to-point (Euclidean) distance in the screen’s 2D space. The maximum velocity may be a good indicator of anomalies, such as user frustration, while the mean or median may tell you more about the user as a person.

From Data To Tangible Value

The introduction of mouse dynamics above, of course, is an oversimplification for illustrative purposes. Just by looking at the physical and geometrical measurements of users’ mouse trajectories, you cannot yet tell much about the user. That is the job of the machine learning algorithm. Even features that may seem intuitively useful to you as a human (see examples cited at the end of the previous section) can prove to be of low or zero value for a machine-learning algorithm.

Meanwhile, a deceptively generic or simplistic feature may turn out unexpectedly quite useful. This is why it is important to couple broad feature generation with a good feature selection method, narrowing the dimensionality of the model down to the mouse dynamics that help you achieve good accuracy without overfitting. Some feature selection techniques are embedded directly into machine learning methods (e.g., LASSO, decision trees) while others can be used as a preliminary filter (e.g., ranking features by significance assessed via a statistical test).
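The preliminary-filter idea can be sketched with a toy ranking function. The class-separation score below (difference of per-class means relative to the feature's range) is a deliberately crude stand-in for a proper statistical test, but the shape of the step is the same: score every candidate feature, then keep only the top-ranked ones as model input.

```python
def rank_features(samples, labels):
    """Rank feature columns by a crude class-separation score.

    `samples` is a list of equal-length feature vectors and `labels`
    a parallel list of 0/1 class labels. Each feature is scored by
    the absolute difference of its per-class means divided by its
    overall range; higher means better separation between classes.
    """
    def mean(xs):
        return sum(xs) / len(xs)

    scores = []
    for j in range(len(samples[0])):
        col = [s[j] for s in samples]
        pos = [v for v, y in zip(col, labels) if y == 1]
        neg = [v for v, y in zip(col, labels) if y == 0]
        spread = (max(col) - min(col)) or 1.0  # guard constant columns
        scores.append((abs(mean(pos) - mean(neg)) / spread, j))
    # Best-separating feature indices first; keep the top k for the model
    return [j for _, j in sorted(scores, reverse=True)]
```

A real pipeline would substitute an established test (e.g., ANOVA F-scores) or an embedded method, but the ranking-and-truncating structure carries over.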

As we can see, there is a sequential process to transforming mouse data into mouse dynamics, into a well-tuned machine learning model to field its predictions, and into an applicable solution that generates value for you and your organization. This can be visualized as the pipeline below.

Machine Learning Applications Of Mouse Dynamics

To set the stage, we must realize that companies aren’t really known for letting go of their competitive advantage by divulging the ins and outs of what they do with the data available to them. This is especially true when it comes to tech giants with access to potentially some of the most interesting datasets on the planet (including mouse interaction data), such as Google, Amazon, Apple, Meta, or Microsoft. Still, recording mouse data is known to be a common practice.

With a bit of grit, you can find some striking examples of the use of mouse dynamics, not to mention a surprising versatility in techniques. For instance, have you ever visited an e-commerce site just to see it recommend something specific to you, such as a gendered line of cosmetics — all the while, you never submitted any information about your sex or gender anywhere explicitly?

Mouse data transcends its obvious applications, such as replaying the user’s session or highlighting which visual elements people interact with. A surprising number of the internal and external factors that shape our behavior are reflected in the data as subtle indicators and can thus be predicted.

Let’s take a look at some further applications, starting with simple categorization of users.

Example 1: Biological Sex Prediction

For businesses, knowing users well allows them to provide accurate recommendations and personalization in all sorts of ways, opening the gates for higher customer satisfaction, retention, and average order value. By itself, the prediction of user characteristics, such as gender, isn’t anything new. The reason for basing it on mouse dynamics, however, is that mouse data is generated virtually by the truckload. With that, you will have enough data to start making accurate predictions very early.

If you waited for higher-level interactions, such as which products the user visited or what they typed into the search bar, by the time you’d have enough data, the user may have already placed an order or, even worse, left unsatisfied.

The choice of machine learning algorithm matters for a given problem. In one published scientific paper, six different models were compared for the prediction of biological gender using mouse dynamics. The dataset used for developing and evaluating the models provides mouse dynamics from participants moving the cursor across a broad range of trajectory lengths and directions. Among the evaluated models (Logistic regression, Support vector machine, Random forest, XGBoost, CatBoost, and LightGBM), CatBoost achieved the best F1 score.

Putting people into boxes is far from everything that can be done with mouse dynamics, though. Let’s take a look at a potentially more exciting use case — trying to predict the future.

Example 2: Purchase Prediction

Another e-commerce application predicts whether the user has the intent to make a purchase or even whether they are likely to become a repeat customer. Utilizing such predictions, businesses can adapt personalized sales and marketing tactics to be more effective and efficient, for example, by catering more to likely purchasers to increase their value — or the opposite, which is investigating unlikely purchasers to find ways to turn them into likely ones.

Interestingly, a paper dedicated to the prediction of repeat customership reports that when a gradient boosting model is validated on data obtained from a completely different online store than where it was trained and tuned, it still achieves respectable performance in the prediction of repeat purchases with a combination of mouse dynamics and other interaction and non-interaction features.

It is plausible that though machine-learning applications tend to be highly domain-specific, some models could be used as a starting seed, carried over between domains, especially while still waiting for user data to materialize.

Additional Examples

Applications of mouse dynamics are a lot more far-reaching than just the domain of e-commerce. To give you some ideas, other variables that have been predicted with mouse dynamics include a user's emotional state and even their identity, as in biometric authentication.

The Mouse-Shaped Caveat

When you think about mouse dynamics in-depth, some questions will invariably start to emerge. The user isn’t the only variable that could determine what mouse data looks like. What about the mouse itself?

Many brands and models are available for purchase to people worldwide. Their technical specifications deviate in attributes such as resolution (measured in DPI or, more accurately, CPI), weight, polling rate, and tracking speed. Some mouse devices have multiple profile settings that can be swapped between at will. For instance, the common CPI of an office mouse is around 800-1,600, while a gaming mouse can go to extremes, from 100 to 42,000. To complicate things further, the operating system has its own mouse settings, such as sensitivity and acceleration. Even the surface beneath the mouse can differ in its friction and optical properties.

Can we be sure that mouse data is reliable, given that basically everyone potentially works under different mouse conditions?

For the sake of argument, let’s say that as a part of a web app you’re developing, you implement biometric authentication with mouse dynamics as a security feature. You sell it by telling customers that this form of auth is capable of catching attackers who try to meddle in a tab that somebody in the customer’s organization left open on an unlocked computer. Recognizing the intruder, the app can sign the user out of the account and trigger a warning sent to the company. Kicking out the real authorized user and sounding the alarm just because somebody bought a new mouse would not be a good look. Recalibration to the new mouse would also produce friction. Some people like to change their mouse sensitivity or use different computers quite often, so frequent calibration could potentially present a critical flaw.

We found that up until now, there was barely anything written about whether or how mouse configuration affects mouse dynamics. By mouse configuration, we refer to all properties of the environment that could impact mouse behavior, including both hardware and software.

From the authors of papers and articles about mouse dynamics, there is barely a mention of mouse devices and settings involved in development and testing. This could be seen as concerning. Though hypothetically, there might not be an actual reason for concern, that is exactly the problem. There was just not even enough information to make a judgment on whether mouse configuration matters or not. This question is what drove the study conducted by UXtweak Research (as covered in the peer-reviewed paper in Computer Standards & Interfaces).

The quick answer? Mouse configuration does detrimentally affect mouse dynamics. How?

  1. It may cause the majority of mouse dynamics values to change in a statistically significant way between different mouse configurations.
  2. It may lower the prediction performance of a machine learning model if it was trained on a different set of mouse configurations than it was tested on.

It is not automatically guaranteed that prediction based on mouse dynamics will work equally well for people on different devices. Even the same person making the exact same mouse movements does not necessarily produce the same mouse dynamics if you give them a different mouse or change their settings.

We cannot say for certain how big an impact mouse configuration can have in a specific instance. For the problem that you are trying to solve (specific domain, machine learning model, audience), the impact could be big, or it could be negligible. But to be sure, it should definitely receive attention. After all, even a deceptively small percentage of improvement in prediction performance can translate to thousands of satisfied users.

Tackling Mouse Device Variability

Knowledge is half the battle, and so it is also with the realization that mouse configuration is not something that can be just ignored when working with mouse dynamics. You can perform tests to evaluate the size of the effect that mouse configuration has on your model’s performance. If, in some configurations, the number of false positives and false negatives rises above levels that you are willing to tolerate, you can start looking for potential solutions by tweaking your prediction model.
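Such a test can be as simple as splitting your labeled evaluation data by mouse configuration and comparing error rates across the groups. Below is a stdlib-only Python sketch; the data layout and function name are our own, chosen for illustration.

```python
from collections import defaultdict

def error_rates_by_config(records):
    """Break a model's errors down by mouse configuration.

    `records` is a list of (config_id, y_true, y_pred) triples with
    binary labels (1 = positive class, e.g. "intruder"). Returns the
    false-positive and false-negative rate per configuration, so a
    configuration where the model degrades stands out immediately.
    """
    counts = defaultdict(lambda: {"fp": 0, "fn": 0, "neg": 0, "pos": 0})
    for config, true, pred in records:
        c = counts[config]
        if true == 0:
            c["neg"] += 1
            c["fp"] += pred == 1  # false alarm on a legitimate sample
        else:
            c["pos"] += 1
            c["fn"] += pred == 0  # missed detection
    return {config: {"fpr": c["fp"] / c["neg"] if c["neg"] else 0.0,
                     "fnr": c["fn"] / c["pos"] if c["pos"] else 0.0}
            for config, c in counts.items()}
```

If one configuration's rates climb past your tolerance while the others stay flat, you have localized the problem to that configuration rather than to the model as a whole.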

Because of the potential variability in real-world conditions, differences between mouse configurations can be seen as a concern. Of course, if you can rely on controlled conditions (such as in apps only accessible via standardized kiosks or company-issued computers and mouse devices where all system mouse settings are locked), you can avoid the concern altogether. Given that the training dataset uses the same mouse configuration as the configuration used in production, that is. Otherwise, that may be something new for you to optimize.

Some predicted variables can be observed repeatedly from the same user (e.g., emotional state or intent to make a purchase). In the case of these variables, to mitigate the problem of different users utilizing different mouse configurations, it would be possible to build personalized models trained and tuned on the data from the individual user and the mouse configurations they normally use. You also could try to normalize mouse dynamics by adjusting them to the specific user’s “normal” mouse behavior. The challenge is how to accurately establish normality. Note that this still doesn’t address situations when the user changes their mouse or settings.
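One minimal way to sketch such per-user normalization (stdlib-only Python; the function name and feature choice are illustrative) is to express each new observation as a z-score against that user's own history of the mouse dynamic in question:

```python
import math

def normalize_to_user(history, current):
    """Z-score a new observation against a user's own history.

    `history` is a list of past values of one mouse dynamic (e.g.
    mean velocity per session) for a single user; `current` is the
    newest value. Expressing the value in units of the user's own
    variability is one simple way to define "normal" per user.
    """
    mean = sum(history) / len(history)
    var = sum((v - mean) ** 2 for v in history) / len(history)
    std = math.sqrt(var) or 1.0  # guard a perfectly flat history
    return (current - mean) / std
```

A value near zero then means "typical for this user", regardless of how their raw dynamics compare to anyone else's, though, as noted, this still does not cover a sudden change of mouse or settings.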

Where To Take It From Here

So, we arrive at the point where we discuss the next steps for anyone who can’t wait to apply mouse dynamics to machine learning purposes of their own. For web-based solutions, you can start by looking at MouseEvents in JavaScript, which is how you’ll obtain the elementary mouse data necessary.
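To make the starting point concrete, here is a minimal sketch of collecting raw samples from mouse events. The `createMouseRecorder` helper is hypothetical, not part of any library; in the browser, you would attach its handler via `addEventListener`:

```javascript
// Minimal sketch: buffer (x, y, t) samples from mouse events for
// later feature extraction. The handler only reads clientX/clientY
// and timeStamp, so it works with any MouseEvent-like object.
function createMouseRecorder() {
  const samples = [];
  function handler(event) {
    samples.push({
      x: event.clientX,
      y: event.clientY,
      t: typeof event.timeStamp === "number" ? event.timeStamp : Date.now(),
    });
  }
  // In a browser: document.addEventListener("mousemove", handler);
  return { handler, getSamples: () => samples.slice() };
}
```

In practice, you would also throttle or batch the samples before sending them anywhere, since `mousemove` can fire at very high rates.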

Mouse events will serve as the base for calculating mouse dynamics and the features in your model. Pick any that you think could be relevant to the problem you are trying to solve (see our list above, but don’t be afraid to design your own features). Don’t forget that you can also combine mouse dynamics with domain and application-specific features.
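As one illustration of turning raw samples into features (the exact formulas here are mine, not prescribed by the article), speed-based statistics can be derived from consecutive (x, y, t) points:

```javascript
// Illustrative feature extraction: compute simple speed statistics
// from an ordered array of { x, y, t } mouse samples.
function extractFeatures(samples) {
  const speeds = [];
  for (let i = 1; i < samples.length; i++) {
    const dx = samples[i].x - samples[i - 1].x;
    const dy = samples[i].y - samples[i - 1].y;
    const dt = samples[i].t - samples[i - 1].t;
    if (dt > 0) speeds.push(Math.hypot(dx, dy) / dt); // pixels per ms
  }
  const meanSpeed = speeds.reduce((a, b) => a + b, 0) / (speeds.length || 1);
  const maxSpeed = speeds.length ? Math.max(...speeds) : 0;
  return { meanSpeed, maxSpeed, sampleCount: samples.length };
}
```

The same sliding-pair pattern extends to acceleration, curvature, pause durations, and whatever domain-specific features you design on top.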

Problem awareness is key to designing the right solutions. Is your prediction problem within-subject or between-subject? A classification or a regression? Should you use the same model for your whole audience, or could it be more effective to tailor separate models to the specifics of different user segments?

For example, the mouse behavior of freshly registered users may differ from that of regular users, so you may want to separate them. From there, you can consider a suitable machine/deep learning algorithm. For binary classification, a support vector machine, logistic regression, or a random forest could do the job. To capture more complex patterns, you may wish to reach for a neural network.
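To show the binary-classification idea end to end, here is a toy logistic regression trained with stochastic gradient descent. This is a sketch only; a real project would reach for an ML library rather than hand-rolling the training loop:

```javascript
// Toy binary classifier: logistic regression fit by SGD on log-loss.
// X is an array of feature vectors, y an array of 0/1 labels.
function trainLogistic(X, y, { lr = 0.5, epochs = 500 } = {}) {
  const dim = X[0].length;
  const w = new Array(dim).fill(0);
  let b = 0;
  const sigmoid = (z) => 1 / (1 + Math.exp(-z));
  for (let e = 0; e < epochs; e++) {
    for (let i = 0; i < X.length; i++) {
      const p = sigmoid(X[i].reduce((s, xj, j) => s + xj * w[j], b));
      const err = p - y[i]; // gradient of log-loss w.r.t. the logit
      for (let j = 0; j < dim; j++) w[j] -= lr * err * X[i][j];
      b -= lr * err;
    }
  }
  // Return a predictor mapping a feature vector to a 0/1 label.
  return (x) =>
    sigmoid(x.reduce((s, xj, j) => s + xj * w[j], b)) >= 0.5 ? 1 : 0;
}
```

With mouse-dynamics features as the inputs, the same shape of API applies: train on labeled sessions, then call the predictor on features from new sessions.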

Of course, the best way to uncover which machine/deep learning algorithm works best for your problem is to experiment. Most importantly, don’t give up if you don’t succeed at first. You may need to go back to the drawing board a few times to reconsider your feature engineering, expand your dataset, validate your data, or tune the hyperparameters.

Conclusion

With the ongoing trend of more and more online traffic coming from mobile devices, some futurist voices in tech might have you believe that “the computer mouse is dead.” Nevertheless, reports of its death have been greatly exaggerated. One look at the statistics reveals that while mobile devices are enormously popular, the desktop computer and the computer mouse are not going anywhere anytime soon.

Classifying users as either mobile or desktop is a false dichotomy anyway. Some people prefer the desktop computer for tasks that call for exact controls while interacting with complex information. Working, trading, shopping, or managing finances: all tasks that carry a good amount of importance in people’s lives.

To wrap things up, mouse data can be a powerful source of information for improving digital products and services and gaining a head start on the competition. Advantageously, mouse dynamics data does not need to involve anything sensitive or in breach of the user’s privacy. Even without identifying the person, machine learning with mouse dynamics can shine a light on the user, letting you serve them better-fitting personalization and recommendations, even when other data is sparse. Other uses include biometrics and analytics.

Account for the impact of differences in mouse devices and settings rather than underestimating it, and you may arrive at useful and innovative mouse-dynamics-driven solutions that help you stand out.

Categories: Others Tags:

Combining CSS :has() And HTML <select> For Greater Conditional Styling

May 2nd, 2024 No comments

Even though the CSS :has() pseudo-class is relatively new, we already know a lot about it, thanks to many, many articles and tutorials demonstrating its powerful ability to conditionally select elements based on their contents. We’ve all seen the card component and header examples, but the conditional nature of :has() actually makes it adept at working with form controls, which are pretty conditional in nature as well.

Let’s look specifically at the <select> element. With it, we can make a choice from a series of <option>s. Combined with :has(), we are capable of manipulating styles based on the selected <option>.

<select>
  <option value="1" selected>Option 1</option>
  <option value="2">Option 2</option>
  <option value="3">Option 3</option>
  <option value="4">Option 4</option>
  <option value="5">Option 5</option>
</select>

This is your standard usage, producing a dropdown menu that contains options for user selection. And while it’s not mandatory, I’ve added the selected attribute to the first <option> to set it as the initial selected option.

Applying styles based on a user’s selection is not a new thing. We’ve had the Checkbox Hack in our pockets for years, using the :checked CSS pseudo-class to style elements based on the selected option. In this next example, I’m changing the <select> element’s color and background-color properties based on the selected <option>.

See the Pen demo 01 – Using the :has selector on a dropdown menu by Amit Sheen.

But that’s limited to styling the current element, right? If a particular <option> is :checked, then we style it. We can write a more complex selector and style child elements based on whether an <option> is selected up the chain, but that’s a one-way road in that we are unable to style parent elements further up the chain.

That’s where :has() comes in because styling up the chain is exactly what it is designed to do; in fact, it’s often called the “parent selector” for this reason (although “family selector” may be a better descriptor).

For example, if we want to change the background-color of the <select> element according to the value of the selected <option>, we select the <select> element if it has a specific [value] that is :checked.
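In code, that might look like the following (the specific value and color are illustrative, not taken from the demo):

```css
/* Tint the <select> itself when the option with value="2" is chosen */
select:has(option[value="2"]:checked) {
  background-color: palegreen;
}
```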

See the Pen demo 02 – Using the :has selector on a dropdown menu by Amit Sheen.

Just how practical is this? One way I’m using it is to style mandatory <select> elements without a valid selected <option>. So, instead of applying styles if the <select> element :has() a :checked state, I am applying styles if the required <select> element does :not(:has(:checked)).
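A sketch of the validation idea, assuming the markup uses an empty-value placeholder as the first <option> (the exact selector depends on your markup):

```css
/* Flag a required <select> while the empty-value placeholder
   option is still the checked one */
select[required]:has(option[value=""]:checked) {
  border-color: crimson;
}
```

Note that a <select> always has some option checked by default, which is why matching on the placeholder’s value (rather than on the absence of any :checked option) is the reliable pattern here.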

See the Pen demo 02.1 – Using the :has selector on a dropdown menu by Amit Sheen.

But why stop there? If we can use :has() to style the <select> element as the parent of an <option>, then we can also use it to style the parent of the <select>, as well as its parent, in addition to its parent, and even its parent… all the way up the chain to the :root element. We could even bring :has() all the way up the chain and sniff out whether any child of the document :root :has() a particular <option> that is :checked:

:root:has(select [value="foo"]:checked) {
  /* Styles applied if <option value="foo"> is <select>-ed */
}

This is useful for setting a custom property value dynamically or applying a set of styles for the whole page. Let’s make a little style picker that illustrates the idea of setting styles on an entire page.
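A reduced sketch of that idea (the class name, option value, and custom properties are mine, not from the demo):

```css
/* Set page-wide custom properties from the chosen option */
:root:has(select.theme-picker option[value="dark"]:checked) {
  --page-bg: #111;
  --page-fg: #eee;
}

body {
  background: var(--page-bg, white);
  color: var(--page-fg, black);
}
```

Because the rule is scoped to :root, every element that consumes the custom properties updates the moment the selection changes, with no JavaScript involved.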

See the Pen demo 03 – Using the :has selector on a dropdown menu by Amit Sheen.

Or perhaps a theme picker:

See the Pen demo 04 – Using the :has selector on a dropdown menu by Amit Sheen.

How that last example works is that I added a class to each <select> element and referenced that class inside the :has() selector in order to prevent unwanted selections in the event that there are multiple <select> elements on the page.

And, of course, we don’t have to go all the way up to the :root element. If we’re working with a specific component, we can scope :has() to that component like in the following demo of a star rating component.

See the Pen demo 05 – Using the :has selector on a dropdown menu by Amit Sheen.

Watch a short video tutorial I made on using CSS to create 3D animated stars.

Conclusion

We’d be doing :has() a great disservice if we only saw it as a “parent selector” rather than the great conditional operator it is for applying styles all the way up the chain. Seen this way, it’s more of a modern upgrade to the Checkbox Hack in that it sends styles up like we were never able to do before.

There are endless examples of using :has() to create style variations of a component according to its contents. We’ve even seen it used to accomplish the once-complicated linked card pattern. But now you have an example of using it to create dropdown menus that conditionally apply styles (or don’t) to a page or component based on the currently selected option — depending on how far up the chain we scope it.

I’ve used this technique a few different ways — e.g., as form validation, a style picker, and star ratings — but I’m sure there are plenty of other ways you can imagine how to use it in your own work. And if you are using :has() on a <select> element for something different or interesting, let me know because I’d love to see it!

Further Reading On SmashingMag

Categories: Others Tags:

Using AI to Predict Design Trends

May 1st, 2024 No comments

Design trends evolve at a blistering pace, especially in web design. On multi-month projects, you might work on a cutting-edge design after the kick-off meeting, only to launch a dated-looking site.

Categories: Designing, Others Tags:

Longing For May (2024 Wallpapers Edition)

April 30th, 2024 No comments

Inspiration lies everywhere, and, as a matter of fact, we have discovered one of the best ways to spark new ideas: desktop wallpapers. For more than 13 years already, we have been challenging you, our dear readers, to put your creative skills to the test and create wallpaper calendars for our monthly wallpapers posts. No matter if you’re into illustration, lettering, or photography, the wallpapers series is the perfect opportunity to get your ideas flowing and create a small artwork to share with people all around the world. Of course, it wasn’t any different this month.

In this post, you’ll find desktop wallpapers created by artists and designers who took on the creativity challenge. They come in versions with and without a calendar for May 2024 and can be downloaded for free. As a little bonus goodie, we also compiled a selection of favorites from our wallpapers archives at the end of the post. Maybe you’ll spot one of your almost-forgotten favorites from the past in here, too? A big thank-you to everyone who shared their designs with us this month! Happy May!

  • You can click on every image to see a larger preview.
  • We respect and carefully consider the ideas and motivation behind each and every artist’s work. This is why we give all artists the full freedom to explore their creativity and express emotions and experience through their works. This is also why the themes of the wallpapers weren’t in any way influenced by us but rather designed from scratch by the artists themselves.
  • Submit a wallpaper!
    Did you know that you could get featured in our next wallpapers post, too? We are always looking for creative talent.

A Symphony Of Dedication On Labour Day

“On Labour Day, we celebrate the hard-working individuals who contribute to the growth of our communities. Whether in busy urban areas or peaceful rural settings, this day recognizes the unsung heroes driving our advancement. Let us pay tribute to the workers, craftsmen, and visionaries shaping our shared tomorrow.” — Designed by PopArt Studio from Serbia.

Navigating The Amazon

“We are in May, the spring month par excellence, and we celebrate it in the Amazon jungle.” — Designed by Veronica Valenzuela Jimenez from Spain.

Popping Into Spring

“Spring has sprung, and what better metaphor than toast popping up and out of a fun-colored toaster!” — Designed by Stephanie Klemick from Emmaus Pennsylvania, USA.

Duck

Designed by Madeline Scott from the United States.

Cruising Into Spring

“When I think of spring, I think of finally being able to drive with the windows down and enjoying the fresh air!” — Designed by Vanessa Mancuso from the United States.

Lava Is In The Air

Designed by Ricardo Gimenes from Sweden.

Love Myself

Designed by Design-Studio from India.

Bat Traffic

Designed by Ricardo Gimenes from Sweden.

Springtime Sips

“May is a month where the weather starts to warm and reminds us summer is approaching, so I created a bright cocktail-themed wallpaper since sipping cocktails in the sun is a popular warm weather activity!” — Designed by Hannah Coates from Baltimore, MD.

Hello May

“The longing for warmth, flowers in bloom, and new beginnings is finally over as we welcome the month of May. From celebrating nature on the days of turtles and birds to marking the days of our favorite wine and macarons, the historical celebrations of the International Workers’ Day, Cinco de Mayo, and Victory Day, to the unforgettable ‘May the Fourth be with you’. May is a time of celebration — so make every May day count!” — Designed by PopArt Studio from Serbia.

ARRR2-D2

Designed by Ricardo Gimenes from Sweden.

May Your May Be Magnificent

“May should be as bright and colorful as this calendar! That’s why our designers chose these juicy colors.” — Designed by MasterBundles from Ukraine.

The Monolith

Designed by Ricardo Gimenes from Sweden.

Blooming May

“In spring, especially in May, we all want bright colors and lightness, which was not there in winter.” — Designed by MasterBundles from Ukraine.

The Mushroom Band

“My daughter asked me to draw a band of mushrooms. Here it is!” — Designed by Vlad Gerasimov from Georgia.

Poppies Paradise

Designed by Nathalie Ouederni from France.

Lake Deck

“I wanted to make a big painterly vista with some mountains and a deck and such.” — Designed by Mike Healy from Australia.

Make A Wish

Designed by Julia Versinina from Chicago, USA.

Enjoy May!

“Springtime, especially Maytime, is my favorite time of the year. And I like popsicles — so it’s obvious isn’t it?” — Designed by Steffen Weiß from Germany.

Celestial Longitude Of 45°

“Lixia is the 7th solar term according to the traditional East Asian calendars, which divide a year into 24 solar terms. It signifies the beginning of summer in East Asian cultures. Usually begins around May 5 and ends around May 21.” — Designed by Hong, Zi-Cing from Taiwan.

Stone Dahlias

Designed by Rachel Hines from the United States.

Understand Yourself

“Sunsets in May are the best way to understand who you are and where you are heading. Let’s think more!” — Designed by Igor Izhik from Canada.

Sweet Lily Of The Valley

“The ‘lily of the valley’ came earlier this year. In France, we celebrate the month of May with this plant.” — Designed by Philippe Brouard from France.

Today, Yesterday, Or Tomorrow

Designed by Alma Hoffmann from the United States.

Add Color To Your Life!

“This month is dedicated to flowers, to join us and brighten our days giving a little more color to our daily life.” — Designed by Verónica Valenzuela from Spain.

The Green Bear

Designed by Pedro Rolo from Portugal.

Lookout At Sea

“I wanted to create something fun and happy for the month of May. It’s a simple concept, but May is typically the time to adventure out into the world and enjoy the best of Spring.” — Designed by Alexander Jubinski from the United States.

Tentacles

Designed by Julie Lapointe from Canada.

Spring Gracefulness

“We don’t usually count the breaths we take, but observing nature in May, we can’t count our breaths being taken away.” — Designed by Ana Masnikosa from Belgrade, Serbia.

Geo

Designed by Amanda Focht from the United States.

Blast Off!

“Calling all space cadets, it’s time to celebrate National Astronaut Day! Today we honor the fearless explorers who venture beyond our planet and boldly go where no one has gone before.” — Designed by PopArt Studio from Serbia.

Colorful

Designed by Lotum from Germany.


Who Is Your Mother?

“Someone who wakes up early in the morning, cooks you healthy and tasty meals, does your dishes, washes your clothes, sends you off to school, sits by your side and cuddles you when you are down with fever and cold, and hugs you when you have lost all hopes to cheer you up. Have you ever asked your mother to promise you never to leave you? No. We never did that because we are never insecure and our relationship with our mothers is never uncertain. We have sketched out this beautiful design to cherish the awesomeness of motherhood. Wishing all a happy Mothers Day!” — Designed by Acodez IT Solutions from India.

Asparagus Say Hi!

“In my part of the world, May marks the start of seasonal produce, starting with asparagus. I know spring is finally here and summer is around the corner when locally-grown asparagus shows up at the grocery store.” — Designed by Elaine Chen from Toronto, Canada.

May The Force Be With You

“Yoda is my favorite Star Wars character and ‘may’ has funny double meaning.” — Designed by Antun Hirsman from Croatia.

Birds Of May

“Inspired by a little-known ‘holiday’ on May 4th known as ‘Bird Day’. It is the first holiday in the United States celebrating birds. Hurray for birds!” — Designed by Clarity Creative Group from Orlando, FL.

Categories: Others Tags:

Lessons Learned After Selling My Startup

April 29th, 2024 No comments

August 2021 marks a milestone for me. That’s when we signed an acquisition agreement to sell Chatra, a profitable live chat platform I co-founded after shutting down my first startup following a six-year struggle. Chatra took me and the team six years to build — that’s six years of learning, experimenting, sometimes failing, and ultimately winning big.

Acquisitions happen all the time. But what does it look like to go through one, putting the thing you built and nurtured up for sale and ceding control to someone else to take over? Sometimes, these things are complicated and contain clauses about what you can and can’t say after the transaction is completed.

So, I’ve curated a handful of the most valuable takeaways from starting, growing, and selling the company. It took me some time to process everything; some lessons were learned immediately, while others took time to sink in. Ultimately, though, it’s a recollection of my personal journey. I hope sharing it can help you in the event you ever find yourself in a similar pair of shoes.

Keeping The Band Together

Rewind six years before the Chatra acquisition. My first startup, Getwear, ran out of steam, and I — along with everyone else — was ready to jump ship.

But we weren’t ready to part ways. My co-founder was a close childhood friend with whom I used to sell pirated CDs back in the late ’90s. Now, I don’t think it’s the most honest way to make a living, but it didn’t bother us much in high school. It also contributed to a strong bond between us, one that led to the launch of Getwear and, later, Chatra.

That partnership and collaboration were too precious to let go; we knew that our work wasn’t supposed to end at Getwear and that we’d have at least one more try together. The fact that we struggled together before is what allowed us to pull through difficult times later. Our friendship allowed us to work through stress, difficulties, and the unavoidable disagreements that always come up.

That was a big lesson for me: It’s good to have a partner you trust along for the ride. We were together before Chatra, and we saw it all the way through to the end. I can’t imagine how things would have been different had I partnered with someone new and unfamiliar, or worse, on my own.

Building Business Foundations

We believed Getwear would make us millionaires. So when it failed, that motivation effectively evaporated. We were no longer inspired to take on ambitious plans, but we still had enough steam to start a digital analog of a döner kebab shop — a simple, sought-after tech product just to pay our bills.

This business wasn’t to be built on the back of investment capital; no, it was bootstrapped. That means we made do with a small, independent, fully-remote team. Remember, this is in 2015. The global pandemic had yet to happen, and a fully remote team was still a novelty. And it was quite a change from how we ran Getwear, which was stocked with an R&D department, a production office, and even a factory in Mumbai. A small distributed team seemed the right approach to keep us nimble as we set about defining our path forward as a company.

Finding our purpose required us to look at the intersection of what the market needs and what we know and can do well. Building a customer support product was an obvious choice: at Getwear, we heavily relied on live chat to help users take their body measurements and place their orders.

We were familiar with existing products on the market. Besides, we already had experience building a conversational support product: we had built an internal tool to facilitate communication between our Mumbai-based factory and an overseas customer-facing team. The best thing about that was that it was built on a relatively obscure framework offering real-time messaging out of the box.

There were maybe 20 established competitors in the space back in 2015, but that didn’t dissuade us. If there was enough room for 20 products to do business, there had to be enough for 21. I figured we should treat competition as market validation rather than an obstacle.

Looking back, I can confidently say that it’s totally possible to compete (and succeed) in a crowded market.

Product-wise, Getwear was very innovative; no one had ever built an online jeans customizer as powerful as ours. We designed the UX from scratch without relying much on the best practices.

With Chatra, we went down a completely different route: We had improved the established live chat product category via features that were, at that time, commonly found in other types of software but hadn’t made their way to our field. That was the opportunity we seized.

The existing live chat platforms felt archaic: the interfaces were clunky and reminiscent of Windows 95, the user flows were poorly thought out, and the dated user experience resulted in lost conversation histories.

Slack was a new product at this time and was all the rage with its fresh approach to user interfaces and conversational onboarding. Products like Facebook Messenger and Telegram (which is popular in Eastern Europe and the Middle East) were already standard bearers and formed user expectations for how a messaging experience should work on mobile. We learned a lot from these products and found in them the blueprint to design a modern chat widget and dashboard for agents.

We certainly stood on the shoulders of giants, and there’s nothing wrong with stealing like an artist: in fact, both Steve Jobs and Bill Gates did it.

The takeaway?

A product does not have to be new to redefine and disrupt a market. It’s possible to lead by introducing modern standards and designs rather than coming up with something radically different.

Making A Go-To-Market Strategy

Once we were clear about what we were building and how to build it, the time came to figure out a strategy for bringing our product to market.

Two things were very clear and true to us up front:

  1. We needed to launch and start earning immediately — in months rather than years — being a bootstrapped company and all.
  2. We didn’t have money for things like paid acquisition, brand awareness, or outbound sales representatives to serve as the front line for customer engagement.

Both conclusions, taken together, helped us decide to focus our efforts on small businesses that need fewer features in a product and onboard by self-service. Marketing-wise, that meant we’d need to find a way around prohibitively expensive ads.

Enter growth hacking! The term doesn’t resonate now the way it did in 2015: fresh, aggressive, and effective. As a user-facing website widget, we had a built-in acquisition channel by way of a “powered by Chatra” link. For it to be an effective marketing tool, we had to accumulate a certain number of customers. Otherwise, who’s going to see the link in the first place?

We combined unorthodox techniques to acquire new customers, like web-scraping and email address discovery with cold outreach.

Initially, we decided to go after our competitors’ customers. But the only thing we got out of targeting them with emails was their rightful anger.

In fact, a number of customers complained directly to the competitors, and the CEO of a prominent live chat company demanded we cease communicating with their users.

More than that, he actually requested that we donate to a well-known civil liberty NGO, something we wholeheartedly agreed to, considering it was indeed the right thing to do.

So, we decided to forget about competition and target potential customers (who owned e-commerce websites) using automation for lead research, email sending, and reply processing. We managed to do it on a massive scale with very few resources. By and large, cold outreach has been the single most effective marketing tool we have ever used. And contrary to common assumption, it is not a practice reserved purely for enterprise products.

Once we acquired a significant user mass, the widget link became our Number One acquisition channel. In lean startup terminology, a viral engine of growth is a situation when existing customers start generating leads and filling the marketing funnel for you. It’s where we all want to be, but the way to get there is often murky and unreliable. But my experience tells me that it is possible and can be planned.

For this strategy to work, it has to be based on natural user interactions. With widgets, the mechanic is quite apparent, but not so much with other products. Still, you can do well with serious planning and running experiments to help make informed decisions that achieve the best possible results.

For example, we were surprised that the widget link performed way better in tests when we changed it from “Powered by Chatra” to “Get Chatra!”. We’re talking big increases with minor tweaks. The small details really do matter!

Content marketing was another avenue we explored for generating leads. We had already done the cold outreach and had a good viral engine going with the widget link. Content marketing, in contrast, was an attempt to generate leads at the “top” of the funnel, independent of any outbound marketing or our customers’ websites. We produced books and guides that were well-researched, written, and designed to bring in potential customers while supporting existing ones with resources to get the most out of Chatra.

Sadly, these efforts failed to attract many new leads. I don’t mean to say you shouldn’t invest in quality content; it’s just not a viable short-term growth strategy.

Increasing Lifetime Customer Value

It took six months of development to launch and another year to finally break even. By then, we had achieved a product-market fit with consistent organic growth; it was time to focus on metrics and unit economics. Our challenge was to limit customer churn and find ways to increase the lifetime value of existing customers.

If there’s an arch-enemy to SaaS, it’s churn. Mitigating churn is crucial to any subscription business, as longer subscriptions generate more revenue. Plus, it’s easier to prevent churn than it is to acquire new customers.

We found it helpful to distinguish between avoidable churn and unavoidable (i.e., “natural”) churn. The latter concerns customer behavior beyond our control: if an e-commerce store shuts down, they won’t pay for services. And we had nothing to do with them shutting down — it’s just the reality of life that most small businesses fail. No quick-fix strategy could ever change that; we just had to deal with it.

Chatra’s subscription pricing was fairly inexpensive, yet we enjoyed a relatively high customer lifetime value (cLTV). Many customers tended to stay for a long time — some, for years. Our high cLTV helped us justify higher customer acquisition costs (CAC) for paid ads in the Shopify app store once we decided to run them. Running the ads allowed us to improve our Shopify app store search position. And because of that, we improved and kept our position as a top app within our category. That, I believe, was one of the factors that the company Brevo considered when they later decided to acquire our business.

We tried improving the free-to-paid subscription conversion rate by targeting those who actively used the product but remained on a free plan for an extended period. We offered them an upgraded plan subscription for just one dollar per year. And to our surprise, that failed to convince many people to upgrade. We were forced to conclude that there are two types of customers: those who pay and those who do not (and will not).

From that point forward, things got even weirder. For example, we ran several experiments with subscription pricing and found that we could increase subscription prices from $11 per seat to $19 without adversely affecting either the visitor-to-user or the free-to-paid conversion rates! Apparently, price doesn’t matter as much as you might think. It’s possible to raise prices without adversely affecting conversions, at least in our experience with a freemium pricing model.

We also released additional products we could cross-sell to existing customers. One was Livebar, an app for in-browser notifications on recent online shopping purchases. Another was Yeps, a simple announcement bar that sticks to the top of a webpage. Product-wise, both were good. But despite our efforts to bring awareness to them in all our communications with Chatra customers, they never really took off. We’ve closed the first and sold the second for a price that barely justified the development and ongoing support we were putting into it. We were wrong to assume that if we have a loyal audience, we could automatically sell them another product.

Contemplating An Exit

Chatra was a lean company. As a SaaS business, we had a perfect cost-revenue ratio and gained new customers mainly through viral dynamics and self-onboarding. These didn’t increase our costs much but did indeed bring in extra subscription dollars. The engine worked almost without any effort on our side.

After a few years, the company could mostly function on auto-pilot, giving us — the founders — time and resources to pay our bills and run business experiments. We were enjoying a good life. Our work was a success!

We gave up on an exit strategy even before starting, so we didn’t pay much attention to the acquisition offers we routinely received; most weren’t enticing enough to pull us away. Even those sent by people known in the industry were way too small: the best offer we got was a valuation of 2.5 times our Annual Recurring Revenue (ARR), which was a non-starter for us.

Then, we received an email with another offer. The details were slim, but we decided to at least entertain the idea and schedule a time to chat. I replied that we wouldn’t consider anything lower than an industry-standard venture-backed SaaS valuation (which was about eight times ARR at the time). The response, surprisingly, read: “Let’s talk. Are you ready to sign a non-disclosure agreement?”

My biggest concern was that transferring ownership might lead to the Chatra team being laid off and the product termination. I didn’t want to let down our existing customers! The buyer understood the situation and assured us that Chatra would remain a separate line of business, at least for an extended period. No one on the team would lose their job. The buyer also planned to fork Chatra rather than close it, at least initially.

Still, letting go of it was difficult, and at times, I even felt the urge to blow up the negotiations.

So, why sell at all? We did it for three reasons:

  • First, we felt stuck in the mature stage of the business lifecycle and missed the feeling of creating new things.
  • Second, we (rightfully) knew that the good times could not last forever; we would be wise to avoid putting all our eggs in one basket.
  • Third was a bit of pride. I genuinely wanted to go through the acquisition process, which has always seemed like a rite of passage for entrepreneurs.

Chatra was growing, cash-flow positive, and economic tailwinds seemed to blow our way. On the flip side, however, we had little left to do as founders. We didn’t want to go upmarket and compete with massive players like Intercom and Drift. We were happy in our niche, but it didn’t offer enough growth or expansion opportunities. We felt near the end of the line.

Looking back, I see how fortunate we were. The market took a huge hit soon after the acquisition, to the extent that I’m sure we would not have been able to fetch equally enticing offers within the next two years.

I want to stress that the offer we got was very, very generous. Still, I often kick myself for not asking for more, as a deep-pocketed buyer is unlikely to turn away simply because we tried to increase the company’s valuation. The additional ask would have been negligible to the buyer, but it could have been very meaningful to us.

Different acquisitions wind up looking different in the end. If you’re curious what a transaction looks like, ours was split into three payouts:

  1. An initial, fixed payment on the closing date;
  2. Several flexible payouts based on reaching post-acquisition milestones;
  3. An escrow amount deposited with an escrow agent for the possibility of something going wrong, like legal claims.

We assumed this structure was non-negotiable and didn’t try to agree on a different distribution that would move more money into the initial payment. Why? We were too shy to ask and were sure we’d complete all the requirements on time. By accepting a significant payment delay, we essentially extended the buyer credit for the amount of the deferred payouts while leaving my co-founder and me exposed to uncertainty.

We should’ve been bold and negotiated more favorable terms. After all, it represented the last time we’d have to battle for Chatra. I consider that a lesson learned for next time.

Conclusion

Parting ways with Chatra wasn’t easy. The team became my second family, and every product pixel and bit of code was dear to my heart. And yes, I do still feel nostalgia for it from time to time. But I certainly enjoy the freedom that comes with the financial gains.

One thing I absolutely want to mention before closing this out is that

Having an “exit” under my belt actually did very little to change my personal well-being or sense of self-worth. The biggest lesson I took away from the acquisition is that success is the process of doing things, not the point you can arrive at.

I don’t yet know where the journey will take me from here, but I’m confident that there will be both a business challenge and a way of helping others on their own founder journey. That said, I sincerely hope that my experience gives you a good deal of insight into the process of selling a company. It’s one of those things that often happens behind closed doors. But by shedding a little light on it — at least this one reflection — perhaps you will be more prepared than I was and know what to look for.

Categories: Others Tags:

The End Of The Free Tier

April 26th, 2024 No comments

I love free tiers, and I am not the only one. Everyone loves free things — they’re the best thing in life, after all. But maybe we have grown too accustomed to them, to the extent that a service switching from a “freemium” model to a fully paid plan would probably feel outrageous to you. Nowadays, though, the transition from free to paid services seems inevitable. It’s a matter of when a service drops its free tier rather than if it will.

Companies need to make money. As developers, we understand better than most that a product comes with costs; there are startup funds, resources, and salaries spent to maintain and support the product in a competitive, globalized market.

If I decided to take something I made and ship it to others, you darn well know I would charge money for it, and I assume you’re the same. At the same time, I’m typically more than happy to pay for something, knowing it supports the people who made it.

We get that, and we surely don’t walk into a grocery store complaining that nothing there is free. It’s just how things work.

What exactly, then, is so infuriating about a service offering a free tier and later deciding to transition to a priced one?

It’s Positioning, Not Money

It’s not so much about the money as it is the positioning. Who wouldn’t feel somewhat scammed after investing time and resources into something that was initially advertised as “free,” only to be blindsided by a paywall?

Most of the time, the feeling is less anger than mild annoyance. For example, if your favorite browser suddenly became a paid premium offering, you would most likely switch to the next best option. But what happens when the free tier for a hosted product or service is retired? Switching isn’t as easy when hundreds of thousands of developers serve their projects from a free-tier hosting plan.

The practice of offering a free tier only to remove it later is common on the web and won’t go away any time soon. It’s as though companies ditch free tiers once (1) the product becomes mature enough to be a feature-rich offering or (2) the company realizes free customers are not converting into paid ones.

It has been a source of endless complaints, and one only needs to look back at PlanetScale’s recent decision to remove its free-tier database plan, which we will get deeper into in a bit. Are free tiers removed because of their unsustainable nature, or is it to appease profit-hungry companies? I want to explore the why and how of free tiers, better approaches for marketing “free” services, and how to smoothly retire a free tier when it inevitably goes away.

Glossary

Before we wade further into these waters, I think it’s worth having a baseline understanding of pricing concepts that are relevant to the discussion.

A free tier comes in one of several flavors:

  • Free trial opt-in
    Permits users to try out the product for a limited period without providing payment details. Once the trial ends, so does access to the product features.
  • Free trial opt-out
    Requires users to provide payment information during registration en route to a free trial that, once it ends, automatically converts to a paid account.
  • Freemium model
    Offers access to a product’s “core” features but requires upgrading to a paid account to unlock other features and benefits.
  • Reverse trial model
    Users start with access to the premium tier upon registration and then transition to a freemium tier after the trial period ends.

Case Study: PlanetScale

Let’s start this conversation by looking at PlanetScale and how it killed its free tier at the beginning of the year. Founded in 2018, PlanetScale launched its database as a service in 2021 and has raised $105 million in venture capital and seed funding, becoming one of the fastest-growing tech companies in North America by 2023. In March of this year, CEO Sam Lambert announced the removal of PlanetScale’s hobby tier.

In short, the decision was made to provide “a reliable and sustainable platform for our customers” by not “giving away endless amounts of free resources to keep growing,” which, of course, gave everyone on the freemium tier until April 8 to either pay for one of the remaining plans at the outrageous starting price of $39 per month or migrate to another platform.

Again, a company needs steady revenue and a reliable business plan to stay afloat. But PlanetScale gave mixed signals when they stated in the bespoke memo that “[e]very unprofitable company has a date in the future where it could disappear.” Then they went on to say they are “the main database for companies totaling more than $50B in market cap,” and they “have been recognized […] as one of the fastest growing tech companies in the US.”

In non-bureaucratic speak, PlanetScale says that the product is failing from one side of its mouth and that the company is wildly successful from the other.

The company is doing great. In November 2023, PlanetScale was ranked as the 188th fastest-growing company in North America by Deloitte Technology Fast 500™. Growth doesn’t necessarily equal revenue, but “to be eligible for Technology Fast 500 recognition, […] [c]ompanies must have base-year operating revenues of at least US $50,000, and current-year operating revenues of at least US $5 million.”

PlanetScale’s decision can only be interpreted as “we want more money,” at least to me. There’s nothing about its current performance that suggests it needs the revenue to keep the company alive.

That’s a punch below the belt for the developer community, especially considering that those on the free tier are likely independent bootstrappers who need to keep their costs low. And let’s not overlook that ending the free tier was accompanied by a round of layoffs at the company.

PlanetScale’s story is not what worries me; it’s that retiring freemium plans is becoming standard practice, as we have seen with the likes of other big PaaS players, including Heroku and Railway.

That said, the PlanetScale case is perhaps the most frustrating because the cheapest alternative to the free tier they now offer is a whopping $39 per month. Compare that to the likes of others in that space, such as Heroku ($7 per month) and Railway ($5 per month).

Is This How A Free Tier Works?

With zero adoption, the value of a new service can’t be seen from behind a paywall. Launching a product or service with a freemium pricing model is a common way to build awareness and entice early adopters who might convert into paying customers, helping offset the costs of those on the free plan. It’s the old Pareto, or 80/20, rule: the 20% of customers who pay ought to cover the 80% who don’t.

A conversion rate is the percentage of users that upgrade from a free tier to a paid one, and an “average” rate depends on the type of free tier or trial being offered.

In a freemium model — without sales assist — a good conversion rate is somewhere between 3–5%, but that’s optimistic. Conversion rates are often way lower in reality and perhaps the toughest to improve for startups with few or no customers. Early on, startups often have so few paying customers that they will have to operate at a loss until figuring out a way to land paying customers who can subsidize the ones who aren’t paying anything.

The longer a company operates at a loss, the more likely it races to generate the highest possible growth before undoubtedly having to cut benefits for free users.
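The squeeze described above is easy to see with some back-of-the-envelope math. The sketch below is illustrative only; the user count, per-user cost, and conversion rates are my assumptions, not figures from any company mentioned here:

```python
def breakeven_price(total_users: int, conversion_rate: float, cost_per_user: float) -> float:
    """Monthly price each paying user must pay so that revenue covers
    the cost of serving *all* users, free and paid alike."""
    paying_users = total_users * conversion_rate
    total_cost = total_users * cost_per_user
    return total_cost / paying_users

# 10,000 users, a 3% conversion rate, and $1/month in costs per user:
price = breakeven_price(10_000, 0.03, 1.00)
print(f"${price:.2f}")  # prints $33.33: each paying user carries roughly 32 free ones
```

At a 5% conversion rate, the break-even price in this toy model drops to $20, which hints at why companies whose conversion is stuck well below the 3–5% range eventually cut their free users loose.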

A lot of those free users will feel misled and migrate to another service, but once the audience is big enough, a company can afford to lose free customers in favor of the minority that will switch to premium. Take Evernote, for example. The note-taking app allowed free users to save 100,000 notes and 250 notebooks only to do an about-face in 2023 and limit free users to 50 notes and one notebook.

In principle, a free tier serves the same purpose for SaaS (Software as a Service) and PaaS (Platform as a Service) offerings, but the effects differ. For one, cloud computing costs a lot of money, so offering an AWS wrapper in a free tier is significantly harder to sustain. The real difference between SaaS and PaaS, however, becomes clear when the company decides to kill off its free tier.

Let’s take Zoom as a SaaS example: there is a basic tier that gives you up to 40 minutes of free meeting time, and that is plenty for people who simply don’t need much beyond that. If Zoom were to remove its free tier, free users would most likely move to other freemium alternatives like Google Meet rather than upgrade to one of Zoom’s paid tiers. Those customers have invested nothing in Zoom that locks them in, so the cost of switching is only the learning curve of whichever meeting app they choose.

This is in contrast to a PaaS; if the free tier is removed, switching providers introduces costs since a part of your architecture lives in the provider’s free tier. Besides the effort needed to migrate to another provider, moving data and servers can be an expensive operation, thanks to data egress fees. Data egress fees are obscure charges that cloud providers make customers pay for moving data from one service to another. They charge you to stop paying!
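To get a feel for the scale involved, here is a rough sketch. The $0.09-per-GB rate is a ballpark figure typical of published cloud egress pricing and is my assumption, not a quote from any provider discussed here:

```python
def egress_cost(data_gb: float, rate_per_gb: float = 0.09) -> float:
    """Estimated one-time egress fee for moving data off a cloud provider."""
    return data_gb * rate_per_gb

# Moving a modest 500 GB of data to a new host:
print(f"${egress_cost(500):.2f}")  # prints $45.00, a fee charged just to leave
```

The fee scales linearly, so a migration of tens of terabytes quickly runs into the thousands of dollars, on top of the engineering effort of the move itself.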

Thankfully, there is an increased awareness of this issue through the European Union’s Data Act that requires cloud providers located in Europe to remove barriers that prevent customers from easily switching between companies, including the removal of artificial egress fees.

The Ethics Of The Free Tier

Is it the developer’s fault for hosting a project on a free pricing tier, considering that it can be rolled out at any moment? I have two schools of thought on this: principle and consequential.

  • Principle
    On the one hand, you shouldn’t have to expect a company to pull the rug out from under you by removing a free tier, especially if the company aims to be a reliable and sustainable platform.
  • Consequential
    On the other hand, you don’t expect someone to run a red light and hit you when you are driving, but you still look both ways before crossing. So it is with using a free tier. Even if it is “immoral” for a company to remove the tier, a developer ought to have a backup plan in case it happens, especially as the disappearance of free tiers becomes more prevalent in the industry.

I think it boils down to a matter of transparency. No free tier is advertised as something that may disappear, even if it will in the future. A free tier is supposed to be just another tier with fewer benefits than the paid plans but just as reliable as the most expensive one, so no user should have to plan on migrating their projects to another provider any time soon.

What’s The Alternative?

Offering customers a free tier only to remove it once the company gets a “healthy enough” share of the market is just wrong, particularly if it was never attached to an up-front sunset date.

Pretending that the purpose of a free tier is the same as a free trial is unjust since it surely isn’t advertised that way.

If a company wants to give people a taste of how a product or service works, then I think there are far better and more sincere alternatives to the free-tier pricing model:

  • Free trials (opt-in)
    Strapi is an open-source CMS and a perfect example of a service offering a free trial. In 2023, the company released a cloud provider to host Strapi CMS with zero configuration. Even though I think Strapi Cloud is on the pricey side, I still appreciate having a 14-day free trial over a free tier that may well be removed later. The free trial gives users enough time to get a feel for the product, and no credit card is required to lock someone in (because, let’s face it, some companies count on you forgetting to cancel your free subscription before payments kick in).

  • Free credits
    I have used Railway to host Node.js + Postgres in the past. I think that its “free tier” is the best example of how to help customers try the service: the cheapest plan is a relatively affordable $5 per month, and a new subscriber is credited with $5 to start the project and evaluate the service, again, without the requirement of handing over credit card information or pulling any rugs out from under people. Want to continue your service after the free credits are exhausted? Buy more credits!

Railway is a particular case because it used to have a free tier, but it was withdrawn on June 2, 2023. However, the company removed it with a level of care and concern for customers that PlanetScale lacked and even gave customers who relied on the free tier a trial account with a number of free credits. It is also important to note (and I can’t get over it) that PlanetScale’s new cheapest plan is $39 per month, while Railway was able to limit the damage to $5 per month.

Free Tiers That I Use

I don’t want this article to be just a listicle of free services but rather the start of a conversation about the “free-tier dilemma”. I also want to share some of the free tiers I use, even for small but production-ready projects.

Supabase

You can make pretty much any imaginable web app using Supabase as the back-end since it brings a PostgreSQL database, authentication, real-time subscriptions, and storage in a central dashboard — complete with a generous allocation of database usage in its free tier.

Railway

I have been using Railway to host Strapi CMS for a long time. Aside from its beautiful UI, Railway includes seamless deployment workflows, automatic scaling, built-in CI/CD pipelines, and integration with popular frameworks and databases thanks to its hundreds of templates. It doesn’t include a free tier per se, but you can get the full feel of Railway with the $5 credit they offer.

GitHub Pages

I use GitHub Pages the way I know many of you do as well: for static pages and technical demos. I have used it before to make live examples for my blog posts. So, it’s more of a playground that I use to make a few artifacts when I need to deploy something fast, but I don’t rely on it for anything that would be of consequence if it were to suddenly go away.

Netlify

Beyond hosting, Netlify offers support for almost all modern frameworks, not to mention that they toss in lots of additional perks, including solid documentation, continuous deployment, templates, an edge network, and analytics — all of which are available in a free tier that meets almost anyone’s needs.

Conclusion

If it isn’t totally clear where I fall on the free pricing tier situation, I’m not advocating that we end the practice, but for more transparency on the side of the companies that offer free tier plans and increased awareness on the side of developers like myself.

I believe that the only way it makes sense to offer a free tier for a SaaS/PaaS is for the company providing it to view it as part of the core product, one that cannot be sunset without a clear and transparent exit strategy, clearly communicated up-front during any sort of registration process. Have a plan for users to painlessly switch services. Allow the customer to make an informed choice and accept responsibility from there.

Free tiers should attract users rather than trap them, and there is an abysmal difference between replacing a free tier with a $5-per-month plan and replacing it with one that costs nearly $40. Taking away the service is one thing; charging exorbitant rates on top of it only adds insult to injury.

We can do better here, and there are plenty of alternatives to free tiers for effectively marketing a product.

Further Reading On SmashingMag


Furthering Your Education: Top Resources for Going Back to Grad School After Working

April 26th, 2024 No comments

The traditional path of going straight from undergrad to graduate school isn’t for everyone. Many people choose to enter the workforce first to gain real-world experience before deciding to pursue an advanced degree. If you’ve been working for several years and are now considering going back to school at 30 (or any age really), there are plenty of resources available to help make that transition smoother.

Start With Your Employer

Your current company may actually be one of the best resources for going back to graduate school. Many employers, especially larger corporations, offer tuition assistance programs that can help cover a portion of the costs for approved degree programs. Even if you plan to use the degree to change careers afterwards, taking advantage of this benefit while still employed can save you thousands of dollars that would otherwise need to be paid out-of-pocket or through loans. 

The first step is to check with your HR department or review your company’s benefits policies regarding tuition reimbursement or assistance. You’ll likely need to submit a proposal for why the degree is relevant to your current role or the company’s needs. Having your manager’s support can go a long way in getting approval. 

Additionally, your manager and colleagues who have pursued further education can provide helpful insight into balancing coursework with your current job responsibilities. Get their advice on realistic course loads, how to discuss priorities with professors when deadlines conflict with work, and strategies for staying productive when juggling it all. They can also give you a realistic preview of what to expect in terms of your workload capacity.

Some employers may allow more flexible schedules or the option to go part-time while you’re in school. However, recognize that work responsibilities and client/customer needs will still likely take priority. You may need to get creative about making up hours earlier in the week or putting in evening/weekend time to account for classes and studying during normal business hours. Discussing expectations upfront with your boss can help ensure you have their full support when responsibilities occasionally overlap.

University Resources

Once you start researching and applying to graduate programs, the schools themselves become a valuable resource. Admissions counselors can fill you in on credential requirements for each program, accepted work experience that may count toward application requirements, program formats (part-time, evening, online options), and other support services for students returning to academia after a break.  

Most universities now have educational resources dedicated specifically to “non-traditional” or “adult” students who are coming back to school after years in the workforce. These offices can connect you with advisors who understand the unique challenges of being a “re-entry” student. They can help you get up to speed with the latest instructional technologies, refresh your academic writing skills, provide tutoring, and point you toward financial aid and scholarship opportunities for returning students.

Having a built-in community of people in similar circumstances can make the transition much easier. Most schools will have student organizations, mentorship programs, and peer networks specifically for non-traditional and adult learners. Being able to share experiences, swap tips on balancing life and coursework, or simply vent with others in the same situation can provide a huge support system.

Professional Associations

If you’re staying in your current field, professional associations related to your industry can point you toward recommended graduate programs and alternative credentials that may be valued by employers. For example, if you work in finance, an organization like the CFA Institute can steer you toward respected credentials like the Chartered Financial Analyst certification.

Many of these associations also offer continuing education courses, live and online seminars, training programs and other resources that can help bridge the gap between working full-time and going back to the classroom. You can start updating your academic skills and reintroducing concepts through these lower-stakes offerings before committing to a full graduate program. They also provide great networking opportunities to connect with other professionals interested in going back to school.

Student Loan Resources  

Of course, paying for graduate school is one of the biggest considerations when going back after years of earning a steady paycheck. After exhausting all avenues for employer tuition benefits and researching extensive grant and scholarship offerings for “non-traditional” grad students, you may still need to take out loans to cover costs.

Resources like the U.S. Department of Education’s Federal Student Aid website can walk you through the FAFSA process for grants and federal loans. They provide loan calculators to estimate borrowing needs, help compare lender options, and look into potential loan forgiveness programs based on your future career plans. 

There are also extensive private student loan options to supplement federal aid packages. Sites like Credible.com allow you to compare interest rates and terms across multiple private lenders at once. Different university websites, such as the University of Phoenix’s, provide detailed pages on the options available to students to finance their education. University financial aid offices are another great resource, as counselors can provide the most up-to-date information on aid packages, work-study options, graduate assistantships, and tuition payment plans specific to their school.

Building Your Support System

In addition to the concrete resources from employers, universities, professional groups and lenders, one of the most important things to establish when going back to grad school after years of working is a personal support system. Pursuing an advanced degree while working full-time is extremely demanding. You’ll need a strong network of friends, family, colleagues, and fellow students who understand the unique pressures you’re facing.

Rely on them for everything from pep talks during crunch times to helping cover household responsibilities when your schedule is overloaded. Let your inner circle know upfront that you’ll be making temporary sacrifices and may need to pass on some social events to focus on academics. But also schedule little breaks and rewards along the way to avoid burnout. Having a strong cheering squad to lean on can make all the difference in persevering through the journey.

Time Management Strategies

One of the biggest challenges of being a working student is finding enough hours in the day for all your responsibilities. Developing strong time management skills is crucial. Utilize tools like calendar blocking to dedicate set times exclusively for your classes, studying, work projects, and personal time. Eliminate distractions like social media during your designated academic blocks.

It can also be helpful to try different study tactics to maximize efficiency – techniques like working in timed intervals, or active recall with flashcards instead of just re-reading notes. Over time you’ll find the specific strategies that help you stay focused and retain more information in less time.

Don’t be afraid to ask for help or accommodations when you need them. Most professors understand the workload adult students are balancing and are willing to be flexible on deadlines in cases of work obligations or personal emergencies. Set realistic expectations with periodic breaks built in, so you can make it through each semester at a manageable pace.

Don’t let a few years out of the classroom deter you from pursuing a graduate degree and advancing your career. While it may require strategic planning and self-discipline, going back to school at 30 or anytime after entering the workforce is absolutely achievable. With some diligent research into all the available support services, resources, and financial assistance, you can map out the path that works best for your lifestyle and professional goals. The temporary balancing act of being a working student will in all likelihood pay off in the long run.

Featured image by Redd F on Unsplash

The post Furthering Your Education: Top Resources for Going Back to Grad School After Working appeared first on noupe.


Conducting Accessibility Research In An Inaccessible Ecosystem

April 25th, 2024 No comments

Ensuring technology is accessible and inclusive relies heavily on receiving feedback directly from disabled users. You cannot rely solely on checklists, guidelines, and good-faith guesses to get things right. This is often hindered, however, by a lack of accessible prototypes available to use during testing.

Rather than wait for the digital landscape to change, researchers should leverage all the available tools they can use to create and replicate the testing environments they need to get this important research completed. Without it, we will continue to have a primarily inaccessible and not inclusive technology landscape that will never be disrupted.

Note: I use “identity first” disability language (as in “disabled people”) rather than “people first” language (as in “people with disabilities”). Identity first language aligns with disability advocates who see disability as a human trait description or even community and not a subject to be avoided or shamed. For more, review “Writing Respectfully: Person-First and Identity-First Language”.

Accessibility-focused Research In All Phases

When people advocate that UX Research should include disabled participants, it’s often with the mindset that this will happen on the final product once development is complete. One primary reason is because that’s when researchers have access to the most accessible artifact with which to run the study. However,

The real ability to ensure an accessible and inclusive system is not by evaluating a final product at the end of a project; it’s by assessing user needs at the start and then evaluating the iterative prototypes along the way.

Prototype Research Should Include Disabled Participants

In general, the iterative prototype phase of a project is when teams explore various design options and make decisions that will influence the final project outcome. Gathering feedback from representative users during this phase can help teams make informed decisions, including key pivots before significant development and testing resources are used.

During the prototype phase of user testing, the representative users should include disabled participants. By collecting feedback and perspectives of people with a variety of disabilities in early design testing phases, teams can more thoughtfully incorporate key considerations and supplement accessibility guidelines with real-world feedback. This early-and-often approach is the best way to include accessibility and inclusivity into a process and ensure a more accessible final product.

If you instead wait to include disabled participants in research until a product is near final, this inevitably leads to patchwork fixes for any critical feedback. Feedback not deemed critical will likely get “backlogged,” where its priority competes with new feature updates. With this approach, you’ll constantly be playing catch-up rather than getting it right up front in an elegant, integrated way.

Accessibility Research Can’t Wait Until The End

Not only does research with disabled participants often occur too late in a project, but it is also far too often viewed as separate from other research studies (sometimes referred to as the “main research”). It cannot be overstated that this reinforces a notion of separate-and-not-equal as compared to non-disabled participants and other stakeholder feedback. This has a severe negative impact on how a team views the priority of inclusive design and, more broadly, the value of disabled people. That is, it reinforces “ableism,” a devaluing of disabled people in society.

UX Research with diverse participants that include a wide variety of disabilities can go a long way in dismantling ableist views and creating vitally needed inclusive technology.

The problem is that even when a team is on board with the idea, it’s not always easy to do inclusive research, particularly when involving prototypes. While discovery research can be conducted with minimal tooling and summative research can leverage fully built and accessible systems, prototype research quickly reveals severe accessibility barriers that feel like they can’t be overcome.

Inaccessible Technology Impedes Accessibility Research

Most technology we use has accessibility barriers for users with disabilities. As an example, the WebAIM Million report consistently finds that 96% of web homepages have accessibility errors that are fixable and preventable.

Just like websites, web, and mobile applications are similarly inaccessible, including those that produce early-stage prototypes. Thus, the artifacts researchers might want to use for prototype testing to help create accessible products are themselves inaccessible, creating a barrier for disabled research participants. It quickly becomes a vicious cycle that seems hard to break.

The Limitations Of Figma

Currently, the most popular industry tool for initial prototyping is Figma. These files become the artifacts researchers use to conduct a research study. However, these files often fall short of being accessible enough for many participants with disabilities.

To be clear, I absolutely applaud the Figma employees who have worked very hard on including screen reader support and keyboard functionality in Figma prototypes. This represents significant progress towards removing accessibility barriers in our core products and should not be overlooked. Nevertheless, there are still limitations and even blockers to research.

For one, the Figma files must be created in a way that will mimic the website layout and code. For example, for screen reader navigation to be successful, the elements need to be in their correct reading order in the Layers panel (not solely look correct visually), include labeled elements such as buttons (not solely items styled to look like buttons), and include alternative text for images. Often, however, designers do not build iterative prototypes with these considerations in mind, which prevents the keyboard from navigating correctly and the screen reader from providing the necessary details to comprehend the page.

In addition, Figma’s prototypes do not have selectable, configurable text. This prevents key visual adjustments such as browser zoom to increase text size, dark mode (which is easier for some to view), and selecting text to have it read aloud. If a participant needs these kinds of adjustments (or others listed in the table below), a Figma prototype will not be accessible to them.

Table: Figma prototype limitations per assistive technology

| Assistive Technology | Disability Category | Limitation |
| --- | --- | --- |
| Keyboard-only navigation | Mobility | Must use proper element types (such as button or input) in the expected page order to ensure operability |
| Screen reader | Vision | Must include structure to ensure readability: elements in logical reading order, alternative text for images, and descriptive names for buttons |
| Dark mode / high contrast mode | Low vision, neurodiversity | Not available |
| Browser zoom | Low vision, neurodiversity, mobility | Not available |
| Screen reader used with mouse hover; read-aloud software with text selection | Vision, neurodiversity | Cannot be used |
| Voice control; switch control device | Mobility | Cannot be used |

Inclusive Research Is Needed Regardless

Having accessibility challenges with a prototype doesn’t mean we give up on the research. Instead, it means we need to get creative in our approach. This research is too important to keep waiting for the ideal set-up, particularly when our findings are often precisely what’s needed to create accessible technology.

Part of crafting a research study is determining what artifact to use during the study. Thus, when considering prototype research, it is a matter of creating the artifact best suited for your study. If this isn’t going to be, say, a Figma file you receive from designers, then consider what else can be used to get the job done.

Working Around the Current State

Being able to include diverse perspectives from disabled research participants throughout a project’s creation is possible and necessary. Keeping in mind your research questions and the capabilities of your participants, there are research methods and strategies that can be made accessible to gather authentic feedback during the critical prototype design phase.

With that in mind, I propose five ways you can accomplish prototype research while working around inaccessible prototypes:

  1. Use a survey.
  2. Conduct a co-design session.
  3. Test with a similar system.
  4. Build your own rapid prototype.
  5. Use the Wizard of Oz method.

Use a Survey Instead

Not all research questions at this phase need a full working prototype to be answered, particularly if they are about the general product features or product wording and not the visual design. Oftentimes, a survey tool or similar type of evaluation can be just as effective.

For example, you can confirm a site’s navigation options are intuitive by describing a scenario with a list of navigation choices while also testing if key content is understandable by confirming the user’s next steps based on a passage of text.

Image description:

Acme Company Website Survey

Complete this questionnaire to help us determine if our site will be understandable.

  1. Scenario: You want to find out this organization’s mission statement. Which menu option do you choose?
    [List of radio buttons]
    • Home
    • About
    • Resources
    • Find an Office
    • Search
  2. The following describes directions for applying to our grant. After reading, answer the following question:

    The Council’s Grant serves to advance Acme’s goals by sponsoring community events. In determining whether to fund an event, the Council also considers factors including, but not limited to:

    • Target audiences
    • Alignment with the Council’s goals and objectives
    • Evaluations measuring participant satisfaction

    To apply, download the form below.

    Based on this wording, what would you include in your grant application?
    [Input Field]

Just be sure you build a WCAG-compliant survey that includes accessible form layouts and question types. This will ensure participants can navigate using their assistive technologies. For example, Qualtrics has a specific form layout that is built to be accessible, or check out these accessibility tips for Google Forms. If sharing a document, note features that will enhance accessibility, such as using the ribbon for styling in Microsoft Word.

Tip: To find accessibility documentation for the software you’re using, search in your favorite search engine for the product name plus the word “accessibility”.

Conduct Co-design Sessions

The prototyping phase might be a good time to utilize co-design and participatory design methods. With these methods, you can co-create designs with participants using any variety of artifacts that match the capabilities of your participants along with your research goals. The feedback can range from high-level workflows to specific visual designs, and you can guide the conversation with mock-ups, equivalent systems, or more creative artifacts such as storyboards that illustrate a scenario for user reaction.

For the prototype artifacts, these can range from low- to high-fidelity. For instance, participants without mobility or vision impairments can use paper-and-pencil sketching or whiteboarding. People with somewhat limited mobility may prefer a tablet-based drawing tool, such as using an Apple pencil with an iPad. Participants with visual impairments may prefer more 3-dimensional tools such as craft supplies, modeling clay, and/or cardboard. Or you may find that simply working on a collaborative online document offers the best accessibility as users can engage with their personalized assistive technology to jot down ideas.

Notably, the types of artifacts you use will be beneficial across differing user groups. In fact, rather than limiting the artifacts, try to offer a variety of ways to provide feedback by default. By doing this, participants can feel more empowered and engaged by the activity while also reassuring them you have created an inclusive environment. If you’re not sure what options to include, feel free to confirm what methods will work best as you recruit participants. That is, as you describe the primary activity when they are signing up, you can ask if the materials you have will be operable for the participant or allow them to tell you what they prefer to use.

The discussion you have and any supplemental artifacts you use then depend on communication styles. For example, deaf participants may need sign language interpreters to communicate their views but will be able to see sample systems, while blind participants will need descriptions of key visual information to give feedback. The actual study facilitation comes down to who you are recruiting and what level of feedback you are seeking; from there, you can work through the accommodations that will work best.

I conducted two co-design sessions at two different project phases while exploring how to create a wearable blind pedestrian navigation device. Early in the project, when we were generally talking about the feature set, we brought in several low-fidelity supplies, including a Braille label maker, cardboard, clay, Velcro, clipboards, tape, paper, and pipe cleaners. Based on user feedback, I fashioned a clipboard hanging from pipe cleaners as one prototype.

Later in the project when we were discussing the size and weight, we taped together Arduino hardware pieces representing the features identified by the participants. Both outcomes are pictured below and featured in a paper entitled, “What Not to Wearable: Using Participatory Workshops to Explore Wearable Device Form Factors for Blind Users.”

Ultimately, the benefit of this type of study is the participant-led feedback. In this way, participants are giving unfiltered feedback that is less influenced by designers, which may lead to more thoughtful design in the end.

Test With an Equivalent System

Very few projects are completely new creations, and often, teams use an existing site or application for project inspiration. Consider using similar existing systems and equivalent scenarios for your testing instead of creating a prototype.

By using an existing live system, participants can then use their assistive technology and adaptive techniques, which can make the study more accessible and authentic. Also, the study findings can range from the desirability of the available product features to the accessibility and usability of individual page elements. These lessons can then inform what design and code decisions to make in your system.

One caveat is to be aware of any accessibility barriers in that existing system. Particularly for websites and web applications, you can look for accessibility documentation to determine if the company has reported any WCAG-conformance accessibility efforts, use tools like WAVE to test the system yourself, and/or mimic how your participants will use the system with their assistive technology. If there are workarounds for what you find, you may be able to avoid certain parts of the application or help users navigate past the inaccessible parts. However, if the site is going to be completely unusable for your participants, this won’t be a viable option for you.
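
By way of illustration only (WAVE and similar tools cover far more WCAG criteria, and manual testing is still essential), here is a minimal Python sketch of one kind of automated check such tools perform: flagging images that lack alternative text. The `find_images_missing_alt` helper is hypothetical, built on the standard library’s `html.parser`.

```python
# Minimal sketch of one automated accessibility check: find <img> tags
# with no alt attribute. Illustrative only; real checkers test many more
# WCAG criteria and cannot replace testing with assistive technology.
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Collects the src of every <img> tag that lacks an alt attribute."""
    def __init__(self):
        super().__init__()
        self.missing_alt = []

    def handle_starttag(self, tag, attrs):
        attr_dict = dict(attrs)
        if tag == "img" and "alt" not in attr_dict:
            self.missing_alt.append(attr_dict.get("src", "(no src)"))

def find_images_missing_alt(html):
    checker = AltTextChecker()
    checker.feed(html)
    return checker.missing_alt

sample = '<img src="logo.png" alt="Acme logo"><img src="chart.png">'
print(find_images_missing_alt(sample))  # ['chart.png']
```

A pass like this can quickly tell you whether an existing system is even a candidate for your study before you invest in recruiting.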

If the system is usable enough for your testing, however, you can take the testing a step further by making updates on the fly if you or someone you collaborate with has engineering experience. For example, you can manipulate a website’s code with developer tools to add, subtract, or change the elements and styling on a page in real-time. (See “About browser developer tools”.) This can further enhance the feedback you give to your teams as it may more closely match your team’s intended design.

Build a Rapid Website Prototype

Notably, when conducting research focused on physical devices and hardware, you will not face the same accessibility obstacles as with websites and web applications. You can use a variety of materials to create your prototypes, from cardboard to fabric to 3D-printed material. I’ve sewn haptic vibration modules to a makeshift leather bracelet when working with wearables, for instance.

However, for web testing, it may be necessary to build a rapid prototype, especially to work around inaccessible artifacts such as a Figma file. This will include using a site builder that allows you to quickly create a replica of your team’s website. To create an accessible website, you’ll need a site builder with accessibility features and capabilities; I recommend WordPress, SquareSpace, Webflow, and Google Sites.

I recently used Google Sites to create a replica of a client’s draft pages in a matter of hours. I was adamant we should include disabled participants in feedback loops early and often, and this included after a round of significant visual design and content decisions. The web agency building the client’s site used Figma but not with the required formatting to use the built-in screen reader functionality. Rather than leave out blind user feedback at such a crucial time in the project, I started with a similar Google Sites template, took a best guess at how to structure the elements such as headings, recreated the anticipated column and card layouts as best I could, and used placeholder images with projected alt text instead of their custom graphics.

The screen reader testing turned into an impromptu co-design session because I could make changes in-the-moment to the live site for the participant to immediately test out. For example, we determined that some places where I used headings were not necessary, and we talked about image alt text in detail. I was able to add specific design and code feedback to my report, as well as share the live site (and corresponding code) with the team for comparison.

The downside to my prototype was that I couldn’t create the exact 1-to-1 visual design to use when testing with the other disabled participants who were sighted. I wanted to gather feedback on colors, fonts, and wording, so I also recruited low vision and neurodiverse participants for the study. However, my data was skewed because those participants couldn’t make the visual adjustments they needed to fully take in the content, such as recoloring, resizing, and having text read aloud. This was unfortunate, but we at least used the prototype to spark discussions of what does make a page accessible for them.

You may find you are limited in how closely you can replicate the design based on the tools you use or lack of access to developer assistance. When facing these limitations, consider what is most important to evaluate and determine whether a pared-down version of the site will still give you valuable feedback over no site at all.

Use Wizard of Oz

The Wizard of Oz (WoZ) research method involves the facilitators mimicking system interactions in place of a fully working system. With WoZ, you can create your system’s approximate functionality using equivalent accessible tools and processes.

As an example, I’ll refer you to the talk by an Ally Financial research team that used this method for participants who used screen readers. They pre-programmed screen reader prompts into a clickable spreadsheet and had participants describe aloud what keyboard actions they would take to then trigger the corresponding prompt. While not the ideal set-up for the participants or researchers, it at least brought screen reader user feedback (and recognition of the users themselves) to the early design phases of their work. For more, review their detailed talk “Removing bias with wizard of oz screen reader usability testing”.

This isn’t just limited to screen reader testing, however. In fact, I’ve also often used Wizard of Oz for Voice User Interface (VUI) design. For instance, when I helped create an Alexa “skill” (their name for an app on Amazon speech-enabled devices), our prototype wouldn’t be ready in time for user testing. So, I drafted an idea to use a Bluetooth speaker to announce prompts from a clickable spreadsheet instead. When participants spoke a command to the speaker (thinking it was an Alexa device), the facilitator would select the appropriate pre-recorded prompt or a generic “I don’t understand” message.

Any system can be mimicked when you break down its parts and pieces and think about the ultimate interaction for the user. Creating WoZ set-ups can take creativity and even significant time to put together, but the outcomes can be worth it, particularly for longer-term projects. Once the main pieces are created, the prototype set-up can be edited and reused indefinitely, including during the study or between participants. Also, the investment in an easily edited prototype pays off exponentially if it uncovers something prior to finishing the entire product. In fact, that’s the main goal of this phase of testing: to help teams know what to look out for before they go through the hard work of finishing the product.

Inclusive Research Can No Longer Wait

Much has been documented about inclusive design to help teams craft technology for the widest possible audience. From the Web Content Accessibility Guidelines that help define what it means to be accessible to the Microsoft Inclusive Design Toolkits that tell the human stories behind the guidelines, there is much to learn even before a product begins.

However, the best approach is with direct user feedback. With this, we must recognize the conundrum many researchers are facing: We want to include disabled participants in UX research prior to a product being complete, but often, prototypes we have available for testing are inaccessible. This means testing with something that is essentially broken and will negatively impact our findings.

While it may feel like researchers will always be at a disadvantage if we don’t have the tools we need for testing, I think, instead, it’s time for us to push back. I propose we do this on two fronts:

  1. We make the research work as best we can in the current state.
  2. We advocate for the tools we need to make this more streamlined.

The key is to get disabled perspectives on the record and in the dataset of team members making the decisions. By doing this, hopefully, we shift the culture to wanting and valuing this feedback and bringing awareness to what it takes to make it happen.

Ideally, the awareness raised from our bootstrap efforts will lead to more people helping reduce the current prototype barriers. For some of us, this means urging companies to prioritize accessibility features in their roadmaps. For those working within influential prototype companies, it can mean getting much-needed backing to innovate better in this area.

The current state of our inaccessible digital ecosystem can sometimes feel like an entanglement too big to unravel. However, we must remain steadfast and insist that this does not remain the status quo; disabled users are users, and their diverse and invaluable perspectives must be a part of our research outcomes at all phases.
