
Consistent Backends and UX: Why Should You Care?

Article Series

  1. Why should you care?
  2. What can go wrong? (Coming soon)
  3. What are the barriers to adoption? (Coming soon)
  4. How do new algorithms help? (Coming soon)

More than ever, new products aim to make an impact on a global scale, and user experience is rapidly becoming the determining factor for whether they are successful or not. These properties of your application can significantly influence the user experience:

  1. Performance & low latency
  2. The application does what you expect
  3. Security
  4. Features and UI

Let’s begin our quest toward the perfect user experience!

1) Performance & Low Latency

Others have said it before: performance is user experience (1, 2). Once you have caught the attention of potential visitors, a slight increase in latency can make you lose that attention again.

2) The application does what you expect

What does ‘does what you expect’ even mean? It means that if I change my name in my application to ‘Robert’ and reload the application, my name will be Robert and not Brecht. It seems important that an application delivers these guarantees, right?
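
To make that expectation concrete, here is a minimal TypeScript sketch of the write-then-reload scenario. The updateUser and getUser functions are illustrative stand-ins for whatever data-access layer your application uses, not a real API.

    // A minimal sketch of the "read your own writes" expectation. The
    // updateUser and getUser functions are illustrative stand-ins for
    // whatever data-access layer your application uses.
    interface UserStore {
      updateUser(id: string, fields: { name: string }): Promise<void>;
      getUser(id: string): Promise<{ name: string }>;
    }

    async function renameAndReload(db: UserStore): Promise<void> {
      await db.updateUser("user-123", { name: "Robert" });

      // After a page reload, the application re-fetches the profile.
      const user = await db.getUser("user-123");

      // With a strongly consistent database this always prints "Robert";
      // with an eventually consistent one it may still print "Brecht".
      console.log(user.name);
    }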

Whether the application can deliver on these guarantees depends on the database. When pursuing low latency and performance, we end up in the realm of distributed databases where only a few of the more recent databases deliver these guarantees. In the realm of distributed databases, there might be dragons, unless we choose a strongly (vs. eventually) consistent database. In this series, we’ll go into detail on what this means, which databases provide this feature called strong consistency, and how it can help you build awesomely fast apps with minimal effort.

3) Security

Security does not always seem to impact user experience at first. However, as soon as users notice security flaws, relationships can be damaged beyond repair.

4) Features and UI

Impressive features and a great UI have a strong impact on both the conscious and the unconscious mind. Often, people only desire a specific product after they have experienced how it looks and feels.

If a database saves time in setup and configuration, then the rest of our efforts can be focused on delivering impressive features and a great UI. There is good news: nowadays, there are databases that deliver on all of the above, do not require configuration or server provisioning, and provide easy-to-use APIs such as GraphQL out of the box.
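
As a rough illustration, talking to such a database can be as simple as sending a GraphQL mutation over HTTP. The endpoint, secret, and schema (a User type with a name field) in the sketch below are hypothetical placeholders; consult your provider's documentation for the real details.

    // A sketch of updating data through such an out-of-the-box GraphQL API.
    // Endpoint, secret, and schema are hypothetical placeholders.
    async function updateName(newName: string): Promise<void> {
      const response = await fetch("https://graphql.example-db.com/graphql", {
        method: "POST",
        headers: {
          "Content-Type": "application/json",
          Authorization: "Bearer YOUR_DATABASE_SECRET",
        },
        body: JSON.stringify({
          query: `
            mutation UpdateName($name: String!) {
              updateUser(id: "user-123", data: { name: $name }) {
                name
              }
            }
          `,
          variables: { name: newName },
        }),
      });
      const { data } = await response.json();
      console.log(data.updateUser.name); // "Robert"
    }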

What is so different about this new breed of databases? Let’s take a step back and show how the constant search for lower latency and better UX, in combination with advances in database research, eventually led to a new breed of databases that are the ideal building blocks for modern applications.

The quest for distribution

I. Content delivery networks

As we mentioned before, performance has a significant impact on UX. There are several ways to improve latency, of which the most obvious is to optimize your application code. Once your application code is reasonably optimized, network latency and the write/read performance of the database often remain the bottleneck. To achieve our low-latency requirement, we need to make sure that our data is as close to the client as possible by distributing the data globally. We can deliver the second requirement (write/read performance) by making multiple machines work together, or in other words, by replicating data.

Distribution leads to better performance and consequently to good user experience. We’ve already seen extensive use of a distribution solution that speeds up the delivery of static data; it’s called a Content Delivery Network (CDN). CDNs are highly valued by the Jamstack community to reduce the latency of their applications. They typically use frameworks and tools such as Next.js/Now, Gatsby, and Netlify to pre-assemble front-end React/Angular/Vue code into static websites so that they can be served from a CDN.
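
Conceptually, "pre-assembling" a site boils down to fetching the data once at build time and emitting static files for the CDN. The sketch below shows the idea in plain TypeScript with a placeholder API URL; in practice, frameworks such as Next.js and Gatsby automate this step (plus the React/Vue rendering) for you.

    // A sketch of static-site generation: fetch the data once at build time
    // and emit plain HTML that a CDN can serve. The API URL is a placeholder.
    import { writeFileSync } from "fs";

    async function build(): Promise<void> {
      const res = await fetch("https://api.example.com/articles");
      const articles: { title: string }[] = await res.json();

      const html = `<html><body><ul>${articles
        .map((a) => `<li>${a.title}</li>`)
        .join("")}</ul></body></html>`;

      // The generated file is uploaded to the CDN; no server runs per request.
      writeFileSync("dist/articles.html", html);
    }

    build();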

Unfortunately, CDNs aren’t sufficient for every use case, because we can’t rely on statically generated HTML pages for all applications. There are many types of highly dynamic applications where you can’t statically generate everything. For example:

  1. Applications that require real-time updates for instantaneous communication between users (e.g., chat applications, collaborative drawing or writing, games).
  2. Applications that present data in many different forms by filtering, aggregating, sorting, and otherwise manipulating data in so many ways that you can’t generate everything in advance.

II. Distributed databases

In general, a highly dynamic application will require a distributed database to improve performance. Just like a CDN, a distributed database also aims to become a global network instead of a single node. In essence, we want to go from a scenario with a single database node…

…to a scenario where the database becomes a network. When a user connects from a specific continent, he will automatically be redirected to the closest database. This results in lower latencies and happier end users.

If databases were employees waiting by a phone, the employee on another continent would tell you that there is a colleague closer by and forward your call. Luckily, distributed databases automatically route us to the closest database employee, so we never have to bother the one on the other continent.

Distributed databases are multi-region, and you always get redirected to the closest node.

Besides latency, distributed databases also provide a second and a third advantage. The second is redundancy, which means that if one of the database locations in the network were completely obliterated by a Godzilla attack, your data would not be lost since other nodes still have duplicates of your data.

Distributed databases provide redundancy which can save your application when things go wrong.
Distributed databases divide the load by scaling up automatically when the workload increases.

Last but not least, the third advantage of using a distributed database is scaling. A database that runs on one server can quickly become the bottleneck of your application. In contrast, distributed databases replicate data over multiple servers and can scale up and down automatically according to the demands of the applications. In some advanced distributed databases, this aspect is completely taken care of for you. These databases are known as “serverless”, meaning you don’t even have to configure when the database should scale up and down, and you only pay for the usage of your application, nothing more.

Distributing dynamic data brings us to the realm of distributed databases. As mentioned before, there might be dragons. In contrast to CDNs, the data is highly dynamic; the data can change rapidly and can be filtered and sorted, which brings additional complexities. The database world examined different approaches to achieve this. Early approaches had to make sacrifices to achieve the desired performance and scalability. Let’s see how the quest for distribution evolved.

Traditional databases’ approach to distribution

One logical choice was to build upon traditional databases (MySQL, PostgreSQL, SQL Server) since so much effort has already been invested in them. However, traditional databases were not built to be distributed and therefore took a rather simple approach to distribution. The typical approach to scale reads was to use read replicas. A read replica is just a copy of your data from which you can read but not write. Such a copy (or replica) offloads queries from the node that contains the original data. This mechanism is very simple in that the data is incrementally copied over to the replicas as it comes in.

Due to this relatively simple approach, a replica’s data is always older than the original data. If you read the data from a replica node at a specific point in time, you might get an older value than if you read from the primary node. This is called a “stale read”. Programmers using traditional databases have to be aware of this possibility and program with this limitation in mind. Remember the example we gave at the beginning where we write a value and reread it? When working with traditional database replicas, you can’t expect to read what you write.
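
Here is a hedged sketch of that stale-read scenario, assuming a hypothetical primary connection that accepts writes and a replica connection used for reads.

    // A sketch of the stale-read problem with a primary/replica setup. The
    // primary and replica connections are hypothetical; the replica receives
    // the update asynchronously, some time after the primary has applied it.
    interface Connection {
      query(sql: string): Promise<Array<{ name: string }>>;
    }

    async function staleReadExample(primary: Connection, replica: Connection): Promise<void> {
      await primary.query("UPDATE users SET name = 'Robert' WHERE id = 123");

      // Reads are routed to a replica to offload the primary. If replication
      // lag has not caught up yet, this can still return the old name.
      const rows = await replica.query("SELECT name FROM users WHERE id = 123");
      console.log(rows[0].name); // might still be 'Brecht' instead of 'Robert'
    }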

You could improve the user experience slightly by optimistically applying the results of writes on the front end before all replicas are aware of the writes. However, a reload of the webpage might return the UI to a previous state if the update did not reach the replica yet. The user would then think that his changes were never saved.

The first generation of distributed databases

In the replication approach of traditional databases, the obvious bottleneck is that writes all go to the same node. The machine can be scaled up, but will inevitably bump into a ceiling. As your app gains popularity and writes increase, the database will no longer be fast enough to accept new data. To scale horizontally for both reads and writes, distributed databases were invented. A distributed database also holds multiple copies of the data, but you can write to each of these copies. Since data can be updated via any node, all nodes have to communicate with each other and inform one another about new data. In other words, replication is no longer a one-way street as in the traditional setup.

However, these kinds of databases can still suffer from the aforementioned stale reads and introduce many other potential issues related to writes. Whether they suffer from these issues depends on the decision they made between availability and consistency.

This first generation of distributed databases was often called the “NoSQL movement”, a name influenced by databases such as MongoDB and Neo4j, which also provided alternative languages to SQL and different modeling strategies (documents or graphs instead of tables). NoSQL databases often did not have typical traditional database features such as constraints and joins. As time passed, this turned out to be a poor name, since many databases that were considered NoSQL did provide a form of SQL. Multiple interpretations arose, claiming that NoSQL databases:

  • do not provide SQL as a query language.
  • do not only provide SQL (NoSQL = Not Only SQL).
  • do not provide typical traditional features such as joins, constraints, and ACID guarantees.
  • model their data differently (graph, document, or temporal model).

Some of the newer databases that offered SQL and ACID guarantees while scaling like NoSQL systems were then called “NewSQL” to avoid confusion.

Wrong interpretations of the CAP theorem

The first generation of databases was strongly inspired by the CAP theorem, which dictates that you can’t have both Consistency and Availability during a network Partition. A network partition occurs when something prevents two nodes from talking to each other about new data, and it can arise for many reasons (e.g., apparently sharks sometimes munch on Google’s cables). Consistency means that the data in your database is always correct, but not necessarily available to your application. Availability means that your database is always online and that your application is always able to access that data, but it does not guarantee that the data is correct or the same on multiple nodes. We generally speak of high availability since there is no such thing as 100% availability. Availability is expressed in nines (e.g., 99.9999% availability) since there is always a possibility that a series of events causes downtime.

Visualization of the CAP theorem: a balance between consistency and availability in the event of a network partition.

But what happens if there is no network partition? Database vendors interpreted the CAP theorem a bit too broadly and either chose to accept potential data loss or to be available, whether there is a network partition or not. While the CAP theorem was a good start, it did not emphasize that it is possible to be highly available and consistent when there is no network partition. Most of the time, there are no network partitions, so it made sense to describe this case by expanding the CAP theorem into the PACELC theorem. The key difference is the last three letters (ELC), which stand for “Else, Latency or Consistency”. This theorem dictates that if there is no network partition, the database has to balance Latency and Consistency.

According to the PACELC theorem, increased consistency results in higher latencies (during normal operation).

In simple terms: when there is no network partition, latency goes up when the consistency guarantees go up. However, we’ll see that reality is even more subtle than this.

How is this related to User Experience?

Let’s look at an example of how giving up consistency can impact user experience. Consider an application that provides you with a friendly interface to compose teams of people; you drag and drop people into different teams.

Once you drag a person into a team, an update is triggered to update that team. If the database does not guarantee that your application can read the result of this update immediately, then the UI has to apply those changes optimistically, as sketched after this list. In that case, bad things can happen:

  • The user refreshes the page and no longer sees his update, so he thinks it is gone. When he refreshes again, it suddenly reappears.
  • The database did not store the update successfully due to a conflict with another update. In this case, the update might be canceled, and the user will never know. He might only notice that his changes are gone the next time he reloads.
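
The sketch below shows what such an optimistic update typically looks like on the front end. The ui and api objects are hypothetical stand-ins for your state management and data layer; the two failure modes above still have to be handled by hand.

    // A sketch of an optimistic team update with hypothetical ui/api objects.
    interface TeamUI {
      showInTeam(personId: string, teamId: string): void;
      rollback(): void;
      notify(message: string): void;
    }

    interface TeamAPI {
      moveToTeam(personId: string, teamId: string): Promise<void>;
    }

    async function movePersonOptimistically(
      ui: TeamUI,
      api: TeamAPI,
      personId: string,
      teamId: string
    ): Promise<void> {
      // 1. Apply the change locally right away so the drag-and-drop feels instant.
      ui.showInTeam(personId, teamId);

      try {
        // 2. Send the update. With eventual consistency, success here still does
        //    not guarantee that the next read (e.g., a page refresh) will see it.
        await api.moveToTeam(personId, teamId);
      } catch {
        // 3. The write was rejected (for example, due to a conflicting update):
        //    undo the local change and tell the user, or it silently disappears.
        ui.rollback();
        ui.notify("Could not save your change, please try again.");
      }
    }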

This trade-off between consistency and latency has sparked many heated discussions between front-end and back-end developers. The first group wanted a great UX where users receive feedback when they perform actions and can be 100% sure that once they receive this feedback and respond to it, the results of their actions are consistently saved. The second group wanted to build a scalable and performant back end and saw no other way than to sacrifice the aforementioned UX requirements to deliver that.

Both groups had valid points, but there was no silver bullet to satisfy both. When transactions increased and the database became the bottleneck, their only option was to go for either traditional database replication or a distributed database that sacrificed strong consistency for something called “eventual consistency”. In eventual consistency, an update to the database will eventually be applied on all machines, but there is no guarantee that the next transaction will be able to read the updated value. In other words, if I update my name to “Robert”, there is no guarantee that I will actually receive “Robert” if I query my name immediately after the update.

Consistency Tax

To deal with eventual consistency, developers need to be aware of the limitations of such a database and do a lot of extra work. Programmers often resort to user-experience hacks to hide the database limitations, and back ends have to include lots of additional layers of code to accommodate various failure scenarios. Finding and building creative solutions around these limitations has profoundly impacted the way both front- and back-end developers do their jobs, significantly increasing technical complexity while still not delivering an ideal user experience.
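
One common flavor of that extra work is polling after a write until the data becomes visible. The sketch below assumes a hypothetical getUser function and a simple exponential back-off; real code still has to decide what to do when the write never shows up.

    // A workaround sketch: poll after a write until the replica catches up.
    // getUser is a hypothetical stand-in for your data-access layer.
    async function waitForOwnWrite(
      getUser: (id: string) => Promise<{ name: string }>,
      id: string,
      expectedName: string,
      attempts = 5
    ): Promise<{ name: string }> {
      for (let i = 0; i < attempts; i++) {
        const user = await getUser(id);
        if (user.name === expectedName) {
          return user; // the write has finally become visible
        }
        // Back off exponentially before asking again.
        await new Promise((resolve) => setTimeout(resolve, 100 * 2 ** i));
      }
      throw new Error("Write still not visible after retrying");
    }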

We can think of this extra work required to ensure data correctness as a “tax” an application developer must pay to deliver good user experiences. That is the tax of using a software system that doesn’t offer consistency guarantees that hold up in today’s web-scale concurrent environments. We call this the Consistency Tax.

Thankfully, a new generation of databases has evolved that does not require you to pay the Consistency Tax and can scale without sacrificing consistency!

The second generation of distributed databases

A second generation of distributed databases has emerged to provide strong (rather than eventual) consistency. These databases scale well, won’t lose data, and won’t return stale data. In other words, they do what you expect, and it’s no longer required to learn about the limitations or pay the Consistency Tax. If you update a value, the next time you read that value, it always reflects the updated value, and different updates are applied in the same temporal order as they were written. At the time of writing, FaunaDB, Spanner, and FoundationDB are the only databases that offer strong consistency without limitations (also called strict serializability).

The PACELC theorem revisited

The second generation of distributed databases has achieved something that was previously considered impossible; they favor consistency and still deliver low latencies. This became possible due to intelligent synchronization mechanisms such as Calvin, Spanner, and Percolator, which we will discuss in detail in article 4 of this series. While older databases still struggle to deliver high consistency guarantees at lower latencies, databases built on these new intelligent algorithms suffer no such limitations.

Database design greatly influences the attainable latency at high consistency.

Since these new algorithms allow databases to provide both strong consistency and low latencies, there is usually no good reason to give up consistency (at least in the absence of a network partition). The only time you would do this is if extremely low write latency is the only thing that truly matters, and you are willing to lose data to achieve it.

Intelligent algorithms result in strong consistency and relatively low latencies.

Are these databases still NoSQL?

It’s no longer trivial to categorize this new generation of distributed databases. Many efforts are still being made (1, 2) to explain what NoSQL means, but none of them make perfect sense anymore since NoSQL and SQL databases are growing towards each other. New distributed databases borrow from different data models (document, graph, relational, temporal), and some of them provide ACID guarantees or even support SQL. They still have one thing in common with NoSQL: they are built to solve the limitations of traditional databases. One word will never be able to describe how a database behaves. In the future, it would make more sense to describe distributed databases by answering these questions:

  • Is it strongly consistent?
  • Does the distribution rely on read-replicas, or is it truly distributed?
  • What data models does it borrow from?
  • How expressive is the query language, and what are its limitations?

Conclusion

We explained how applications can now benefit from a new generation of globally distributed databases that can serve dynamic data from the closest location, in a CDN-like fashion. We briefly went over the history of distributed databases and saw that it was not a smooth ride. Many first-generation databases were developed, and their consistency choices, which were mainly driven by the CAP theorem, required us to write more code while still diminishing the user experience. Only recently has the database community developed algorithms that allow distributed databases to combine low latency with strong consistency. A new era is upon us, a time when we no longer have to make trade-offs between latency and consistency!

At this point, you probably want to see concrete examples of the potential pitfalls of eventually consistent databases. In the next article of this series, we will cover exactly that. Stay tuned for these upcoming articles:

Article Series

  1. Why should you care?
  2. What can go wrong? (Coming soon)
  3. What are the barriers to adoption? (Coming soon)
  4. How do new algorithms help? (Coming soon)

