There have been a couple of viral tweets about this lately, one from Adam Argyle and one from Mathias Bynens. This is a nice change that makes CSS a bit clearer. Before, every color function effectively needed two versions, one with transparency and one without; this eliminates that need and brings the syntax more in line with CSS grammar overall.
Here’s the gist of the change, along the lines of Mathias’ tweet (my own example, not his exact snippet):
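/* Old syntax: commas, plus a separate rgba()/hsla() function for transparency */
.old {
  color: rgb(255, 0, 0);
  background: rgba(255, 0, 0, 0.5);
}

/* New syntax: space-separated channels, with the alpha value after a slash */
.new {
  color: rgb(255 0 0);
  background: rgb(255 0 0 / 50%);
}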
If you need IE 11 support, you can preprocess it (or just not use it). PostCSS’s preset-env handles it, as does the very specific plugin postcss-color-rgb (oddly, it doesn’t handle HSL as well).
If you don’t like it, you literally never need to use it. No browser will ever pull support for such an important feature.
The reason to switch is muscle memory and consistent-looking codebases, as new color functions (e.g., lab, lch, and color) will only support this new syntax.
There is a weird hybrid between old and new: you can pass an alpha value to rgb() with the old comma syntax, like rgb(255, 0, 0, 0.5), and it still works.
If you need it in Sass (which is apparently a pain to support), there is a weird workaround. I would guess Sass will get around to supporting it. If they can’t, this is the kind of barb that drives people away from projects.
Prettier, which is in the business of cleaning up your code from the perspective of spacing and syntax, could intervene here and convert syntax, but it’s not going to (the Prettier stance is to not change the AST).
I imagine DevTools will start showing colors in this format, which will drive adoption.
You have all the potential in the world to be the most sought-after graphic designer in history.
Re-read that sentence again and believe it.
There are a lot of factors that go into being the best designer you can be, but one thing stands true: you could be the best designer in your field, and it won’t matter if no one knows about you.
One reason you might not be wildly succeeding right now is that you’re not promoting yourself and your work.
Or you’re not taking advantage of all the places you could be publishing your work.
5 Platforms That Will Help You Grow
Today I want to go over my 5 go-to platforms where I publish my work when I create something I’m proud of.
And you should do the same!
Without further ado, let’s get into it.
1. Dribbble
Okay, I know that you were expecting to see this as number one, but that’s for good reason!
If you’re not posting your work to Dribbble every time you create something bomb, then you’re doing it wrong.
If by the smallest chance in the world, you don’t know what Dribbble is, then let me break it down for you.
According to Wikipedia, Dribbble is a self-promotion and social networking platform for digital designers and creatives.
It serves as a design portfolio platform and a jobs and recruiting site, and is one of the largest platforms for designers to share their work online.
The company is fully remote with no headquarters.
It’s a place where designers gain inspiration, feedback, community, and jobs, and it’s your best resource for discovering and connecting with designers worldwide.
Let me just show you what 2018 held for Dribbble.
You can easily sign up and upload your work. It’s free to join and to post JPG images, but for more advanced projects, you might have to upgrade to Pro.
Lots of people have connected with contractors and found jobs through this platform, so I highly recommend it!
2. Behance
Are you surprised to see Behance right after Dribbble?
I suppose that none of us are.
If you have Dribbble, you need to have Behance as well.
I just feel like the two really are the perfect combo.
Behance is owned by Adobe and is a social media platform that was created for designers to showcase their work and get feedback from other designers.
You might also be found by your next client, who knows!
Behance is very similar to Dribbble in that you just search for a keyword to find loads of inspirational designs, and you can connect with like-minded designers.
Behance is also free of cost, and we all love that.
3. Hunie
If you’re always second-guessing your work, then you need to get on Hunie.
Hunie is a platform where you can submit your work strictly to get constructive criticism back from professionals.
It’s great to have an awesome community of other designers who are ready and willing to help you improve.
All you have to do is request an invite and then submit your work for advice and opinions.
You will get the opportunity to connect with others and grow, and also be able to give your opinion and help to other designers.
4. Pinterest
If you’re sleeping on Pinterest, then you’ve got another thing coming.
Pinterest is such a powerful tool for gaining a following and exposure.
It’s completely free, and although it’s not made specifically for designers, that doesn’t mean you can’t engage with other awesome designers on the platform.
Use the right keywords and find out what’s trending and post your work!
5. Instagram
I’m almost certain you have an Instagram.
You should be using Instagram to your advantage!
I wouldn’t necessarily say you should post your work on your personal profile, but I have definitely created a secondary profile.
Make a profile that’s dedicated to sharing your work.
The power of hashtags is incredible.
Whenever you upload your work, make sure you use about 5 meaningful hashtags to describe your work so you can be on the search page!
You can also promote your work very inexpensively and get thousands of impressions and likes from people who genuinely will love your work.
Wrapping up
Your work deserves to be seen, because it’s awesome.
All you need to do is put yourself out there!
Use these platforms daily to gain inspiration from others, and to grow your own following and community.
Let me know what other platforms you guys use to promote your work.
If there’s one industry that is known for openly embracing innovative technologies and techniques, it is undeniably Gaming.
Over the last decade, the gaming industry has never sat still and let opportunities pass it by. In fact, it has been called the earliest adopter of technologies that eventually go mainstream. A clear example of this is the integration of mobility solutions into the sector in the form of eSports.
eSports, as one source puts it, is organized competitive sport played via video games. It mainly involves players competing against each other in tournaments for prize money, not by physically showing up at a venue, but by tapping on their smartphones. This field is also popularly known as e-gaming, competitive gaming, pro-gaming, or organized play.
Because of this convenience, the industry is growing at an astounding rate. eSports apps like Fortnite, Counter-Strike, Tom Clancy’s Ghost Recon: Wildlands, and Call of Duty have become favorite mobility solutions. And the eSports market, worth USD 694.2 Mn in 2017, is anticipated to be valued at USD 2,714.8 Mn by 2023, a CAGR of 18.61% over the forecast period.
However, there’s a challenge.
It’s true that young, zealous players are spending most of their time on these eSports platforms. But at the same time, they are demanding more lifelike experiences from entrepreneurs and investors. In other words, eSports users want to feel the action and the venues as if they were the main player in the game itself.
This is where Augmented Reality (AR) and Virtual Reality (VR) are coming into play.
These technologies are opening up endless possibilities and opportunities for eSports business owners, which in turn is drawing them toward investing in the AR- and VR-based gaming markets; markets which, according to top surveys, are expected to reach $284.93 Bn and $45.09 Bn by 2023 and 2025, respectively.
Wondering what the potential of AR and VR is in the eSports domain?
Let’s talk through exactly that in this article.
Benefits of Using AR/VR in eSports Space
1. Setting the foundation of new games
The foremost reason eSports companies are interested in adopting AR and VR is that these technologies enable entirely new kinds of games. They are changing formats and even adding new elements to existing games, making them more exciting, intriguing, and profitable.
2. Better gathering and presentation of data
Another area where AR and VR technologies are making a disruptive change is the gathering and presentation of data.
Unlike traditional data collection methods, these technologies enable sports and gaming organizations to make full use of virtual space in addition to the “traditional” environment and perspective. That makes it possible to gather and present detailed information about every individual player within the environment itself, which in turn makes the experience a more compelling one.
3. Engagement of a wider audience
As mentioned in the previous point, introducing augmented and virtual reality into the eSports gaming industry also drives audience engagement.
These technologies allow eSports agencies to turn casual gamers and people who only occasionally watch tournaments and competitions into regular eSports app users, enticing them with creative 3D content that makes the gaming interface nearly impossible to resist and, eventually, compels them to install and use the application.
4. Improvement of eSports content strategy
Earlier, the content strategy of eSports organizations was confined to traditional, 2D resources. Both startups and established companies in this industry struggled to capture users’ attention, especially through advertisements, since people either skip them or resent the apps that show them.
These technologies empower entrepreneurs and marketers to extend their reach across different spaces and, in doing so, revolutionize their content strategy.
5. Concept of Positional audio
Fantasy sports app developers and organizations are also harnessing the power of AR and VR for the benefits of positional audio.
That means introducing different sounds and effects into the gaming environment whenever users are not fully immersed in the experience.
6. Sensation of immersive experience
Virtual and augmented reality also give players a genuine sense of immersion in the environment: the ability to see the structure of a car or the exterior of a house before entering it, to judge the depth of a street while walking across it, or to sense distance while keeping the horizon and densities consistent.
In this way, they recreate the feel of reality, adding stress and competition without causing visual fatigue for participants. A real-world example of this is the Pokémon Go app.
7. Higher Revenue generation
What’s more, the combination of augmented reality, virtual reality, and eSports is lifting customer engagement and retention, which ultimately opens up better ways to earn money.
8. Lower Competition
Lastly, since not every eSports company is familiar with the benefits these cutting-edge technologies can bring to its business model, let alone has invested in them, agencies that move early have a golden opportunity to gain a competitive advantage.
Seeing these advantages of virtual and augmented reality in the eSports industry, many entrepreneurs are eager to bring these technologies into their businesses and are looking for the best platforms on which to develop their own AR- or VR-based eSports solutions.
If you are thinking the same, know that building such solutions is a challenging task. You have to overcome difficulties such as changes to your business model and the lack of high-speed, low-latency networks, which is why it is important to hire eSports app developers who have prior knowledge of and experience working with these technologies.
The state of the world around us can greatly impact website design. From emotional changes that correlate to the feel of a design to information and data to deliver, the impact of the worldwide COVID-19 pandemic is making its way into projects (intentionally or not).
Here’s what’s trending in design this month.
1. “Unbalanced” Use of Space
Space can be a huge influence on a design project. It impacts visual flow and can drive engagement.
As of late, many designs have taken a more balanced and symmetrical approach to using space, but that’s changing again with more websites that feature an “unbalanced” use of space. (And it’s quite nice.)
We are calling this trend “unbalanced” because open space may or may not use a counter-weight to keep the design from feeling lopsided. There are a lot of different ways to do it: with white space, in images, with layers and how elements are stacked, and with patterns. This can be a rather challenging technique because responsive breakpoints can dramatically change how space appears if you aren’t careful.
Each of these projects tackles “unbalanced” space in a different and equally neat way.
Yukon 1000 creates two different bits of unbalanced space in two different ways:
On the left side of the screen with background white space that the image does not extend into and includes text layers.
In the image itself with a distinct shape and plenty of open space on the water.
Both of the elements are unbalanced when they stand alone, but combined the open space on the left and open space in the image pull the eye back to the middle content area of the design.
The Art of Tea website uses space in the image with nice stacking and the use of shapes in the positioning of objects in unbalanced space. Note how packed the left side of the image is compared to the bleakness of the right side. Even the text only extends partway across.
This aesthetic helps pull your eyes first across the screen to read the text, and then down past the scroll.
Growcase could be a case study in the use of space itself, but focus on the top of the screen. The brand identity stretches about one-third of the way across and is met by nothing other than a small social media/portfolio link trio.
There’s also a lot of space below it and negative spaces with each of the portfolio blocks. The black areas pull you through the design, whether you are ready to move through it or not.
2. New Image Slider Concepts
It hasn’t been all that long since website designers had all but declared sliders dead.
They’re back.
But they look a lot different and are much more interesting.
Trending in slider design is using sliding elements with interesting animations and shapes that don’t look like sliders. What’s interesting is the commonalities between them – even though they look spectacularly different.
Each of these slider examples:
Uses a circle pointer that expands when you hover over a clickable element
Includes visual cues, such as navigation arrows
Features some type of portfolio content
Lets slider images stand alone, without covering them up with words or other elements
Additionally, each example has a few fun tricks of its own.
Clarity uses a slider that moves automatically based on mouse position, and all of the images appear within the circle frame.
Revise Concept also uses an automatic scrolling slider with images that move in a diagonal fashion. You can also use the arrows to scroll forward and back on your own.
Prezman only previews the slider on the homepage, but click over and it expands to full screen to show a variety of portfolio pieces with simple, but neat, animation.
3. Coronavirus Data
The most obvious sign of the coronavirus on website design is in the number of websites that contain information about the virus. Trending seems to be website designs that provide different types of data visualizations about COVID-19.
Google has a page (that’s simple in true Google form), but these examples take the data to new visual levels. Data visualizations vary in data presented — local versus worldwide — and in how the information is put together.
While these sites all have distinct purposes, they provide nice examples of how to handle large amounts of data well in a visual format.
Coronavirus Now takes a variety of data and uses virtual jars of balls to show proportional relationships between three data points. What’s really neat about this design is the use of animation and labelling to make data understandable.
COVID-19 Global Data Dashboard breaks down different information and data points in a variety of chart types – bar, circle, and more. Using different visualizations can help people who comprehend data in different ways better understand the information.
COVID-19 Community Vulnerability Index Map uses a mapping system to show where potential outbreaks could happen. Using map data can be a more understandable way to visualize complicated data.
Conclusion
While two of these website design trends have nothing to do with the coronavirus, we will start to see more things that are influenced by this worldwide health issue. Pay attention, and it’s likely that design will start to shift to elements with fewer large groups of people, face mask use and imagery will be more mainstream, and even colors might fade to more subdued hues. We are already seeing the impact on design and it is likely to continue to influence trends.
The Ionic Framework is an open-source UI toolkit for building fast, high-quality applications using web technologies with integrations for popular frameworks like Angular and React. Ionic enables cross-platform development using either Cordova or Capacitor, with the latter featuring support for desktop application development using Electron.
In this article, we will explore Ionic with the React integration by building an app that displays comics using the Marvel Comics API and allows users to create a collection of their favorites. We’ll also learn how to integrate native capabilities into our app with Capacitor and generate builds for a native platform.
If you have not worked with Ionic in the past, or you’re curious to find out how Ionic works with React, this tutorial is for you.
Prerequisites
Before you can start building apps with the Ionic Framework, you will need the following:
a Marvel developer account with an API key. You can get one here
Here’s a picture of what we’ll be building:
Installing Ionic CLI
Ionic apps are created and developed primarily through the Ionic command line interface (CLI). The CLI offers a wide range of dev tools and help options as you develop your hybrid app. To proceed with this guide, you will need to make sure the CLI is installed and accessible from your terminal.
Open a new terminal window and run the following command:
npm install -g @ionic/cli
This will install the latest version of the Ionic CLI and make it accessible from anywhere on your computer. If you want to confirm that the install was successful, you can run the following command:
ionic --version
This command will output the installed Ionic version on your computer and it should be similar to this:
6.4.1
You can now bootstrap Ionic apps for the officially supported framework integrations — Angular and React — using any of the prebuilt templates available.
Starting An Ionic React Application
Creating an Ionic React application is easy using the CLI. It provides a command named start that generates files for a new project based on the JavaScript framework you select. You can also choose to start off with a prebuilt UI template instead of the default blank “Hello world” app.
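To bootstrap the project used in this guide, something like the following should do it (marvel-client is the project name used throughout this article):

ionic start marvel-client tabs --type=react --capacitor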
This command will create a new Ionic React app using the tabs template. It also adds a Capacitor integration to your app. Capacitor is a cross-platform app runtime that makes running web apps natively on iOS, Android, and desktop easy.
Navigate your terminal to the newly created directory and run the following commands to start the server.
cd marvel-client
ionic serve
Now point your browser to http://localhost:8100 to see your app running.
Note: If you have used create-react-app (CRA) before, your current project’s directory structure should feel very familiar. That’s because, in order to keep the development experience familiar, Ionic React projects are created using a setup similar to that found in a CRA app. React Router is also used to power app navigation under the hood.
Creating A React Component
You are going to create a reusable React component in this step. This component will receive data and display information about a comic. This step also aims to help demonstrate that Ionic React is still just React.
Delete the files for the ExploreContainer component from src/components and remove its imports from the .tsx files in the src/pages directory.
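Next, create a file named ComicCard.tsx inside src/components. A minimal sketch of what it might look like, assuming a Comic prop shaped like the Marvel comic resource:

// src/components/ComicCard.tsx (sketch; the Comic shape is assumed)
import React from 'react';
import {
  IonCard,
  IonCardHeader,
  IonCardSubtitle,
  IonCardTitle,
  IonImg,
} from '@ionic/react';

// Minimal shape of the Marvel comic resource used by this card
interface Comic {
  id: number;
  title: string;
  series: { name: string };
  thumbnail: { path: string; extension: string };
}

const ComicCard: React.FC<{ comic: Comic }> = ({ comic }) => (
  <IonCard>
    <IonImg src={`${comic.thumbnail.path}.${comic.thumbnail.extension}`} />
    <IonCardHeader>
      <IonCardTitle>{comic.title}</IonCardTitle>
      <IonCardSubtitle>{comic.series.name}</IonCardSubtitle>
    </IonCardHeader>
  </IonCard>
);

export default ComicCard;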
Your ComicCard component receives props containing details of a comic and renders the information using an IonCard component. Cards in Ionic are usually composed from other subcomponents. In this file, you are using the IonCardTitle and IonCardSubtitle components to render the comic title and series information within an IonCardHeader component.
Consuming The Marvel API
To use your newly created component, you’ll have to fetch some data from the Marvel API. For the purpose of this guide, you are going to use the axios package to make your HTTP requests. You can install it by running the following command:
yarn add axios
Next, add the following folder to your src directory:
# ~/Desktop/marvel-client/src
mkdir -p services
Then, cd into the services directory and create a file named api.ts:
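A sketch of what api.ts might contain, with the endpoint parameters as assumptions:

// src/services/api.ts (sketch; endpoint parameters assumed)
import axios from 'axios';

// Replace with your own public Marvel API key
const API_KEY = '<YOUR_MARVEL_API_KEY>';

const api = axios.create({
  baseURL: 'https://gateway.marvel.com/v1/public',
});

// Fetch a collection of comics, limited to those available digitally
export const getComics = () =>
  api.get('/comics', {
    params: {
      apikey: API_KEY,
      hasDigitalIssue: true,
      limit: 20,
    },
  });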
Be sure to replace the value of API_KEY with your own API key. If you don’t have one, you can request one by signing up at the Marvel developer website. You also need to set up your account to allow requests from your local development server by adding localhost* to your Marvel authorized referrers list.
You now have an axios instance configured to use the Marvel API. The api.ts file has only one export, which hits the GET /comics endpoint and returns a collection of comics. You are limiting the results to only those that are available digitally. You will now proceed to use the API Service in your application.
Open the Tab1.tsx file and replace the contents with the following:
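A sketch of what Tab1.tsx might look like, with the state shape and response handling as assumptions:

// src/pages/Tab1.tsx (sketch; details assumed)
import React, { useEffect, useState } from 'react';
import {
  IonContent,
  IonHeader,
  IonPage,
  IonSpinner,
  IonTitle,
  IonToolbar,
} from '@ionic/react';
import ComicCard from '../components/ComicCard';
import { getComics } from '../services/api';

const Tab1: React.FC = () => {
  const [comics, setComics] = useState<any[]>([]);
  const [loading, setLoading] = useState(true);

  const fetchComics = async () => {
    try {
      const response = await getComics();
      // Marvel responses nest the results under data.data.results
      setComics(response.data.data.results);
    } finally {
      setLoading(false);
    }
  };

  useEffect(() => {
    fetchComics();
  }, []);

  return (
    <IonPage>
      <IonHeader>
        <IonToolbar>
          <IonTitle>Comics</IonTitle>
        </IonToolbar>
      </IonHeader>
      <IonContent>
        {loading ? (
          <IonSpinner />
        ) : (
          comics.map((comic) => <ComicCard key={comic.id} comic={comic} />)
        )}
      </IonContent>
    </IonPage>
  );
};

export default Tab1;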
The file above is an example of a page in Ionic. Pages are components that can be accessed with a route/URL. To ensure transitions between pages work properly, it is necessary to have the IonPage component be the root component in your page.
IonHeader is a component meant to exist at the top of a page. It’s not required for all pages, but it can contain useful components like the page title, the IonBackButton component for navigating between pages, or the IonSearchBar. IonContent is the main content area for your pages. It’s responsible for providing the scrollable content that users will interact with, plus any scroll events that could be used in your app.
Inside your component, you have a function called fetchComics() — called once inside the useEffect() hook — which makes a request to get comics from the Marvel API by calling the getComics() function you wrote earlier. It saves the results to your component’s state via the useState() hook. The IonSpinner component renders a spinning icon while your app is making a request to the API. When the request is completed, you pass the results to the ComicCard component you created earlier.
At this point your app should look like this:
In the next step, you will learn how to use Capacitor plugins in your app by enabling offline storage.
Creating a Personal Collection of Marvel Comics
Your app looks good so far, but it isn’t very useful as a mobile app. In this step you will extend your app’s functionality by allowing users to ‘star’ comics, or save them as favorites. You will also make information about the saved favorites available to view offline by using the Capacitor Storage plugin.
First, create a file named util.ts in your src directory:
# ~/Desktop/marvel-client/src
touch util.ts
Now, open the file and paste the following contents:
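A sketch of what util.ts might contain, assuming the Capacitor 2-era Plugins API and an arbitrary storage key:

// src/util.ts (sketch; key name and Comic shape are assumptions)
import { Plugins } from '@capacitor/core';

const { Storage, Toast } = Plugins;

const FAVOURITES_KEY = 'favourites';

export interface Comic {
  id: number;
  title: string;
  series: { name: string };
  thumbnail: { path: string; extension: string };
}

// Read the saved comics from device storage
export const getFavourites = async (): Promise<Comic[]> => {
  const { value } = await Storage.get({ key: FAVOURITES_KEY });
  return value ? JSON.parse(value) : [];
};

// Check whether a comic with the given resource ID has been saved
export const checkFavourites = async (id: number): Promise<boolean> => {
  const favourites = await getFavourites();
  return favourites.some((comic) => comic.id === id);
};

// Toggle a comic in or out of the saved favourites
export const updateFavourites = async (comic: Comic): Promise<void> => {
  const favourites = await getFavourites();
  const exists = favourites.some((saved) => saved.id === comic.id);

  const updated = exists
    ? favourites.filter((saved) => saved.id !== comic.id)
    : [...favourites, comic];

  await Storage.set({ key: FAVOURITES_KEY, value: JSON.stringify(updated) });
  await Toast.show({
    text: exists ? 'Removed from favourites' : 'Saved to favourites',
  });
};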
The Storage plugin provides a key-value store for simple data, while the Toast plugin provides a notification pop-up for displaying important information to a user.
The updateFavourites() function in this file takes a single argument, a Comic object, and adds it to the device storage if it doesn’t exist, or removes it from the device storage if it was already saved. getFavourites() returns the user’s saved comics, while checkFavourites() accepts a single argument, a Comic resource ID, and looks it up in the saved comics, returning true if it exists, or false otherwise.
Next, open the ComicCard.tsx file and make the following changes to allow your app’s users to save their favorite comics:
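A sketch of the updated component, with the icon choice and layout as assumptions:

// src/components/ComicCard.tsx (sketch of the updated component)
import React from 'react';
import {
  IonButton,
  IonCard,
  IonCardContent,
  IonCardHeader,
  IonCardSubtitle,
  IonCardTitle,
  IonIcon,
  IonImg,
} from '@ionic/react';
import { star } from 'ionicons/icons';
import { Comic, updateFavourites } from '../util';

const ComicCard: React.FC<{ comic: Comic }> = ({ comic }) => (
  <IonCard>
    <IonImg src={`${comic.thumbnail.path}.${comic.thumbnail.extension}`} />
    <IonCardHeader>
      <IonCardTitle>{comic.title}</IonCardTitle>
      <IonCardSubtitle>{comic.series.name}</IonCardSubtitle>
    </IonCardHeader>
    <IonCardContent>
      {/* Toggles the comic in or out of the saved favourites */}
      <IonButton onClick={() => updateFavourites(comic)}>
        <IonIcon slot="start" icon={star} />
        Favourite
      </IonButton>
    </IonCardContent>
  </IonCard>
);

export default ComicCard;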
Your ComicCard component now has an IonButton component that, when clicked, calls the updateFavourites() function you wrote earlier. Remember that the function acts like a toggle, removing the comic if it was already saved, or else saving it. Don’t forget to add the imports for the new Ionic components, IonButton, IonCardContent, and IonIcon, that were just added to this component.
Now for the final part of this step, where you will be rendering saved comics in their own page. Replace the contents of the Tab2.tsx file with the following:
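A sketch of what Tab2.tsx might look like, with the markup details as assumptions:

// src/pages/Tab2.tsx (sketch; markup details assumed)
import React, { useState } from 'react';
import {
  IonContent,
  IonHeader,
  IonPage,
  IonTitle,
  IonToolbar,
  useIonViewWillEnter,
} from '@ionic/react';
import ComicCard from '../components/ComicCard';
import { Comic, getFavourites } from '../util';

const Tab2: React.FC = () => {
  const [favourites, setFavourites] = useState<Comic[]>([]);

  // Refresh the list every time this page is about to enter into view
  useIonViewWillEnter(() => {
    getFavourites().then(setFavourites);
  });

  return (
    <IonPage>
      <IonHeader>
        <IonToolbar>
          <IonTitle>Favourites</IonTitle>
        </IonToolbar>
      </IonHeader>
      <IonContent>
        {favourites.map((comic) => (
          <ComicCard key={comic.id} comic={comic} />
        ))}
      </IonContent>
    </IonPage>
  );
};

export default Tab2;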
This page is quite similar to the Tab1 page but, instead of making an API request to get comics, you are accessing locally saved data. You are also using the Ionic life cycle hook, useIonViewWillEnter(), instead of a useEffect() hook, to make a call to the function that reads saved comics and updates the component’s state. The useIonViewWillEnter() hook gets called just as the page being navigated to enters into view.
Your application now makes use of a few native plugins to improve its functionality. In the next step, you will learn how to generate a native project for Android and create a native app using Android Studio.
Note: You can delete the files related to Tab3 and remove the import and the related IonTab component in the App.tsx file.
Generating A Native Project
Ionic comes with support for cross-platform app runtimes such as Capacitor and Cordova. These frameworks help you to build and run apps developed using Ionic on a native device or emulator. For the purpose of this guide, you will be using Capacitor to generate native project files.
Before proceeding to adding a platform, you will need to generate a production build of your application. Run the following command in your project’s root directory to do so:
ionic build
Now let’s add Capacitor to your project and generate the assets required to build a native application. Capacitor provides a CLI which can be accessed in your project by using npx or from the ionic CLI as shown below:
Using npx
npx cap add android
This command adds the android platform to your project. Other possible platform values are ios and electron.
Using ionic
Since you initialized your project using the --capacitor flag earlier, Capacitor has already been initialized with your project’s information. You can proceed to adding a platform by running the following command:
ionic capacitor add android
This command will install the required dependencies for the android platform. It will also generate files required for a native Android project and copy over the assets you built earlier when running ionic build.
If you have installed Android Studio, you can now open your project in Android Studio by running:
ionic capacitor open android
Finally, build your project:
Conclusion
In this guide, you have learned how to develop hybrid mobile applications using Ionic Framework’s React integration. You also learned how to use Capacitor for building native apps, specifically for the Android platform. Check out the API docs as there are a lot more UI components available to be used in Ionic apps that we didn’t explore. You can find the code on GitHub.
You should really look at everything Amelia does, but I get extra excited about her interactive blog posts. Her latest about creating a gauge with SVG in React is unreal. Just the stuff about understanding viewBox is amazing and that’s like 10% of it.
Every week users submit a lot of interesting stuff on our sister site Webdesigner News, highlighting great content from around the web that can be of interest to web designers.
The best way to keep track of all the great stories and news being posted is simply to check out the Webdesigner News site, however, in case you missed some here’s a quick and useful compilation of the most popular designer news that we curated from the past week.
Note that this is only a very small selection of the links that were posted, so don’t miss out and subscribe to our newsletter and follow the site daily for all the news.
Design Resources – Curated List of Tools for Designers
Google is Shutting Down Another Social Network You’ve Never Heard of
Help Me Decide Please
Top 10 Remote Work Apps & Tools
New Media Queries You Need to Know
Coronavirus Country Comparator
Dopely Colors – Super Fast Color Palette Generator
Contra Open Source Wireframe Kit
Five Happy Links
Create a CSS Only Image Gallery
Disney is Claiming Anyone Who Uses a Twitter Hashtag is Agreeing to a Disney TOS
Alpine.js: The JavaScript Framework That’s Used like JQuery, Written like Vue, and Inspired by TailwindCSS
How to Choose the Right Colors for your Illustrations
5 Catchy Trends in Digital Illustration in 2020
Introducing Insomnia Designer
Stay Home Icon
Unpacking the Golden Ratio in Design
Google Meet Premium Video Conferencing is Now Free for Everyone
User Interface Design Inspiration
How to Pick a Font for any Design
Designers Share the Best Advice They’ve Ever been Given
The Hero Generator
Designing with Letters
Mega List of Remote Job Websites & Freelance Websites
The Importance of Great User Onboarding
Want more? No problem! Keep track of top design news from around the web with Webdesigner News.
I keep running across these super useful one page sites, and they keep being by the same person! Like this one with over 100 vanilla JavaScript DOM manipulation recipes, this similar one full of one-liners, and this one with loads of layouts. For that last one, making 91 icons for all those design patterns is impressive alone. High five, Phuoc.
This is my favorite sort of marketing. Some of the products aren’t free, like the React PDF Viewer. How do you get people to know about your paid thing? Give a bunch of useful stuff away for free and have the paid thing sitting right next to it.
Integration tests are a natural fit for interactive websites, like ones you might build with React. They validate how a user interacts with your app without the overhead of end-to-end testing.
This article follows an exercise that starts with a simple website, validates behavior with unit and integration tests, and demonstrates how integration testing delivers greater value from fewer lines of code. The content assumes a familiarity with React and testing in JavaScript. Experience with Jest and React Testing Library is helpful but not required.
There are three types of tests:
Unit tests verify one piece of code in isolation. They are easy to write, but can miss the big picture.
End-to-end tests (E2E) use an automation framework — such as Cypress or Selenium — to interact with your site like a user: loading pages, filling out forms, clicking buttons, etc. They are generally slower to write and run, but closely match the real user experience.
Integration tests fall somewhere in between. They validate how multiple units of your application work together but are more lightweight than E2E tests. Jest, for example, comes with a few built-in utilities to facilitate integration testing; Jest uses jsdom under the hood to emulate common browser APIs with less overhead than automation, and its robust mocking tools can stub out external API calls.
Another wrinkle: In React apps, unit and integration tests are written the same way, with the same tools.
Getting started with React tests
I created a simple React app (available on GitHub) with a login form. I wired this up to reqres.in, a handy API I found for testing front-end projects.
You can log in successfully:
…or encounter an error message from the API:
The code is structured like this:
LoginModule/
├── components/
│   ├── Login.js // renders LoginForm, error messages, and login confirmation
│   └── LoginForm.js // renders login form fields and button
├── hooks/
│   └── useLogin.js // connects to API and manages state
└── index.js // stitches everything together
Option 1: Unit tests
If you’re like me, and like writing tests — perhaps with your headphones on and something good on Spotify — then you might be tempted to knock out a unit test for every file.
Even if you’re not a testing aficionado, you might be working on a project that’s “trying to be good with testing” without a clear strategy and a testing approach of “I guess each file should have its own test?”
That would look something like this (where I’ve added unit to test file names for clarity):
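LoginModule/
├── components/
│   ├── Login.js
│   ├── Login.unit.test.js
│   ├── LoginForm.js
│   └── LoginForm.unit.test.js
├── hooks/
│   ├── useLogin.js
│   └── useLogin.unit.test.js
├── index.js
└── index.unit.test.js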
I went through the exercise of adding each of these unit tests on GitHub, and created a test:coverage:unit script to generate a coverage report (a built-in feature of Jest). We can get to 100% coverage with the four unit test files.
100% coverage is usually overkill, but it’s achievable for such a simple codebase.
Let’s dig into one of the unit tests created for the onLogin React hook. Don’t worry if you’re not well-versed in React hooks or how to test them.
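A sketch of the kind of hook test being described, assuming the hook returns a state value and an onLogin function:

// useLogin.unit.test.js (sketch; the hook's return shape is assumed)
import { renderHook, act } from '@testing-library/react-hooks';
import { useLogin } from './useLogin';

test('resolves after a successful login', async () => {
  // mock a successful API response
  jest
    .spyOn(window, 'fetch')
    .mockResolvedValue({ json: () => ({ token: '123' }) });

  const { result, waitForNextUpdate } = renderHook(() => useLogin());

  act(() => {
    result.current.onLogin({ email: 'test@email.com', password: 'password' });
  });

  // internal state moves from 'pending'...
  expect(result.current.state).toBe('pending');

  await waitForNextUpdate();

  // ...to 'resolved'
  expect(result.current.state).toBe('resolved');
});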
This test was fun to write (because React Hooks Testing Library makes testing hooks a breeze), but it has a few problems.
First, the test validates that a piece of internal state changes from 'pending' to 'resolved'; this implementation detail is not exposed to the user, and therefore, probably not a good thing to be testing. If we refactor the app, we’ll have to update this test, even if nothing changes from the user’s perspective.
Additionally, as a unit test, this is just part of the picture. If we want to validate other features of the login flow, such as the submit button text changing to “Loading,” we’ll have to do so in a different test file.
Option 2: Integration tests
Let’s consider the alternative approach of adding one integration test to validate this flow:
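In terms of files, that looks something like this:

LoginModule/
├── components/
│   ├── Login.js
│   └── LoginForm.js
├── hooks/
│   └── useLogin.js
├── index.js
└── index.integration.test.js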
I implemented this test and a test:coverage:integration script to generate a coverage report. Just like the unit tests, we can get to 100% coverage, but this time it’s all in one file and requires fewer lines of code.
Here’s the integration test covering a successful login flow:
// Imports assumed for completeness; the exact module path may differ in the repo
import React from 'react';
import { render, fireEvent, waitFor } from '@testing-library/react';
import LoginModule from '.';

test('successful login', async () => {
// mock a successful API response
jest
.spyOn(window, 'fetch')
.mockResolvedValue({ json: () => ({ token: '123' }) });
const { getByLabelText, getByText, getByRole } = render(<LoginModule />);
const emailField = getByLabelText('Email');
const passwordField = getByLabelText('Password');
const button = getByRole('button');
// fill out and submit form
fireEvent.change(emailField, { target: { value: 'test@email.com' } });
fireEvent.change(passwordField, { target: { value: 'password' } });
fireEvent.click(button);
// it sets loading state
expect(button.disabled).toBe(true);
expect(button.textContent).toBe('Loading...');
await waitFor(() => {
// it hides form elements
expect(button).not.toBeInTheDocument();
expect(emailField).not.toBeInTheDocument();
expect(passwordField).not.toBeInTheDocument();
// it displays success text and email address
const loggedInText = getByText('Logged in as');
expect(loggedInText).toBeInTheDocument();
const emailAddressText = getByText('test@email.com');
expect(emailAddressText).toBeInTheDocument();
});
});
I really like this test, because it validates the entire login flow from the user’s perspective: the form, the loading state, and the success confirmation message. Integration tests work really well for React apps for precisely this use case; the user experience is the thing we want to test, and that almost always involves several different pieces of code working together.
This test has no specific knowledge of the components or hook that makes the expected behavior work, and that’s good. We should be able to rewrite and restructure such implementation details without breaking the tests, so long as the user experience remains the same.
I’m not going to dig into the other integration tests for the login flow’s initial state and error handling, but I encourage you to check them out on GitHub.
So, what does need a unit test?
Rather than thinking about unit vs. integration tests, let’s back up and think about how we decide what needs to be tested in the first place. LoginModule needs to be tested because it’s an entity we want consumers (other files in the app) to be able to use with confidence.
The onLogin hook, on the other hand, does not need to be tested because it’s only an implementation detail of LoginModule. If our needs change, however, and onLogin has use cases elsewhere, then we would want to add our own (unit) tests to validate its functionality as a reusable utility. (We’d also want to move the file because it wouldn’t be specific to LoginModule anymore.)
There are still plenty of use cases for unit tests, such as the need to validate reusable selectors, hooks, and plain functions. When developing your code, you might also find it helpful to practice test-driven development with a unit test, even if you later move that logic higher up to an integration test.
Additionally, unit tests do a great job of exhaustively testing against multiple inputs and use cases. For example, if my form needed to show inline validations for various scenarios (e.g. invalid email, missing password, short password), I would cover one representative case in an integration test, then dig into the specific cases in a unit test.
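For example, a hypothetical validation helper (not part of the demo app) could be unit tested exhaustively with a table of inputs:

// A hypothetical validation helper, covered by a table-driven unit test
const validatePassword = (password) =>
  password.length >= 8 ? null : 'Password must be at least 8 characters';

test.each([
  ['short', 'Password must be at least 8 characters'],
  ['', 'Password must be at least 8 characters'],
  ['long enough password', null],
])('validatePassword("%s")', (input, expected) => {
  expect(validatePassword(input)).toBe(expected);
});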
Other goodies
While we’re here, I want to touch on a few syntactic tricks that helped my integration tests stay clear and organized.
Big waitFor Blocks
Our test needs to account for the delay between the loading and success states of LoginModule:
const button = getByRole('button');
fireEvent.click(button);
expect(button).not.toBeInTheDocument(); // too soon, the button is still there!
We can do this with DOM Testing Library’s waitFor helper:
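await waitFor(() => {
  expect(button).not.toBeInTheDocument();
});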
But, what if we want to test some other items too? There aren’t a lot of good examples of how to handle this online, and in past projects, I’ve dropped additional items outside of the waitFor:
// wait for the button
await waitFor(() => {
expect(button).not.toBeInTheDocument();
});
// then test the confirmation message
const confirmationText = getByText('Logged in as test@email.com');
expect(confirmationText).toBeInTheDocument();
This works, but I don’t like it because it makes the button condition look special, even though we could just as easily switch the order of these statements:
// wait for the confirmation message
await waitFor(() => {
const confirmationText = getByText('Logged in as test@email.com');
expect(confirmationText).toBeInTheDocument();
});
// then test the button
expect(button).not.toBeInTheDocument();
It’s much better, in my opinion, to group everything related to the same update together inside the waitFor callback:
await waitFor(() => {
expect(button).not.toBeInTheDocument();
const confirmationText = getByText('Logged in as test@email.com');
expect(confirmationText).toBeInTheDocument();
});
Interestingly, an empty waitFor will also get the job done, because waitFor has a default timeout of 50ms. I find this slightly less declarative than putting your expectations inside of the waitFor, but some indentation-averse developers may prefer it:
await waitFor(() => {}); // or maybe a custom util, `await waitForRerender()`
expect(button).not.toBeInTheDocument(); // I pass!
For tests with a few steps, we can have multiple waitFor blocks in a row:
const button = getByRole('button');
const emailField = getByLabelText('Email');
// fill out form
fireEvent.change(emailField, { target: { value: 'test@email.com' } });
await waitFor(() => {
// check button is enabled
expect(button.disabled).toBe(false);
});
// submit form
fireEvent.click(button);
await waitFor(() => {
// check button is no longer present
expect(button).not.toBeInTheDocument();
});
Inline it comments
Another testing best practice is to write fewer, longer tests; this allows you to correlate your test cases to significant user flows while keeping tests isolated to avoid unexpected behavior. I subscribe to this approach, but it can present challenges in keeping code organized and documenting desired behavior. We need future developers to be able to return to a test and understand what it’s doing, why it’s failing, etc.
For example, let’s say one of these expectations starts to fail:
it('handles a successful login flow', async () => {
// beginning of test hidden for clarity
expect(button.disabled).toBe(true);
expect(button.textContent).toBe('Loading...');
await waitFor(() => {
expect(button).not.toBeInTheDocument();
expect(emailField).not.toBeInTheDocument();
expect(passwordField).not.toBeInTheDocument();
const confirmationText = getByText('Logged in as test@email.com');
expect(confirmationText).toBeInTheDocument();
});
});
A developer looking into this can’t easily determine what is being tested and might have trouble deciding whether the failure is a bug (meaning we should fix the code) or a change in behavior (meaning we should fix the test).
My favorite solution to this problem is using the lesser-known test syntax for each test, and adding inline it-style comments describing each key behavior being tested:
test('successful login', async () => {
// beginning of test hidden for clarity
// it sets loading state
expect(button.disabled).toBe(true);
expect(button.textContent).toBe('Loading...');
await waitFor(() => {
// it hides form elements
expect(button).not.toBeInTheDocument();
expect(emailField).not.toBeInTheDocument();
expect(passwordField).not.toBeInTheDocument();
// it displays success text and email address
const confirmationText = getByText('Logged in as test@email.com');
expect(confirmationText).toBeInTheDocument();
});
});
These comments don’t magically integrate with Jest, so if you get a failure, the failing test name will correspond to the argument you passed to your test tag, in this case 'successful login'. However, Jest’s error messages contain surrounding code, so these it comments still help identify the failing behavior. Here’s the error message I got when I removed the not from one of my expectations:
For even more explicit errors, there’s a package called jest-expect-message that allows you to define error messages for each expectation:
expect(button, 'button is still in document').not.toBeInTheDocument();
Some developers prefer this approach, but I find it a little too granular in most situations, since a single it often involves multiple expectations.
Next steps for teams
Sometimes I wish we could make linter rules for humans. If so, we could set up a prefer-integration-tests rule for our teams and call it a day.
But alas, we need to find a more analog solution to encourage developers to opt for integration tests in a situation like the LoginModule example we covered earlier. Like most things, this comes down to discussing your testing strategy as a team, agreeing on something that makes sense for the project, and — hopefully — documenting it in an ADR.
When coming up with a testing plan, we should avoid a culture that pressures developers to write a test for every file. Developers need to feel empowered to make smart testing decisions, without worrying that they’re “not testing enough.” Jest’s coverage reports can help with this by providing a sanity check that you’re achieving good coverage, even if the tests are consolidated at the integration level.
I still don’t consider myself an expert on integration tests, but going through this exercise helped me break down a use case where integration testing delivered greater value than unit testing. I hope that sharing this with your team, or going through a similar exercise on your codebase, will help guide you in incorporating integration tests into your workflow.
The concept of an “incremental build” is that, when using some kind of generator that builds all the files that make up a website, rather than rebuilding 100% of those files every single time, it only changes the files that need to be changed since the last build. Seems like an obviously good idea, but in practice I’m sure it’s extremely tricky. How do you know exactly which files will change and which won’t before building?
I don’t have the answer to that, but Gatsby has it figured out. Faster local builds is half the joy, the other half is that deployment also becomes faster, as the files that need to move around are far fewer.
I’d say incremental builds are a pretty damn big deal. I like seeing these hurdles get cleared in Jamstack-land. I’m linking to the Netlify blog post here as getting it going on Netlify requires you to enable their “build plugins” feature, which is also a real ahead-of-the-game feature, allowing you to run code during different parts of CI/CD with a really clean syntax.