Why npm Scripts?
The following is a guest post by Damon Bauer. There has been a growing sentiment (for instance) that using node packages directly, with the command line interfaces they provide, is a good route to take, as opposed to abstracting the functionality away behind a task runner. Part of the argument is: you use npm anyway, and npm provides scripting functionality, so why not just use that? But there is more to it than that. Damon will walk us through the thinking, but also exactly how to accomplish many of the most important tasks in a front end development build process.
I’ve been using npm scripts in my projects for about the last six months. Before that, I used Gulp, and before that, Grunt. They served me well and helped me work faster and more efficiently by automating many of the things I used to do by hand. Recently, though, I started to feel that I was fighting the tools rather than focusing on my own code.
Grunt, Gulp, Broccoli, Brunch and the like all require you to fit your tasks into their paradigms and configurations. Each has its own syntax, quirks and gotchas that you need to learn. This adds code complexity and build complexity, and it makes you focus on fixing tooling rather than writing code.
These build tools rely on plugins that wrap a core command line tool. This creates another layer of abstraction away from the core tool, which means more potential for bad things to happen.
Here are three problems I’ve seen multiple times:
- If a plugin doesn’t exist for the command line tool you want to use, you’re out of luck (unless you write it).
- A plugin you’re trying to use wraps an older version of the tool you want to use. Features and documentation don’t always match between the plugin you’re using and the current version of the core tool.
- Errors aren’t always handled well. If a plugin fails, it might not pass along the error from the core tool, resulting in frustration and not really knowing how to debug the problem.
But, bear in mind…
Let me say this: if you are happy with your current build system and it accomplishes all that you need it to do, keep using it! Just because npm scripts are becoming more popular doesn’t mean you should jump ship. Keep focusing on writing your code instead of learning more tooling. If you start to get the feeling that you’re fighting with your tools, that’s when I’d suggest considering npm scripts.
If you’ve decided you want to investigate or start using npm scripts, keep reading! You’ll find plenty of example tasks in the rest of this post. Also, I’ve created npm-build-boilerplate with all of these tasks that you can use as a starting point. Let’s get to it!
Writing npm scripts
We’ll be spending a majority of our time in a `package.json` file. This is where all of our dependencies and scripts will live. Here’s a stripped down version from my boilerplate project:
{
"name": "npm-build-boilerplate",
"version": "1.0.0",
"scripts": {
...
},
"devDependencies": {
...
}
}
We’ll build up our `package.json` file as we go along. Our scripts will go into the `scripts` object, and any tools we want to use will be installed and put into the `devDependencies` object.
Before we begin, here’s a sample structure of the project I’ll be referring to throughout this post:
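(The structure below is reconstructed from the paths used in the tasks that follow; your exact layout may differ.)

npm-build-boilerplate/
├── src/
│   ├── images/
│   │   └── icons/
│   ├── js/
│   └── scss/
├── dist/
└── package.json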
Compile SCSS to CSS
I’m a heavy user of SCSS, so that’s what I’ll be working with. To compile SCSS to CSS, I turn to node-sass. First, we need to install `node-sass`; do this by running the following in your command line:
npm install --save-dev node-sass
This will install `node-sass` in your current directory and add it to the `devDependencies` object in your `package.json`. This is especially useful when someone else runs your project, because they will have everything they need to get the project running. Once installed, we can use it on the command line:
node-sass --output-style compressed -o dist/css src/scss
Let’s break down what this command does. Starting at the end, it says: look in the `src/scss` folder for any SCSS files; output (`-o` flag) the compiled CSS to `dist/css`; compress the output (using the `--output-style` flag with “compressed” as the option).
Now that we’ve got that working on the command line, let’s move it to an npm script. In your `package.json` `scripts` object, add it like so:
"scripts": {
"scss": "node-sass --output-style compressed -o dist/css src/scss"
}
Now, head back to the command line and run:
npm run scss
You will see the same output as running the `node-sass` command directly in the command line. Any time we create an npm script in the remainder of this post, you can run it by using a command like the one above; just replace `scss` with the name of the task you want to run.
As you will see, many of the command line tools we’ll use have numerous options you can use to configure them exactly as you see fit. For instance, here’s the list of node-sass options. Here’s a different setup showing how to pass multiple options:
"scripts": {
"scss": "node-sass --output-style nested --indent-type tab --indent-width 4 -o dist/css src/scss"
}
Autoprefix CSS with PostCSS
Now that we’re compiling SCSS to CSS, we can automatically add vendor prefixes using Autoprefixer & PostCSS. We can install multiple modules at the same time by separating them with spaces:
npm install --save-dev postcss-cli autoprefixer
We’re installing two modules because PostCSS doesn’t do anything by default. It relies on other plugins like Autoprefixer to manipulate the CSS you give it.
With the necessary tools installed and saved to `devDependencies`, add a new task in your `scripts` object:
"scripts": {
...
"autoprefixer": "postcss -u autoprefixer -r dist/css/*"
}
This task says: Hey `postcss`, use (`-u` flag) `autoprefixer` to replace (`-r` flag) any `.css` files in `dist/css` with vendor-prefixed code. That’s it! Need to change the default browser support for Autoprefixer? It’s easy to add to the script:
"autoprefixer": "postcss -u autoprefixer --autoprefixer.browsers '> 5%, ie 9' -r dist/css/*"
Again, there are lots of options you can use to configure your own build: see postcss-cli and autoprefixer.
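As a side note, and depending on which versions of Autoprefixer and browserslist you have installed, you may be able to move the browser list out of the script entirely and into a `browserslist` field in `package.json`. A sketch of what that could look like:

{
  ...
  "browserslist": ["> 5%", "ie 9"]
}

That way, any tool that understands the browserslist convention shares a single browser configuration.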
Linting JavaScript
Keeping a standard format and style when authoring code is important to keep errors to a minimum and increase developer efficiency. “Linting” helps us do that automatically, so let’s add JavaScript linting by using eslint.
Once again, install the package; this time, let’s use a shortcut:
npm i -D eslint
This is the same as:
npm install --save-dev eslint
Once installed, we’ll set up some basic rules to run our code against using `eslint`. Run the following to start a wizard:
eslint --init
I’d suggest choosing “Answer questions about your style” and answering the questions it asks. This will generate a new file in the root of your project that `eslint` will check your code against.
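For reference, the generated file might look something like this (a rough sketch; the file name, format and rules depend entirely on your answers to the wizard):

{
  "env": {
    "browser": true
  },
  "rules": {
    "indent": ["error", 2],
    "quotes": ["error", "single"],
    "semi": ["error", "always"]
  }
}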
Now, let’s add a lint task to our `package.json` `scripts` object:
"scripts": {
...
"lint": "eslint src/js"
}
Our lint task is 13 characters long! It looks for any JavaScript files in the `src/js` folder and runs them against the configuration it generated earlier. Of course, you can get crazy with the options.
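For example, eslint ships with flags like `--cache` (only re-lint files that have changed) and `--fix` (automatically apply safe fixes). A variation on the task above, shown purely as a sketch and not part of the boilerplate, could look like:

"scripts": {
  ...
  "lint": "eslint --cache --fix src/js"
}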
Uglifying JavaScript files
Let’s work on combining and minifying our JavaScript files, which we can use uglify-js to do. We’ll need to install `uglify-js` first:
npm i -D uglify-js
Then, we can set up our uglify task in `package.json`:
"scripts": {
...
"uglify": "mkdir -p dist/js && uglifyjs src/js/*.js -m -o dist/js/app.js"
}
One of the great things about npm scripts is that they are essentially an alias for a command line task that you want to run over and over. This means that you can use standard command line code right in your script! This task uses two standard command line features, `mkdir` and `&&`.
The first half of this task, `mkdir -p dist/js`, says: create a folder structure (`mkdir`), but only if it doesn’t exist already (`-p` flag). Once that completes successfully, run the `uglifyjs` command. The `&&` lets you chain multiple commands together, running each one sequentially if the previous command completes successfully.
The second half of this task tells `uglifyjs` to start with all of the JS files (`*.js`) in `src/js/`, apply the “mangle” command (`-m` flag), and output the result to `dist/js/app.js`. Once again, check the documentation for the tool in question for a full list of options.
Let’s update our `uglify` task to also create a compressed version of `dist/js/app.js`. Chain another `uglifyjs` command, this time passing the “compress” (`-c`) flag:
"scripts": {
...
"uglify": "mkdir -p dist/js && uglifyjs src/js/*.js -m -o dist/js/app.js && uglifyjs src/js/*.js -m -c -o dist/js/app.min.js"
}
Compressing Images
Let’s now turn our attention to compressing images. According to httparchive.org, the average page weight of the top 1000 URLs on the internet is 1.9 MB, with images accounting for 1.1 MB of that total. One of the best things you can do to improve page speed is reduce the size of your images.
Install imagemin-cli:
npm i -D imagemin-cli
Imagemin is great because it will compress most types of images, including GIF, JPG, PNG and SVG. You can pass it a folder of images and it will crunch all of them, like so:
"scripts": {
...
"imagemin": "imagemin src/images dist/images -p"
}
This task tells `imagemin` to find and compress all images in `src/images` and put them in `dist/images`. The `-p` flag is passed to create “progressive” images when possible. Check the documentation for all available options.
SVG Sprites
The buzz surrounding SVG has increased in the last few years, and for good reason. SVGs are crisp on all devices, editable with CSS, and screen reader friendly. However, SVG editing software usually leaves behind extraneous and unnecessary code. Luckily, svgo can help by removing all of that (we’ll install it below).
You can also automate the process of combining and spriting your SVGs into a single SVG file (more on that technique here). To automate this, we can install svg-sprite-generator.
npm i -D svgo svg-sprite-generator
The pattern is probably familiar to you now: once installed, add a task in your `package.json` `scripts` object:
"scripts": {
...
"icons": "svgo -f src/images/icons && mkdir -p dist/images && svg-sprite-generate -d src/images/icons -o dist/images/icons.svg"
}
Notice the `icons` task does three things, based on the presence of two `&&` directives. First, we use `svgo`, passing it a folder (`-f` flag) of SVGs; this will compress all SVGs inside the folder. Second, we’ll make the `dist/images` folder if it doesn’t already exist (using the `mkdir -p` command). Finally, we use `svg-sprite-generator`, passing it a folder of SVGs (`-d` flag) and a path where we want the SVG sprite to output (`-o` flag).
Serve and Automatically Inject Changes with BrowserSync
One of the last pieces of the puzzle is BrowserSync. A few of the things it can do: start a local server, automatically inject updated files into any connected browser, and sync clicks & scrolls between browsers. Install it and add a task:
npm i -D browser-sync
"scripts": {
...
"serve": "browser-sync start --server --files 'dist/css/*.css, dist/js/*.js'"
}
Our BrowserSync task starts a server (`--server` flag) using the current path as the root by default. The `--files` flag tells BrowserSync to watch any CSS or JS file in the `dist` folder; whenever something in there changes, automatically inject the changed file(s) into the page.
You can open multiple browsers (even on different devices) and they will all get updated file changes in real time!
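If your project already runs on a local server of its own (a CMS or an API backend, for example), BrowserSync can proxy that server instead of serving static files itself. Here’s a sketch using its `--proxy` flag; the `localhost:8080` address is just a placeholder for wherever your existing server runs:

"serve": "browser-sync start --proxy 'localhost:8080' --files 'dist/css/*.css, dist/js/*.js'"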
Grouping tasks
With all of the tasks from above, we’re able to:
- Compile SCSS to CSS and automatically add vendor prefixes
- Lint and uglify JavaScript
- Compress images
- Convert a folder of SVGs to a single SVG sprite
- Start a local server and automatically inject changes into any browser connected to the server
Let’s not stop there!
Combining CSS tasks
Let’s add a task that combines the two CSS related tasks (preprocessing Sass and running Autoprefixer), so we don’t have to run each one separately:
"scripts": {
...
"build:css": "npm run scss && npm run autoprefixer"
}
When you run `npm run build:css`, it will tell the command line to run `npm run scss`; when it completes successfully, it will then (`&&`) run `npm run autoprefixer`.
Combining JavaScript tasks
Just like with our `build:css` task, we can chain our JavaScript tasks together to make them easier to run:
"scripts": {
...
"build:js": "npm run lint && npm run uglify"
}
Now, we can call `npm run build:js` to lint, concatenate and uglify our JavaScript in one step!
Combine remaining tasks
We can do the same thing for our image tasks, and also add a task that combines all of our build tasks into one:
"scripts": {
...
"build:images": "npm run imagemin && npm run icons",
"build:all": "npm run build:css && npm run build:js && npm run build:images"
}
Watching for changes
Up until this point, our tasks have required us to make changes to a file, switch back to the command line, and run the corresponding task(s). One of the most useful things we can do is add tasks that watch our files and run the right tasks automatically when those files change. To do this, I recommend using onchange. Install it as usual:
npm i -D onchange
Let’s set up watch tasks for CSS and JavaScript:
"scripts": {
...
"watch:css": "onchange 'src/scss/*.scss' -- npm run build:css",
"watch:js": "onchange 'src/js/*.js' -- npm run build:js"
}
Here’s the breakdown of these tasks: `onchange` expects you to pass a path, as a string, to the files you want to watch. We’ll pass it our source SCSS and JS files. The command we want to run comes after the `--`, and it will run any time a file in the given path is added, changed or deleted.
Let’s add one more watch command to finish off our npm scripts build process.
Install one more package, parallelshell:
npm i -D parallelshell
Once again, add a new task to the `scripts` object:
"scripts": {
...
"watch:all": "parallelshell 'npm run serve' 'npm run watch:css' 'npm run watch:js'"
}
`parallelshell` takes multiple strings, each containing an `npm run` task that we want to run.
Why use `parallelshell` to combine multiple tasks instead of using `&&` like in the previous tasks? At first, I tried that. The problem is that `&&` chains commands together and waits for each one to finish successfully before starting the next. However, since we are running `watch` commands, they never finish! We’d be stuck in an endless loop. Using `parallelshell`, therefore, lets us run multiple `watch` commands simultaneously.
This task fires up a server with BrowserSync using the `npm run serve` task. Then it starts our watch commands for both CSS and JavaScript files. Any time a CSS or JavaScript file changes, the corresponding watch task performs its build task; since BrowserSync is set up to watch for changes in the `dist` folder, it automatically injects the new files into any browser connected to its URL. Sweet!
Other useful tasks
npm comes with lots of baked-in tasks that you can hook into. Let’s write one more task leveraging one of these built-in scripts.
"scripts": {
...
"postinstall": "npm run watch:all"
}
`postinstall` runs immediately after you run `npm install` in your command line. This is a nice-to-have, especially when working on teams; when someone clones your project and runs `npm install`, our `watch:all` task starts immediately. They’ll automatically have a server started, a browser window opened and files being watched for changes.
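`postinstall` isn’t the only hook, either: for any script you define, npm will automatically run a matching `pre`-prefixed script before it and a `post`-prefixed script after it. As an illustrative sketch (not part of the boilerplate), a hypothetical `prebuild:all` script could clear out the `dist` folder before every full build:

"scripts": {
  ...
  "prebuild:all": "rm -rf dist",
  "build:all": "npm run build:css && npm run build:js && npm run build:images"
}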
Wrap Up
Whew! We made it! I hope you’ve been able to learn a few things about using npm scripts as a build process and the command line in general.
Just in case you missed it, I’ve created an npm-build-boilerplate project with all of these tasks that you can use as a starting point. If you have questions or comments, please tweet at me or leave a comment below. I’d be glad to help where I can!