Avoid Heavy Babel Transformations by (Sometimes) Not Writing Modern JavaScript
It’s hard to imagine writing production-ready JavaScript without a tool like Babel. It’s been an undisputed game-changer in making modern code accessible to a wide range of users. With this challenge largely out of the way, there’s not much holding us back from really leaning into the features that modern specifications have to offer.
But at the same time, we don’t want to lean in too hard. If you take an occasional peek into the code your users are actually downloading, you’ll notice that sometimes, seemingly straightforward Babel transformations can be especially bloated and complex. And in a lot of those cases, you can perform the same task using a simple, “old school” approach — without the heavy baggage that can come from preprocessing.
Let’s take a closer look at what I’m talking about using Babel’s online REPL — a great tool for quickly testing transformations. Targeting browsers that don’t support ES2015+, we’ll use it to highlight just a few of the times when you (and your users) might be better off choosing an “old school” way to do something in JavaScript, despite a “new” approach popularized by modern specifications.
As we go along, keep in mind that this is less about “old vs. new” and more about choosing the best implementation that gets the job done while bypassing any expected side effects of our build processes.
Let’s build!
Preprocessing a for..of loop
The for..of loop is a flexible, modern means of looping over iterable collections. It’s often used in a way very similar to a traditional for loop, which may lead you to think that Babel’s transformation would be simple and predictable, especially if you’re just using it with an array. Not quite. The code we write may only be 98 bytes:
function getList() {
return [1, 2, 3];
}
for (let value of getList()) {
console.log(value);
}
But the output results in 1.8kb (a 1736% increase!):
"use strict";
function _createForOfIteratorHelper(o) { if (typeof Symbol === "undefined" || o[Symbol.iterator] == null) { if (Array.isArray(o) || (o = _unsupportedIterableToArray(o))) { var i = 0; var F = function F() {}; return { s: F, n: function n() { if (i >= o.length) return { done: true }; return { done: false, value: o[i++] }; }, e: function e(_e) { throw _e; }, f: F }; } throw new TypeError("Invalid attempt to iterate non-iterable instance.\nIn order to be iterable, non-array objects must have a [Symbol.iterator]() method."); } var it, normalCompletion = true, didErr = false, err; return { s: function s() { it = o[Symbol.iterator](); }, n: function n() { var step = it.next(); normalCompletion = step.done; return step; }, e: function e(_e2) { didErr = true; err = _e2; }, f: function f() { try { if (!normalCompletion && it.return != null) it.return(); } finally { if (didErr) throw err; } } }; }
function _unsupportedIterableToArray(o, minLen) { if (!o) return; if (typeof o === "string") return _arrayLikeToArray(o, minLen); var n = Object.prototype.toString.call(o).slice(8, -1); if (n === "Object" && o.constructor) n = o.constructor.name; if (n === "Map" || n === "Set") return Array.from(o); if (n === "Arguments" || /^(?:Ui|I)nt(?:8|16|32)(?:Clamped)?Array$/.test(n)) return _arrayLikeToArray(o, minLen); }
function _arrayLikeToArray(arr, len) { if (len == null || len > arr.length) len = arr.length; for (var i = 0, arr2 = new Array(len); i < len; i++) { arr2[i] = arr[i]; } return arr2; }
function getList() {
return [1, 2, 3];
}
var _iterator = _createForOfIteratorHelper(getList()),
_step;
try {
for (_iterator.s(); !(_step = _iterator.n()).done;) {
var value = _step.value;
console.log(value);
}
} catch (err) {
_iterator.e(err);
} finally {
_iterator.f();
}
Why didn’t it just use a for loop for this? It’s an array! Apparently, in this case, Babel doesn’t know it’s handling an array. All it knows is that it’s working with a function that could return any iterable (array, string, NodeList), and it needs to be ready for whatever that value could be, based on the ECMAScript specification for the for..of loop.
We could drastically slim the transformation by explicitly passing an array to it, but that’s not always easy in a real application. So, to leverage the benefits of loops (like break and continue statements), while confidently keeping bundle size slim, we might just reach for the for loop. Sure, it’s old school, but it gets the job done.
function getList() {
return [1, 2, 3];
}
for (var i = 0; i < getList().length; i++) {
console.log(getList()[i]);
}
Dave Rupert blogged about this exact situation a few years ago and found that forEach, even polyfilled, was a good solution for him.
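For comparison, here’s a minimal sketch of that forEach approach (assuming native support or a polyfill is in place):
getList().forEach(function (value) {
  console.log(value);
});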
Preprocessing Array […Spread]
Similar deal here. The spread operator can be used with more than one class of objects (not just arrays), so when Babel isn’t aware of the type of data it’s dealing with, it needs to take precautions. Unfortunately, those precautions can result in some serious byte bloat.
Here’s the input, weighing in at a slim 81 bytes:
function getList () {
return [4, 5, 6];
}
console.log([1, 2, 3, ...getList()]);
The output balloons to 1.3kb:
"use strict";
function _toConsumableArray(arr) { return _arrayWithoutHoles(arr) || _iterableToArray(arr) || _unsupportedIterableToArray(arr) || _nonIterableSpread(); }
function _nonIterableSpread() { throw new TypeError("Invalid attempt to spread non-iterable instance.\nIn order to be iterable, non-array objects must have a [Symbol.iterator]() method."); }
function _unsupportedIterableToArray(o, minLen) { if (!o) return; if (typeof o === "string") return _arrayLikeToArray(o, minLen); var n = Object.prototype.toString.call(o).slice(8, -1); if (n === "Object" && o.constructor) n = o.constructor.name; if (n === "Map" || n === "Set") return Array.from(o); if (n === "Arguments" || /^(?:Ui|I)nt(?:8|16|32)(?:Clamped)?Array$/.test(n)) return _arrayLikeToArray(o, minLen); }
function _iterableToArray(iter) { if (typeof Symbol !== "undefined" && Symbol.iterator in Object(iter)) return Array.from(iter); }
function _arrayWithoutHoles(arr) { if (Array.isArray(arr)) return _arrayLikeToArray(arr); }
function _arrayLikeToArray(arr, len) { if (len == null || len > arr.length) len = arr.length; for (var i = 0, arr2 = new Array(len); i < len; i++) { arr2[i] = arr[i]; } return arr2; }
function getList() {
return [4, 5, 6];
}
console.log([1, 2, 3].concat(_toConsumableArray(getList())));
Instead, we could cut to the chase and just use concat(). The difference in the amount of code we need to write isn’t significant, it does exactly what it’s intended to do, and there’s no need to worry about that extra bloat.
function getList () {
return [4, 5, 6];
}
console.log([1, 2, 3].concat(getList()));
A more common example: Looping over a NodeList
You might have seen this more than a few times. We often need to query for several DOM elements and loop over the resulting NodeList. In order to use forEach on that collection, it’s common to spread it into an array.
[...document.querySelectorAll('.my-class')].forEach(function (node) {
// do something
});
But like we saw, this makes for some heavy output. As an alternative, there’s nothing wrong with running that NodeList through a method on the Array prototype, like slice. Same result, but far less baggage:
[].slice.call(document.querySelectorAll('.my-class')).forEach(function(node) {
// do something
});
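Another equally old-school option, if all you need is the iteration itself, is to borrow forEach directly from the Array prototype instead of converting the collection first. A minimal sketch:
Array.prototype.forEach.call(document.querySelectorAll('.my-class'), function (node) {
  // do something
});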
A note about “loose” mode
It’s worth calling out that some of this array-related bloat can also be avoided by leveraging @babel/preset-env’s loose mode, which compromises on staying totally true to the semantics of modern ECMAScript in exchange for slimmer output. In many situations, that might work just fine, but you’re also necessarily introducing risk into your application that you may come to regret later on. After all, you’re telling Babel to make some rather bold assumptions about how you’re using your code.
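If you do decide that trade-off is acceptable, enabling loose mode is just a matter of passing the option to the preset. Here’s a minimal babel.config.js sketch (the file name and surrounding setup are assumed, not prescribed):
module.exports = {
  presets: [
    // "loose" relaxes strict spec compliance in exchange for slimmer output
    ["@babel/preset-env", { loose: true }]
  ]
};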
The main takeaway here is that sometimes, it might be more suitable to be more intentional about the features you use, rather than investing more time into tweaking your build process and potentially wrestling with unseen consequences later.
Preprocessing default parameters
This is a more predictable operation, but when it’s repeatedly used throughout a codebase, the bytes can add up. ES2015 introduced default parameter values, which tidy up a function’s signature when it accepts optional arguments. Here we are at 75 bytes:
function getName(name = "my friend") {
return `Hello, ${name}!`;
}
But Babel can be a little more verbose than expected with its transformation, resulting in 169 bytes:
"use strict";
function getName() {
var name = arguments.length > 0 && arguments[0] !== undefined ? arguments[0] : "my friend";
return "Hello, ".concat(name, "!");
}
As an alternative, we could avoid using the arguments object altogether and simply check whether a parameter is undefined. We lose the self-documenting nature that default parameters provide, but if we’re really pinching bytes, it might be worth it. And depending on the use case, we might even be able to get away with checking for any falsy value to slim things down even more.
function getName(name) {
name = name || "my friend";
return `Hello, ${name}!`;
}
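And if we want to keep the stricter semantics of a true default value, where only an omitted (undefined) argument triggers the fallback rather than any falsy value, an explicit check is still lightweight. A sketch of that variant:
function getName(name) {
  // Fall back only when the argument was actually omitted
  if (name === undefined) {
    name = "my friend";
  }
  return `Hello, ${name}!`;
}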
Preprocessing async/await
The syntactic sugar of async/await over the Promise API is one of my favorite additions to JavaScript. Even so, out of the box, Babel can make quite the mess out of it.
157 bytes to write:
async function fetchSomething(url) {
const response = await fetch(url);
return await response.json();
}
fetchSomething("https://google.com");
1.5kb when compiled:
"use strict";
function asyncGeneratorStep(gen, resolve, reject, _next, _throw, key, arg) { try { var info = gen[key](arg); var value = info.value; } catch (error) { reject(error); return; } if (info.done) { resolve(value); } else { Promise.resolve(value).then(_next, _throw); } }
function _asyncToGenerator(fn) { return function () { var self = this, args = arguments; return new Promise(function (resolve, reject) { var gen = fn.apply(self, args); function _next(value) { asyncGeneratorStep(gen, resolve, reject, _next, _throw, "next", value); } function _throw(err) { asyncGeneratorStep(gen, resolve, reject, _next, _throw, "throw", err); } _next(undefined); }); }; }
function fetchSomething(_x) {
return _fetchSomething.apply(this, arguments);
}
function _fetchSomething() {
_fetchSomething = _asyncToGenerator( /*#__PURE__*/regeneratorRuntime.mark(function _callee(url) {
var response;
return regeneratorRuntime.wrap(function _callee$(_context) {
while (1) {
switch (_context.prev = _context.next) {
case 0:
_context.next = 2;
return fetch(url);
case 2:
response = _context.sent;
_context.next = 5;
return response.json();
case 5:
return _context.abrupt("return", _context.sent);
case 6:
case "end":
return _context.stop();
}
}
}, _callee);
}));
return _fetchSomething.apply(this, arguments);
}
fetchSomething("https://google.com");
You’ll notice that Babel doesn’t convert async code into promises out of the box. Instead, async functions are transformed into generators that rely on the regenerator-runtime library, making for a lot more code than what’s written in our IDE. Thankfully, it’s possible to go the Promise route by means of a plugin, like babel-plugin-transform-async-to-promises. Instead of that 1.5kb output, we end up with much less, at 638 bytes:
"use strict";
function _await(value, then, direct) {
if (direct) {
return then ? then(value) : value;
}
if (!value || !value.then) {
value = Promise.resolve(value);
}
return then ? value.then(then) : value;
}
var fetchSomething = _async(function (url) {
return _await(fetch(url), function (response) {
return _await(response.json());
});
});
function _async(f) {
return function () {
for (var args = [], i = 0; i < arguments.length; i++) {
args[i] = arguments[i];
}
try {
return Promise.resolve(f.apply(this, args));
} catch (e) {
return Promise.reject(e);
}
};
}
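For reference, wiring that plugin up is typically just a matter of listing it in your Babel config once it’s installed. A minimal babel.config.js sketch (file name and surrounding setup assumed):
module.exports = {
  presets: ["@babel/preset-env"],
  // Compile async/await down to Promise chains instead of regenerator code
  plugins: ["babel-plugin-transform-async-to-promises"]
};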
But, as mentioned before, there’s risk in relying on a plugin to ease pain like this. When doing so, we’re impacting transformations across the entire project and introducing yet another build dependency. Instead, we could consider just sticking with the Promise API.
function fetchSomething(url) {
return new Promise(function (resolve) {
fetch(url).then(function (response) {
return response.json();
}).then(function (data) {
return resolve(data);
});
});
}
Preprocessing classes
For more syntactic sugar, there’s the class syntax introduced with ES2015, which provides a streamlined way to leverage JavaScript’s prototypal inheritance. But if we’re using Babel to transpile for older browsers, there’s nothing sweet about the output.
The input is only 120 bytes:
class Robot {
constructor(name) {
this.name = name;
}
speak() {
console.log(`I'm ${this.name}!`);
}
}
But the output results in 989 bytes:
"use strict";
function _classCallCheck(instance, Constructor) { if (!(instance instanceof Constructor)) { throw new TypeError("Cannot call a class as a function"); } }
function _defineProperties(target, props) { for (var i = 0; i < props.length; i++) { var descriptor = props[i]; descriptor.enumerable = descriptor.enumerable || false; descriptor.configurable = true; if ("value" in descriptor) descriptor.writable = true; Object.defineProperty(target, descriptor.key, descriptor); } }
function _createClass(Constructor, protoProps, staticProps) { if (protoProps) _defineProperties(Constructor.prototype, protoProps); if (staticProps) _defineProperties(Constructor, staticProps); return Constructor; }
var Robot = /*#__PURE__*/function () {
function Robot(name) {
_classCallCheck(this, Robot);
this.name = name;
}
_createClass(Robot, [{
key: "speak",
value: function speak() {
console.log("I'm ".concat(this.name, "!"));
}
}]);
return Robot;
}();
Much of the time, unless you’re doing some fairly involved inheritance, it’s straightforward enough to use a pseudoclassical approach. It requires slightly less code to write, and the resulting interface is virtually identical to a class.
function Robot(name) {
this.name = name;
this.speak = function() {
console.log(`I'm ${this.name}!`);
}
}
const rob = new Robot("Bob");
rob.speak(); // "I'm Bob!"
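And if creating a new speak function for every instance feels wasteful, the same old-school pattern works just as well with methods on the prototype. A quick sketch:
function Robot(name) {
  this.name = name;
}

// Shared by every instance, much like a class method
Robot.prototype.speak = function () {
  console.log(`I'm ${this.name}!`);
};

const rob = new Robot("Bob");
rob.speak(); // "I'm Bob!"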
Strategic considerations
Keep in mind that, depending on your application’s audience, the strategies you use to keep bundles slim may take different shapes.
For example, your team might have already made a deliberate decision to drop support for Internet Explorer and other “legacy” browsers (which is becoming more and more common, given that the vast majority of browsers support ES2015+). If that’s the case, your time might best be spent in auditing the list of browsers your build system is targeting, or making sure you’re not shipping unnecessary polyfills.
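That audit often comes down to the targets you hand @babel/preset-env (or your browserslist configuration). A hedged babel.config.js sketch, with a query string you’d adapt to your own audience:
module.exports = {
  presets: [
    // Transpile only what the declared browsers actually need
    ["@babel/preset-env", { targets: "defaults, not IE 11" }]
  ]
};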
And even if you are still obligated to support older browsers (or maybe you love some of the modern APIs too much to give them up), there are other options to enable you to ship heavy, preprocessed bundles only to the users that need them, like a differential serving implementation.
The important thing isn’t so much about which strategy (or strategies) your team chooses to prioritize, but more about intentionally making those decisions in light of the code being spit out by your build system. And that all starts by cracking open that dist directory to take a peek.
Pop open that hood
I’m a big fan of the new features modern JavaScript continues to provide. They make for applications that are easier to write, maintain, scale, and especially read. But as long as writing JavaScript means preprocessing JavaScript, it’s important to make sure that we have a finger on the pulse of what these features mean for the users that we ultimately aim to serve.
And that means popping the hood of your build process once in a while. At best, you might be able to avoid especially hefty Babel transformations by using a simpler, “classic” alternative. And at worst, you’ll come to better understand (and appreciate) the work that Babel does all the more.