In 2018-19, experts predicted the directions of growth, changes in the scope of the industries, and most importantly, the emerging trends in consumer behaviour.
Brands created marketing strategies based on these well-calculated, data-driven predictions. Global investors poured billions of dollars into different industries, hoping that 2020 would be the year of great ROI.
Everything was going more or less according to plan until the WHO’s China office reported the first cases of an unknown virus on Tuesday, 31 December 2019.
Within five months of the first officially registered case of coronavirus (COVID-19), the global pandemic had turned the world upside down, and expected trends and predictions went out of the window.
The global lockdown resulted in job losses and negatively impacted the GDP of every country. Amid the global chaos and uncertainty, a few industries flourished and achieved their growth predictions. eCommerce and web hosting (PHP cloud hosting and managed or dedicated hosting) are among the fortunate industries that are doing well despite the prolonged lockdown and ongoing pandemic.
The growing eCommerce industry is one of the major reasons why the web hosting industry is doing well. Now, more than ever, brands are focused on a quality online presence and a flawless eCommerce experience. Delivering great UX and UI requires a powerful and extensible hosting infrastructure.
COVID-19 and eCommerce Industry
The COVID-19 pandemic is the defining event of 2020, and its implications are still being felt as businesses remain closed because of the lockdown. Taking business online is often the only option to sustain operations. In many cases, the transition to an online model is relatively easy, as modern eCommerce platforms effectively mimic the processes of offline businesses.
Changing Consumer Behavior
Along with the new wave of online trends, one of the most significant is the change in the behaviour of online buyers (what they buy, how, and when). The focus has moved to bulk-buying (because of uncertain conditions), and brands need to accommodate this change in their online stores.
eCommerce fueling the Web Hosting Industry
In 2018, the web hosting industry was predicted to grow at a 13% CAGR, with the growing eCommerce industry as one of the key drivers of that growth. The market is expected to add $72.79 billion by 2023. Under the current global conditions, the industry seems likely to reach the 2023 numbers much earlier, thanks to the influx of local brands and small businesses focusing solely on online business.
Businesses that were operating out of small product pages on Facebook or Instagram now aim to organize themselves better in order to compete with thousands of similar competitors. It goes without saying that these emerging businesses will be the new face of eCommerce and will bring new customers to the hosting industry.
Competition Within the Web Hosting Industry
The web hosting industry reached saturation in 2019, with big names in shared hosting (GoDaddy, NameCheap, HostGator), cloud IaaS (DigitalOcean, AWS, and Google Compute Engine), and many providers offering hybrid infrastructure models. Add thousands of resellers and affiliate sellers, and you can imagine the density of offers and choice. The things that distinguish a web hosting brand from the rest are the quality of its customer support and the flexibility of its operations.
The competition is by no means over, as brands transitioning to online commerce continue to fuel the web hosting industry. This is a positive sign for eCommerce businesses as well because, in order to win more business, web hosting brands have to raise the quality of their services and deliver them at very competitive prices.
What to Expect in the Rest of 2020?
To be fair, this is a difficult question to answer because COVID-19 conditions are changing very quickly. China has lifted the lockdown in Wuhan, and countries like New Zealand and Australia have managed to contain the spread of the virus by implementing early lockdown strategies. As a result, these countries have been able to resume day-to-day business. However, the major global business centres, the USA and Europe, are still fighting hard to contain the epidemic as quickly as possible.
The eCommerce and web hosting industries are set to flourish and grow rapidly in 2020 and beyond. There is no denying that the eCommerce industry has become more competitive because of the influx of new brands. Add the changes in consumer psychology to the mix and you can easily imagine stiff competition that benefits both consumers and the overall eCommerce industry.
For eCommerce businesses, this is a time to experiment with new strategies in order to adapt to the change and tweak operations to capture a larger piece of the pie.
Conclusion
The eCommerce and web hosting industries are evolving rapidly during the ongoing COVID-19 crisis, and it is too early to fully understand the magnitude of the impact. However, experts are certain that changes in both industries will be fast-tracked.
For entrepreneurs, this is a good time to get into the eCommerce arena with new ideas and experiments. What worked for the industry just six months ago is simply not valid anymore. With the right strategy, the eCommerce industry can be disrupted all over again.
Let’s keep a positive outlook on the global conditions forced by the virus, and allow businesses to grow along with the change.
What do you think is the biggest opportunity available for vendors in both industries? Let us know in the comments below.
The lockdown due to Covid-19 has changed the way the entire business world operates.
Reaching out to potential clients or customers now has to happen in innovative ways, and investing in UX is regarded as one of the most effective moves a business can make during this quarantine period.
In this blog, we will see how a well-designed UI/UX for your online business platform can help you do exactly that.
Keep reading!
Why is the UX design process important for your business during the COVID-19 lockdown?
The term “UX design process” has been doing the rounds on online platforms during the lockdown. Why?
It’s because the business world has come to understand the value of really good UX design, and many leading UI/UX designers agree. Some of the most significant benefits are discussed below.
Advantage 1: Investing in UX design significantly reduces your overall costs.
How? When the UX design process starts, a lot of research, analysis, and testing is done upfront. This makes implementing the final product much easier, and far less rework is needed in the later stages of development. The research conducted during UX design is thorough and structured so that every aspect of the product gets attention.
More and more companies have begun to realize the importance of cutting costs so that they can invest the savings in other areas. For a business to survive in today’s competitive world, especially during the corona lockdown, reducing operating costs plays a very significant role.
Also note that revising a design is much easier than changing the product once it is in development. Another highlight is catching usability issues, which are a primary headache while developing a product; the UX design process handles this very efficiently. An efficient UX design agency will create a prototype, which gives you an exact feel for the final product.
Advantage 2: User experience helps you increase your leads and convert them into paying customers.
Have you ever wondered how some businesses acquire plenty of work even during the corona lockdown?
Yes, it is by following a perfect UX design strategy. Such businesses attract customers even during a crisis. Whatever the situation, businesses need to stay active; there will always be people who need their products and services. The trick is to find the right people! Business owners need to promote their products to the public, and a good UX approach will help you reach your audience with ease. Optimizing your business website or mobile apps accordingly will do exactly that. Users should not get tired searching for the services they require; make the experience simple and attractive.
Advantage 3: UX design improves your SEO rankings!
A perfectly executed UX design boosts your SEO rankings, which is exactly what’s needed right now. Better SEO rankings mean more business in return. Google gives preference in its search results to websites that offer a better user experience.
The latest Google algorithm update clearly emphasizes the need for attractive and useful UI design on websites.
We cannot blame them for that, because they need to provide users with the best possible results. So if your business website offers a highly satisfying user experience, the rest follows!
Advantage 4: UX design improves your brand loyalty.
If your online platform has a great user experience, it helps retain your customers. They begin to trust your brand and will naturally come back every time they need your product or service. As the customer base grows, your business builds brand value for itself. A good user experience delights users by interacting and responding with them in practical ways. That’s why UX design is crucial for brand loyalty.
Advantage 5: UX Design increases word of mouth referrals.
Word-of-mouth referrals still have their importance in the digital era. A person will be interested in your service if someone they trust recommends your business. We all tend to recommend the best products to the people we care about. So if a person finds your product or service appealing, they will pass the details along to someone else.
But have you thought about how UX design helps in the word of mouth referrals?
It’s done through social media promotions, user reviews, and other interactions across digital platforms. A good UI/UX design team will make room for sharing your products or services through social media. All those good ratings and reviews ultimately help you capture more business in return!
Summing Up
The UX design process is one of the best ways to attract more customers and improve your business. It not only helps you promote your business but also increases your brand loyalty, SEO rankings, and much more. And it all comes down to creating not just fantastic user experiences, but making end users’ lives easier!
Investing in UX is worth it, especially during the Covid lockdown, and the advantages mentioned above reinforce that. Contact a leading UX consulting company if you are interested in creating an amazing user experience for your online business!
There are a lot of WordPress themes for different purposes on the web. Usually, when choosing a theme, we pay attention to criteria such as loading speed, customization tools, pre-made layouts, compatibility with page builders, etc.
In this article, we would like to compare three of the most popular free themes. We have picked Gutenix, Astra, and OceanWP themes to compare them and help you choose the best one for your project.
Gutenix is a fully customizable and flexible WordPress theme that suits any website topic. It makes a great platform for introducing your business and services. A nice bonus is that the theme is compatible with the latest versions of WordPress and its most popular builders, such as Gutenberg, Elementor, and Brizy. Being WooCommerce-ready, Gutenix lets you build not only portfolio websites but also websites for online selling.
A website based on the Astra WordPress theme can be customized easily without writing a single line of code, which is a good bonus, especially for inexperienced developers. With this theme, you are free to choose whichever builder you like most. Astra supports integration with your favorite WordPress tools like WooCommerce, Yoast, Toolset, etc. Set your imagination free and build a strong web platform using this theme.
Looking for the perfect theme for your website? Then check out the OceanWP WordPress theme, with lots of advantages and features inside. With its help, you can build a portfolio, blog, business platform, or WooCommerce site. The theme is fast and translation-ready, both important features for users. OceanWP lets you build pages with Elementor, Brizy, Beaver Builder, Divi, etc.
Impress with high performance
Each of these free themes can boast light weight and clean code. They were built with high speed and SEO in mind. No one likes it when a website loads endlessly; users usually leave sites that take more than three seconds to load. With Gutenix, Astra, and OceanWP you will never face that trouble. We tested them with the most popular speed-testing tools, and the results are impressive.
Gutenix loads in only 0.9 s, and the Google PageSpeed Insights test showed a 96% speed score. OceanWP is about as fast as Gutenix, with a speed score of 98%. Astra has a 400 ms loading time and scores 95% on Google PageSpeed Insights.
Gutenix, Astra, and OceanWP can deservedly be named among the best free WordPress themes to work with, thanks to their high speed and performance optimization. With their help, you can effortlessly build a strong, powerful website that can reach the top of search results. Moreover, all of them are SEO-friendly, which is also important for high rankings.
Explore customization options
Despite being newer to the WordPress market, the Gutenix theme includes a huge number of customization options and several pre-made post and page layouts. Using the live customizer, you can apply changes and preview them simultaneously without reloading the page.
Gutenix allows you to apply a full-width container or change the sidebar position (right or left). You will also find 8 unique header styles and more than 650 fonts on board. Using them, you can make your text more attractive, eye-catching, and readable.
Using the WordPress Customizer, you can also tune the Astra theme to your tastes. Pre-made header designs and built-in layouts for pages, posts, and other website parts are at your service. With the Astra theme, you can manage the content and meta of your blog page, change its width, and more. You can also preview the changes in real time.
As for the OceanWP theme, it provides users with a simple and time-saving customization process. It is equipped with an intuitive interface and a few essential blocks. To extend the website-building process and enrich your pages with a diversity of additional blocks, you can purchase an extension bundle.
Responsiveness
As the majority of users now browse websites on tablets and mobile devices, it is very important to choose a responsive theme. Using the Gutenix, Astra, or OceanWP theme, you can be sure that your pages will look great on any screen size. While customizing themes in the WordPress Customizer, you can check the page view on desktop, tablet, and mobile.
A nice bonus is that the Gutenix and Astra themes are compatible with any existing web browser.
Support included
When you download the Gutenix and Astra themes, you get extensive documentation with all the necessary information about the themes’ features and workflow. The developers have described all the steps and niceties of working with their products. This will be especially useful for those who are not experienced at website building.
Free after-sales support is also included with all the themes. The OceanWP theme comes with professional and fast support, while Gutenix and Astra, besides their support systems, also have Facebook chats.
Each of these themes is a solid choice and will make a great base for your future website. Which one to choose is entirely your decision. Based on our overview, you can decide which aspects are more important to you and which theme is better suited to your aims.
When designing a logo, you want it to stand out from the crowd, yet still be really simple.
Sometimes the designer is really clever and makes the logo very simple, yet includes a hidden message within the logo that has a deeper meaning.
In today’s article, we’re going to cover 20 logos with hidden messages.
Some logos you will have seen before, and some may be completely new to you, but hopefully, you will enjoy them all.
Amazon
The Amazon logo is extremely simple, and while the arrow may just look like a smile, it actually points from a to z.
This represents that Amazon sells everything from a to z, as well as the smile on a customer’s face when they buy a product.
Goodwill
Goodwill. The one thrift store we all know and love.
When you look at the logo, you see it’s a person smiling, probably happy that they just donated their clothes or just copped an awesome find for a great deal!
Now look at the letter ‘g’ from ‘goodwill’.
You’ll see that same smiling person in the first letter of the logo!
LG
At first glance, you might think LG’s logo may just be the winking face of a happy client.
But look closer.
You’ll see that in the winky face logo, there’s actually an L in the center, and the face is a G!
Super clever on their part.
Pinterest
Pinterest is one of my favorite social media apps out there.
It’s always full of great ideas and new trends that I can get inspired from, and then I can pin those images to one of my boards.
Hence, “Pin-terest”.
Duh.
Anyway, to the untrained eye, you might just see the letter “P” in the logo.
But if you really pay close attention, you’ll see that the letter “P” is actually a pin.
Michael Deal, the co-designer of this logo said, “For most of the project, I had avoided making visual reference to the image of a pin because it seemed too literal. But the “P” started to lend itself too well to the shape of a map pin.”
Toyota
Next up, we have Toyota.
This one is definitely one of the coolest of them all, and if it hasn’t received some kind of award already, well, it definitely deserves one.
If you didn’t already know, the Toyota logo has the entire word “Toyota” written in it!
Here’s a diagram to explain it better.
Isn’t that the coolest thing you’ve ever seen?
BMW
BMW just recently updated its logo and it looks amazing.
I wrote an entire piece about the new BMW logo because that’s how much I loved it.
But anyway, let’s talk about the hidden message here!
This logo actually represents a propeller in motion, with the blue part representing the sky, and the white part representing the propeller.
BMW’s logo is a tribute to the company’s history in aviation.
Baskin Robbins
The Baskin Robbins logo may look like it includes a simple BR above the name. But if you take another look, you will see that it includes a pink number 31. This is a reference to their original and iconic 31 flavors.
Chick-fil-a
The Chick-fil-a logo incorporates a chicken into the C. Although this isn’t very hidden, it is still very clever.
Eighty20
The eighty20 logo is a bit of a geeky one to figure out: the two rows of squares represent a binary sequence, with the blue squares being 1s and the grey squares being 0s.
That gives 1010000, which is 80 in binary, and 0010100, which is 20.
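If you want to verify the math yourself, any language with a binary parser will do; here’s a quick JavaScript sketch:
console.log(parseInt("1010000", 2)); // 80
console.log(parseInt("0010100", 2)); // 20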
F1
The F1 logo is a fairly simple one to figure out. The negative space in the middle creates the 1.
Facebook Places
If you didn’t already know, Facebook Places is Facebook’s new geolocation product, in direct competition with the current leader in that area, Foursquare.
Now, if you take another look at the Facebook Places logo, you will notice there is a 4 in a square.
Now is this a coincidence or is it a dig at Foursquare?
Fedex
The FedEx logo looks like a plain, text-based logo.
But if you take a second look, you will see an arrow between the E and the X, which represents the speed and accuracy of the company’s deliveries.
Milwaukee Brewers
The old Milwaukee Brewers logo may look like a simple catcher’s mitt holding a ball, but if you take a second glance, you will see the team’s initials, M and B.
Museum of London
The Museum of London logo may look like just a piece of modern design, but it actually represents the geographic area of London as it grew over time.
NBC
The NBC logo has a peacock hidden above the text, looking to the right.
This represents the company’s motto to look forward, not back, and also that they are proud of the programs they broadcast.
Northwest Airlines
The old Northwest Airlines logo may look simple, but if you take a closer look, the symbol on the left actually represents both an N and a W, and because it is enclosed within the circle, it also represents a compass pointing northwest.
Piano Forest
The Piano Forest logo may look like a simple text logo with trees above it, but if you take another look you will see that the trees actually represent keys on a piano.
Toblerone
The Toblerone logo contains the image of a bear hidden in the Matterhorn mountain, which is where Toblerone originally came from.
Tostitos
The Tostitos logo includes two people sharing a chip over a bowl of salsa, which conveys the idea of people connecting with each other over a bowl of chips.
Treacy Shoes
The Treacy Shoes logo is a very cute logo, with a shoe hidden between the t and the s.
In Conclusion
Making a clever logo doesn’t always come easily.
It can take weeks, months, and even years to come up with something mindblowing.
Other times, the idea comes to the forefront of your brain and you can see it clear as day.
Gather inspiration from these amazing logos with hidden messages and start making your own!
You could design the next big logo.
Did I miss any other big logos that have hidden messages within them?
It’s hard to imagine writing production-ready JavaScript without a tool like Babel. It’s been an undisputed game-changer in making modern code accessible to a wide range of users. With this challenge largely out of the way, there’s not much holding us back from really leaning into the features that modern specifications have to offer.
But at the same time, we don’t want to lean in too hard. If you take an occasional peek into the code your users are actually downloading, you’ll notice that sometimes, seemingly straightforward Babel transformations can be especially bloated and complex. And in a lot of those cases, you can perform the same task using a simple, “old school” approach — without the heavy baggage that can come from preprocessing.
Let’s take a closer look at what I’m talking about using Babel’s online REPL — a great tool for quickly testing transformations. Targeting browsers that don’t support ES2015+, we’ll use it to highlight just a few of the times when you (and your users) might be better off choosing an “old school” way to do something in JavaScript, despite a “new” approach popularized by modern specifications.
As we go along, keep in mind that this is less about “old vs. new” and more about choosing the best implementation that gets the job done while bypassing any expected side effects of our build processes.
Let’s build!
Preprocessing a for..of loop
The for..of loop is a flexible, modern means of looping over iterable collections. It’s often used in a way very similar to a traditional for loop, which may lead you to think that Babel’s transformation would be simple and predictable, especially if you’re just using it with an array. Not quite. The code we write may only be 98 bytes:
function getList() {
return [1, 2, 3];
}
for (let value of getList()) {
console.log(value);
}
But the output results in 1.8kb (a 1736% increase!):
"use strict";
function _createForOfIteratorHelper(o) { if (typeof Symbol === "undefined" || o[Symbol.iterator] == null) { if (Array.isArray(o) || (o = _unsupportedIterableToArray(o))) { var i = 0; var F = function F() {}; return { s: F, n: function n() { if (i >= o.length) return { done: true }; return { done: false, value: o[i++] }; }, e: function e(_e) { throw _e; }, f: F }; } throw new TypeError("Invalid attempt to iterate non-iterable instance.\nIn order to be iterable, non-array objects must have a [Symbol.iterator]() method."); } var it, normalCompletion = true, didErr = false, err; return { s: function s() { it = o[Symbol.iterator](); }, n: function n() { var step = it.next(); normalCompletion = step.done; return step; }, e: function e(_e2) { didErr = true; err = _e2; }, f: function f() { try { if (!normalCompletion && it.return != null) it.return(); } finally { if (didErr) throw err; } } }; }
function _unsupportedIterableToArray(o, minLen) { if (!o) return; if (typeof o === "string") return _arrayLikeToArray(o, minLen); var n = Object.prototype.toString.call(o).slice(8, -1); if (n === "Object" && o.constructor) n = o.constructor.name; if (n === "Map" || n === "Set") return Array.from(o); if (n === "Arguments" || /^(?:Ui|I)nt(?:8|16|32)(?:Clamped)?Array$/.test(n)) return _arrayLikeToArray(o, minLen); }
function _arrayLikeToArray(arr, len) { if (len == null || len > arr.length) len = arr.length; for (var i = 0, arr2 = new Array(len); i < len; i++) { arr2[i] = arr[i]; } return arr2; }
function getList() {
return [1, 2, 3];
}
var _iterator = _createForOfIteratorHelper(getList()),
_step;
try {
for (_iterator.s(); !(_step = _iterator.n()).done;) {
var value = _step.value;
console.log(value);
}
} catch (err) {
_iterator.e(err);
} finally {
_iterator.f();
}
Why didn’t it just use a for loop for this? It’s an array! Apparently, in this case, Babel doesn’t know it’s handling an array. All it knows is that it’s working with a function that could return any iterable (array, string, NodeList), and it needs to be ready for whatever that value could be, based on the ECMAScript specification for the for..of loop.
We could drastically slim the transformation by explicitly passing an array to it, but that’s not always easy in a real application. So, to leverage the benefits of loops (like break and continue statements), while confidently keeping bundle size slim, we might just reach for the for loop. Sure, it’s old school, but it gets the job done.
function getList() {
return [1, 2, 3];
}
var list = getList();
for (var i = 0; i < list.length; i++) {
  console.log(list[i]);
}
Dave Rupert blogged about this exact situation a few years ago and found that forEach, even polyfilled, was a good solution for him.
Preprocessing Array […Spread]
Similar deal here. The spread operator can be used with more than one class of objects (not just arrays), so when Babel isn’t aware of the type of data it’s dealing with, it needs to take precautions. Unfortunately, those precautions can result in some serious byte bloat.
"use strict";
function _toConsumableArray(arr) { return _arrayWithoutHoles(arr) || _iterableToArray(arr) || _unsupportedIterableToArray(arr) || _nonIterableSpread(); }
function _nonIterableSpread() { throw new TypeError("Invalid attempt to spread non-iterable instance.\nIn order to be iterable, non-array objects must have a [Symbol.iterator]() method."); }
function _unsupportedIterableToArray(o, minLen) { if (!o) return; if (typeof o === "string") return _arrayLikeToArray(o, minLen); var n = Object.prototype.toString.call(o).slice(8, -1); if (n === "Object" && o.constructor) n = o.constructor.name; if (n === "Map" || n === "Set") return Array.from(o); if (n === "Arguments" || /^(?:Ui|I)nt(?:8|16|32)(?:Clamped)?Array$/.test(n)) return _arrayLikeToArray(o, minLen); }
function _iterableToArray(iter) { if (typeof Symbol !== "undefined" && Symbol.iterator in Object(iter)) return Array.from(iter); }
function _arrayWithoutHoles(arr) { if (Array.isArray(arr)) return _arrayLikeToArray(arr); }
function _arrayLikeToArray(arr, len) { if (len == null || len > arr.length) len = arr.length; for (var i = 0, arr2 = new Array(len); i < len; i++) { arr2[i] = arr[i]; } return arr2; }
function getList() {
return [4, 5, 6];
}
console.log([1, 2, 3].concat(_toConsumableArray(getList())));
Instead, we could cut to the chase and just use concat(). The difference in the amount of code you need to write isn’t significant, it does exactly what it’s intended to do, and there’s no need to worry about that extra bloat.
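For reference, here’s a minimal sketch of that alternative, reusing the getList() function from the example above:
function getList() {
  return [4, 5, 6];
}

// concat() returns a new array, so no helper functions are needed
console.log([1, 2, 3].concat(getList())); // [1, 2, 3, 4, 5, 6]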
You might have seen this more than a few times. We often need to query for several DOM elements and loop over the resulting NodeList. In order to use forEach on that collection, it’s common to spread it into an array.
[...document.querySelectorAll('.my-class')].forEach(function (node) {
// do something
});
But like we saw, this makes for some heavy output. As an alternative, there’s nothing wrong with running that NodeList through a method on the Array prototype, like slice. Same result, but far less baggage:
[].slice.call(document.querySelectorAll('.my-class')).forEach(function(node) {
// do something
});
A note about “loose” mode
It’s worth calling out that some of this array-related bloat can also be avoided by leveraging @babel/preset-env‘s loose mode, which compromises in staying totally true to the semantics of modern ECMAScript, but offers the benefit of slimmer output. In many situations, that might work just fine, but you’re also necessarily introducing risk into your application that you may come to regret later on. After all, you’re telling Babel to make some rather bold assumptions about how you’re using your code.
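For reference, enabling loose mode is a small change to your Babel configuration, something like this in a .babelrc file (a sketch; your existing preset options may differ):
{
  "presets": [
    ["@babel/preset-env", { "loose": true }]
  ]
}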
The main takeaway here is that sometimes, it might be more suitable to be intentional about the features you use, rather than investing more time into tweaking your build process and potentially wrestling with unseen consequences later.
Preprocessing default parameters
This is a more predictable operation, but when it’s repeatedly used throughout a codebase, the bytes can add up. ES2015 introduced default parameter values, which tidy up a function’s signature when it accepts optional arguments. Here we are at 75 bytes:
function getName(name = "my friend") {
return `Hello, ${name}!`;
}
But Babel can be a little more verbose than expected with its transformation, resulting in 169 bytes:
"use strict";
function getName() {
var name = arguments.length > 0 && arguments[0] !== undefined ? arguments[0] : "my friend";
return "Hello, ".concat(name, "!");
}
As an alternative, we could avoid using the arguments object altogether and simply check if a parameter is undefined. We lose the self-documenting nature that default parameters provide, but if we’re really pinching bytes, it might be worth it. And depending on the use case, we might even be able to get away with checking for falsey values to slim it down even more.
function getName(name) {
name = name || "my friend";
return `Hello, ${name}!`;
}
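For completeness, here’s what the stricter undefined check mentioned above might look like (a sketch; unlike the falsey check, it preserves legitimate falsey arguments like an empty string):
function getName(name) {
  // fall back only when the argument was actually omitted
  if (name === undefined) name = "my friend";
  return `Hello, ${name}!`;
}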
Preprocessing async/await
The syntactic sugar of async/await over the Promise API is one of my favorite additions to JavaScript. Even so, out of the box, Babel can make quite the mess out of it.
"use strict";
function asyncGeneratorStep(gen, resolve, reject, _next, _throw, key, arg) { try { var info = gen[key](arg); var value = info.value; } catch (error) { reject(error); return; } if (info.done) { resolve(value); } else { Promise.resolve(value).then(_next, _throw); } }
function _asyncToGenerator(fn) { return function () { var self = this, args = arguments; return new Promise(function (resolve, reject) { var gen = fn.apply(self, args); function _next(value) { asyncGeneratorStep(gen, resolve, reject, _next, _throw, "next", value); } function _throw(err) { asyncGeneratorStep(gen, resolve, reject, _next, _throw, "throw", err); } _next(undefined); }); }; }
function fetchSomething(_x) {
return _fetchSomething.apply(this, arguments);
}
function _fetchSomething() {
_fetchSomething = _asyncToGenerator( /*#__PURE__*/regeneratorRuntime.mark(function _callee(url) {
var response;
return regeneratorRuntime.wrap(function _callee$(_context) {
while (1) {
switch (_context.prev = _context.next) {
case 0:
_context.next = 2;
return fetch(url);
case 2:
response = _context.sent;
_context.next = 5;
return response.json();
case 5:
return _context.abrupt("return", _context.sent);
case 6:
case "end":
return _context.stop();
}
}
}, _callee);
}));
return _fetchSomething.apply(this, arguments);
}
fetchSomething("https://google.com");
You’ll notice that Babel doesn’t convert async code into promises out of the box. Instead, they’re transformed into generators that rely on the regenerator-runtime library, making for a lot more code than what’s written in our IDE. Thankfully, it’s possible to go the Promise route by means of a plugin, like babel-plugin-transform-async-to-promises. Instead of that 1.5kb output, we end up with much less, at 638 bytes:
"use strict";
function _await(value, then, direct) {
if (direct) {
return then ? then(value) : value;
}
if (!value || !value.then) {
value = Promise.resolve(value);
}
return then ? value.then(then) : value;
}
var fetchSomething = _async(function (url) {
return _await(fetch(url), function (response) {
return _await(response.json());
});
});
function _async(f) {
return function () {
for (var args = [], i = 0; i < arguments.length; i++) {
args[i] = arguments[i];
}
try {
return Promise.resolve(f.apply(this, args));
} catch (e) {
return Promise.reject(e);
}
};
}
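Wiring that plugin up is, again, a matter of Babel configuration (a sketch, assuming the package has been installed from npm):
{
  "plugins": ["babel-plugin-transform-async-to-promises"]
}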
But, as mentioned before, there’s risk in relying on a plugin to ease pain like this. When doing so, we’re impacting transformations in the entire project, and also introducing another build dependency. Instead, we could consider just sticking with the Promise API.
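Rewriting the earlier example directly against the Promise API might look like this; no runtime helpers or extra plugins required:
function fetchSomething(url) {
  return fetch(url).then(function (response) {
    return response.json();
  });
}

fetchSomething("https://google.com");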
Preprocessing classes
For more syntactic sugar, there’s the class syntax introduced with ES2015, which provides a streamlined way to leverage JavaScript’s prototypal inheritance. But if we’re using Babel to transpile for older browsers, there’s nothing sweet about the output.
"use strict";
function _classCallCheck(instance, Constructor) { if (!(instance instanceof Constructor)) { throw new TypeError("Cannot call a class as a function"); } }
function _defineProperties(target, props) { for (var i = 0; i < props.length; i++) { var descriptor = props[i]; descriptor.enumerable = descriptor.enumerable || false; descriptor.configurable = true; if ("value" in descriptor) descriptor.writable = true; Object.defineProperty(target, descriptor.key, descriptor); } }
function _createClass(Constructor, protoProps, staticProps) { if (protoProps) _defineProperties(Constructor.prototype, protoProps); if (staticProps) _defineProperties(Constructor, staticProps); return Constructor; }
var Robot = /*#__PURE__*/function () {
function Robot(name) {
_classCallCheck(this, Robot);
this.name = name;
}
_createClass(Robot, [{
key: "speak",
value: function speak() {
console.log("I'm ".concat(this.name, "!"));
}
}]);
return Robot;
}();
Much of the time, unless you’re doing some fairly involved inheritance, it’s straightforward enough to use a pseudoclassical approach. It requires slightly less code to write, and the resulting interface is virtually identical to a class.
function Robot(name) {
this.name = name;
this.speak = function() {
console.log(`I'm ${this.name}!`);
}
}
const rob = new Robot("Bob");
rob.speak(); // "I'm Bob!"
Strategic considerations
Keep in mind that, depending on your application’s audience, a lot of what you’re reading here might mean that your strategies to keep bundles slim may take different shapes.
For example, your team might have already made a deliberate decision to drop support for Internet Explorer and other “legacy” browsers (which is becoming more and more common, given that the vast majority of browsers support ES2015+). If that’s the case, your time might best be spent in auditing the list of browsers your build system is targeting, or making sure you’re not shipping unnecessary polyfills.
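A quick way to run that audit is to ask Browserslist itself which browsers your configuration currently resolves to (assuming your build tooling reads a Browserslist config, as most modern setups do):
npx browserslist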
And even if you are still obligated to support older browsers (or maybe you love some of the modern APIs too much to give them up), there are other options to enable you to ship heavy, preprocessed bundles only to the users that need them, like a differential serving implementation.
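One common implementation of differential serving is the module/nomodule pattern sketched below: browsers that understand ES modules load the slim modern bundle, while older ones fall back to the transpiled one (the file names here are hypothetical):
<script type="module" src="app.modern.js"></script>
<script nomodule src="app.legacy.js"></script>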
The important thing isn’t so much about which strategy (or strategies) your team chooses to prioritize, but more about intentionally making those decisions in light of the code being spit out by your build system. And that all starts by cracking open that dist directory to take a peek.
Pop open that hood
I’m a big fan of the new features modern JavaScript continues to provide. They make for applications that are easier to write, maintain, scale, and especially read. But as long as writing JavaScript means preprocessing JavaScript, it’s important to make sure that we have a finger on the pulse of what these features mean for the users that we ultimately aim to serve.
And that means popping the hood of your build process once in a while. At best, you might be able to avoid especially hefty Babel transformations by using a simpler, “classic” alternative. And at worst, you’ll come to better understand (and appreciate) the work that Babel does all the more.
I was reading Anna Kaley’s “Listboxes vs. Dropdown Lists” post the other day. It’s a fairly straightforward comparison between different UI implementations of selecting options. There is lots of good advice there. Classics like that you should use radio buttons (single select) or checkboxes (multiple select) if you’re showing five or fewer options, and the different options when the number of options grows from there.
One thing that isn’t talked about is how you implement these things. I imagine that’s somewhat on purpose, as the point is to talk UX, not tech. But how you implement them plays a huge part in UX. In web design and development circles, the conversation about these things usually involves whether you can pull them off with native controls, or if you need to rebuild them from scratch. If you can use native controls, you often should, because there is a ton of UX you get for free that might otherwise be lost or forgotten when you rebuild, like how everything works via the keyboard.
But even without custom styling, we still have some UI options. If you need to select one option from many, we’ve got radio buttons, but data- and end-result-wise, that’s the same as a <select>. If you need to select multiple options, we’ve got checkboxes, but data- and end-result-wise that’s the same as a <select multiple>.
You pick based on the room you have available and the UX of whatever you’re building.
In these difficult times, we are facing all sorts of new challenges, worrying about and being away from loved ones, virtually having no social life, and for some, working from home.
Though it seems like a dream at first, with you typing away on your laptop, staying in your pajamas, rocking a messy ponytail, and enjoying endless snacks (who needed the office’s cookie jar?), you eventually get the reality check you didn’t know you needed. Working from home is hard: disconnecting from your work life becomes trickier, and motivating yourself to continue when you think you deserve to make yourself a chocolate cake is just one of the challenges you may face.
There are positive sides to working from home when it’s done correctly: 85% of businesses that have implemented flexible work locations say it has made their company more productive. What if your work could transform your way of looking at things, and help your company on its way to success?
As working from home is set to become the new work life for many of us, here are some tips and tricks of the trade to stay productive. Y’know, instead of sipping copious amounts of coffee and scouring ASOS for clothes you and I both know you won’t wear until the lockdown is over. Time to take those sequin dresses out of your basket…
1. Create a routine
First things first: you need to create a routine. You may think that you can just take tasks as they’re thrown at you, but you need a routine like the one that helps you achieve all your goals in the office. If your home life is organized, your work life will match.
Start off your day with a simple routine, as if you were off to the office – make your bed, take a shower, have breakfast, and even get ready in your office outfit. It creates structure throughout the day and makes you feel ready to sit down and get to work! It also makes you feel at peace and motivated.
Start off by making your bed, jumping in the shower, and throwing your hair up into a bun: little things that will make your day seem more structured. For your commute, walk to the kitchen and grab yourself some breakfast, taking time to chill out before starting your day with a fresh pair of eyes.
2. Avoid the temptation
It is tempting to stay in pajamas all day, every day, and forget what a pair of jeans feels like. However, staying in pajamas or gym wear will not help your productivity; in fact, it will slow you down and make you feel sleepy. At least don a dressy top for your Zoom calls, and wear a pair of jeans from time to time to feel empowered and look like you mean business, even on those down days. Would you wear pajamas to the office? No! So there’s your answer.
3. Create a workspace
Working from your bed, or slouched back into the depths of your sofa, may seem like a cozy and simple way to get on with work, but is that actually going to help? Finding a dedicated workspace, especially a special place to sit comfortably, will make you more productive. Be comfortable but professional; this will boost your productivity and your mood. On Zoom calls, you’ll look put-together, because your colleagues don’t really want to see you slumped into the couch. This is one of the many ways you have to discipline yourself when working from home.
Make your workspace more interesting. If you’re lucky enough to be able to transform a corner or a room into an office, get in some cute stationery and plenty of literature to keep you inspired. Drop in a few self-development books and trusty files to help you on your way to working-from-home success!
However, keep your workspace professional: 11% of people surveyed while working from home in the US noticed something unprofessional in the background of a colleague’s video. Keep partners out of sight when possible, and don’t show off the washing-up left over from the night before; that’s a no-no!
4. Dedicate some time to your appearance
Who is going to see you when you’re at home? Well, between Zoom calls, Whatsapp group conversations, and the odd Facetime in the evening, you’d be surprised!
46% of people surveyed while working from home in the US admitted they spend more time on their personal appearance before a video call.
Why stop at video calls? You are allowed to feel glamorous even when you’re not sitting in front of a webcam. Treat yourself to a pamper session with a brand new hairstyle: give those hot rollers a trial spin and find new hairstyles to rock when the lockdown is over. Now’s a better time than any to brush up on your skills.
Follow YouTube tutorials and the beauty experts on Instagram to become more beauty-independent. Finally learn to ace contouring and highlighting, try new makeup looks, or even learn how to color-treat your hair at home. You’ll soon be able to give your friends and family makeovers with your new-found skills!
5. Be disciplined
As well as incorporating a routine into your personal life, you need to be disciplined in your work life. Treating yourself to a two-hour rest for half an hour’s work isn’t the key to a successful day! Instead, keep motivated and be tough on yourself; prioritize your tasks so you stay within your timeframe while achieving your daily targets.
Taking even breaks while still getting your daily tasks done will pave the way to a hard-working and successful lifestyle. As you create structure, create one for your tasks too: after every task, give yourself a break to take a quick walk outside, or around the house, and give yourself a breather. You’ll come back refreshed!
6. Disconnect
The problem with working from home is that when you feel the work piling up, when do you stop? Continuously working without any time for yourself is neither productive nor satisfying. We all need to switch off and take some time out for ourselves. “I’ve also enjoyed taking advantage of my extra free time. I feel like I’m always on the go, so it’s been nice being able to catch up with friends I haven’t talked to in a while, watch TV shows I’ve been wanting to see and exercise more often,” says All Things Hair US’ Caitlin Reddington.
Take that time out for Facetimes with your friends and family, playing games online (Skribbl is fun to play with 10 of your friends online), meditating, getting more active and trying out those YouTube workouts you’ve had in your “Watch Later” list for the past 6 months… Whatever makes you happy – do it!
7. If you can, go out
Within reason, getting some fresh air will make you feel a lot more human. Whether it’s just heading to the shops for some essentials or going for a jog, catching a few sun rays and seeing the great outdoors will boost your confidence and make you more positive during these strange times. Use your lunch breaks to head to the park and get some fresh air; it’s soothing and relaxing, even after manic days when work does not seem to stop piling up! Once you’re back at your desk, you’ll feel ready to face whatever work throws at you!
8. Be grateful
Though working from home can get you down, it’s time to be grateful. How about setting up a journal to write about what you’re grateful for each day? Or things that have made you smile? The world might be bleak right now, but you can bring some joy to your life.
Don’t let the trivial stuff get you down, and enjoy the little things! Don’t take for granted the little things that make up your regular daily routine. Whether it’s walking to the office or grabbing lunch with a colleague, you’ll appreciate it a whole lot more once this difficult period is over! Now is also a good time to shake off those unnecessary fears; we are all worried or scared of something, but some of those fears are really just trivial. Learn to love the little things, and shake away the first-world woes we are used to whining about!
Appreciate your loved ones and make time for them, even when you can’t see them face to face. Create a feel-good playlist, and watch museum tours online to keep a positive and fun frame of mind outside of work too!
9. Learn something new
Use your spare time to swot up! Learn a new skill and feel even more productive. Learning a new language could help you deal better with working from home, as it will help you challenge yourself. It also has health benefits: research shows that older people who speak several languages are less likely to develop symptoms of dementia. The bilingual brain is more focused on the job at hand, so skills like these will help you remain focused at work too!
Alternatively, you can learn an active skill: get into yoga on Instagram Lives, subscribe to your gym’s work-out videos on social media, and use this time to become a better version of yourself! Others are brushing up on their cooking skills; cooking is therapeutic and rewarding, and whether you live alone or with someone, it’s a real pleasure to cook something tasty during this difficult time. Once you’re back in the office, you’ll impress everyone with your 5-star cooking!
10. Avoid interruptions
When isolating with your family, roommates, or a special someone, things can sometimes get a little crazy. Perhaps your roommate is on a Zoom call of their own, the kids are running all over the place, or the pets are causing havoc. The important thing is to avoid interruptions and distractions. Find a routine that allows you and your loved ones to be productive, all while keeping it fair. Otherwise, you won’t get into deep work, and you’ll put off all the hefty tasks you’ve been trying to avoid for weeks.
Also, only give yourself a break once you have finished a task; stopping and starting mid-way is very distracting, and it makes it even harder to get back to work!
11. Do take breaks
Of course, we’re not saying don’t take breaks at all! In fact, separating each task with a break is important: it gives you time to let your mind wander, grab a snack and a drink, and send a few texts before getting back to work. If you spread out your breaks evenly, you won’t have to worry as much about being distracted. According to an Airtasker survey, the most effective way to remain productive while working at home is to take breaks (37%). Look out for each other, too: if you feel a colleague is sounding a little drained, get them to take a break and look after themselves. Show some solidarity!
12. Keep in touch
Though the idea of incessant Zoom calls may worry you, it’s good to keep in touch with your team, both to boost team morale and to help everyone feel like they’re on track. Team calls also help everyone know what can be improved and what is coming up. Video calls are great for feeling closer to your colleagues and reconnecting during this time apart. For those of you who are a little camera-shy, audio calls and Whatsapp group conversations will do the trick!
For a fun take on the work group conversation, we also recommend creating a non-work-related group chat with your colleagues to keep in touch, send your favorite memes and videos, and perhaps even share your new recipes. It reunites the whole company and creates a special bond between you all! You might even make new work friends!
The web has made it all too easy for consumers to look up anything and everything they’re interested in or have questions about. “Pet stores near me.” “Best web hosting 2020.” “Tom Brady net worth.”
And it’s with this easy access to data that consumers have grown pickier about who they do business with. Because if they can get answers to all the other questions in their lives, why can’t they find out everything there is to know about a company they intend to buy from?
As such, we’re going to see more companies lean towards honest and straightforward approaches than in previous years… and that means web designers need to be ready to help clients communicate that transparency through their websites.
What Web Designers Can Do to Help Brands Build Trust
Transparency and trust go hand-in-hand in the minds of consumers. A report from SproutSocial provides additional insight into why it’s so important to them.
Although the report focuses on transparency in social media, at its core it’s looking at how brand transparency translates into consumer trust.
Here is one of the key takeaways:
When brands are honest with consumers about things like their internal workings, pricing, values, and so on, their customers become more loyal. And, not only that, they become active advocates for the brand.
As for what your visitors and prospects consider as “transparency”, the most common things they look for are clarity, openness, integrity, and honesty.
We can use this information to better present information on clients’ websites. Here’s how:
1. Be Clear About the Solution First Thing
53% of consumers define brand transparency as clarity. And what better way to be clear than to address their pain and provide your solution right away?
In fact, you could take a page out of RE/MAX‘s book and take all other distractions out of the way:
There is no navigation for the RE/MAX website save for the customer portal link. While you might not be able to get away with that exact design choice on your website, you could tuck your navigation under a hamburger menu to make sure the main thing in view is the call-to-action.
By removing other options from view and painting a very strong argument for why your solution works (e.g. “Each year, our agents help hundreds of thousands of families buy or sell a home”), there’s nothing stopping visitors from getting right to it. You’ve created the shortest, easiest, and clearest pathway to their pain relief.
2. Openly Display Customer and Client Reviews
One of the problems with displaying testimonials on a website is that the clients’ words are filtered through the company before they reach prospective clients’ eyes. In addition, brands obviously only want to share the most flattering of reviews, which can lead to some deception (whether intentional or not).
More than anything, consumers want brands to be open (59% of those surveyed defined transparency as openness). So, we need to do away with these overly flattering portraits of brands and start being more honest with prospects.
For service-based businesses, the solution is simple:
Encourage clients to leave reviews on the company’s Google or Facebook business page. You can put a link to those pages on the website so visitors see that honest reviews are welcome.
Use a reviews widget to display your online reviews — the good and the bad — on your website.
For ecommerce businesses, this is a little easier to implement, as product reviews are commonplace. So long as there’s no manipulation of the data and visitors are able to see true reviews, there’s not much else to do other than configure a product reviews and ratings system like the one Olay has:
When you include reviews on your website, make sure a ratings sorting feature is included. That way, if customers want to see what all the bad reviews are saying, they can quickly get to them.
3. Maintain Integrity When It Comes to Data Collection
Privacy has been a major concern for consumers for years. But companies (and their web developers) found a solution amidst the release of GDPR: the cookie consent bar.
The only problem is that the cookie consent request is everywhere. And as tends to happen with consumers, banner blindness has led to more and more visitors ignoring those requests and clicking “OK” or “Allow” simply to get them out of the way.
Blind acceptance of a website’s privacy policies is not good for the brand nor the consumer. So, what some websites do now to appeal to the 23% of people who consider integrity the most important part of transparency is this:
Use just-in-time privacy notices that display only when visitors are about to share their information.
Include a “Do Not Sell My Personal Data” link at the bottom of websites as Tide does.
With the passing of the California Consumer Privacy Act, these statements allow California residents to indicate which tracking cookies they will permit:
There are some cookie consent tools that provide for this level of user control, but not all, which is why this CCPA statement is a big step in the right direction.
4. Be 100% Honest About Pricing and Other Fees
It’s not always easy for consumers to decide what to spend their money on, what with the variety of options and the distractions suggesting it might be better spent elsewhere. So, when your website shows pricing that seems too good to be true, don’t be surprised when shoppers abandon the purchase upon discovering it really is.
With 49% of consumers equating transparency with honesty, you can expect unexplained discrepancies between the ticket price and the price at checkout to cause issues for your brand.
When designing product-related pages on an ecommerce site, consider the best way to inform your shoppers without compromising the on-page experience.
Ticketmaster, for instance, does an amazing job of this.
For starters, this pop-up is what visitors see before they ever get a chance to look at ticket prices for upcoming NFL playoff games:
It’s just one way the website prepares them for any surprises.
Another way the site handles this well is with this well-placed reference to ticket fees:
The word “Fees” is a hyperlink that takes customers to the FAQs — one of many places they can go on the site for pricing-related questions:
Then there’s this accordion dropdown before checkout that details the fees that bring the total from $360 to $424:
This way, Ticketmaster customers are 100% prepared for what they’re about to find when they pull out their credit card to pay.
If your website has a high cart abandonment rate, there’s a good chance the issue has to do with the final costs. So, if your website hasn’t gently reminded shoppers along the way of what they’ll pay, it’s time to build more of that into the journey.
Transparency and Trust: Making the Connection for Your Clients
It’s understandable why consumers want to give their money to trustworthy brands. There are just too many options out there today. Why should a purchase ever require a deep-dive analysis into every company, every option, every product or service? By finding brands they can trust, their lives become much easier.
As a web designer, you have an important role to play in bringing prospective customers or clients to that conclusion.
Performance degradation is a problem we face daily. We can put effort into making an application blazing fast, but we soon end up back where we started: new features keep being added, and we don’t always give a second thought to the packages we constantly add and update, or to the complexity of our code. It’s generally about small things, but it’s still all about the small things.
We can’t afford to have a slow app: performance is a competitive advantage that can bring and retain customers. Nor can we afford to regularly spend time optimizing the app all over again; that is costly and complex, which means that despite all of the business benefits of performance, it’s hardly profitable. The first step in solving any problem is making it visible, and this article will help you do exactly that.
Note: If you have a basic understanding of Node.js, a vague idea of how your CI/CD works, and care about the performance of your app or the business advantages it can bring, then we are good to go.
How To Create A Performance Budget For A Project
The first questions we should ask ourselves are:
“What makes a project performant?”
“Which metrics should I use?”
“Which values of these metrics are acceptable?”
The metrics selection is outside of the scope of this article and depends highly on the project context, but I recommend that you start by reading User-centric Performance Metrics by Philip Walton.
From my perspective, it’s a good idea to use the size of the library in kilobytes as a metric for an npm package. Why? Because when other people include your code in their projects, they will likely want to minimize the impact of your code on their application’s final size.
For the site, I would consider Time To First Byte (TTFB) as a metric. It shows how much time it takes for the server to respond with something. This metric is important but quite vague, because it can include anything, from server rendering time to latency problems. So it’s nice to use it in conjunction with Server Timing or OpenTracing to find out what exactly it consists of.
But bear in mind: metrics are always context-related, so please don’t just take this for granted. Think about what is important in your specific case.
The easiest way to define desired values for metrics is to use your competitors’ numbers, or even your own. Also, from time to time, tools such as the Performance Budget Calculator may come in handy; just play around with it a little.
If you ever happened to run away from an ecstatically overexcited bear, then you already know that you don’t need to be an Olympic champion in running to get out of this trouble. You just need to be a little bit faster than the other guy.
So make a competitors list. If these are projects of the same type, then they usually consist of page types similar to each other. For example, for an internet shop, it may be a page with a product list, product details page, shopping cart, checkout, and so on.
Measure the values of your selected metrics on each type of page for your competitor’s projects;
Measure the same metrics on your project;
Find the value closest to (but better than) yours for each metric in the competitors’ projects. Add 20% to it and set that as your next goal.
Do you have a unique project? Don’t have any competitors? Or are you already better than any of them in all possible senses? It’s not an issue. You can always compete with the only worthy opponent: yourself. Measure each performance metric of your project on each type of page and then improve them by the same 20%.
Synthetic Tests
There are two ways of measuring performance:
Synthetic: data is collected in a controlled environment;
RUM (Real User Measurements): data is collected from real users in production.
In this article, we will use synthetic tests and assume that our project uses GitLab with its built-in CI for project deployment.
Library And Its Size As A Metric
Let’s assume that you’ve decided to develop a library and publish it to NPM. You want to keep it light — much lighter than competitors — so it has less impact on the resulting project’s end size. This saves clients traffic; sometimes traffic the client is paying for. It also allows the project to load faster, which is pretty important given the growing mobile share and new markets with slow connection speeds and fragmented internet coverage.
Package For Measuring Library Size
To keep the size of the library as small as possible, we need to carefully watch how it changes over development time. But how can you do that? Well, we can use the Size Limit package created by Andrey Sitnik from Evil Martians.
The "size-limit":[{},{},…] block contains a list of the size of the files of which we want to check. In our case, it’s just one single file: index.js.
NPM script size just runs the size-limit package, which reads the configuration block size-limit mentioned before and checks the size of the files listed there. Let’s run it and see what happens:
npm run size
We can see the size of the file, but this size is not actually under control yet. Let’s fix that by adding a limit to package.json:
Now if we run the script it will be validated against the limit we set.
In the case that new development changes the file size to the point of exceeding the defined limit, the script will complete with non-zero code. This, aside from other things, means that it will stop the pipeline in the GitLab CI.
Now we can use a Git hook to check the file size against the limit before every commit. We can even use the husky package to set this up in a nice and simple way.
Now, the npm run size command will run automatically before each commit, and if it exits with a non-zero code, the commit will never happen.
But there are many ways to skip hooks (intentionally or even by accident), so we shouldn’t rely on them too much.
Also, it’s important to note that we shouldn’t make this check blocking. Why? Because it’s okay for the size of the library to grow while you are adding new features. We need to make the changes visible, that’s all. This helps avoid an accidental size increase caused by introducing a helper library that we don’t need. And, perhaps, it gives developers and product owners a reason to consider whether the feature being added is worth the size increase, or whether there are smaller alternative packages. Bundlephobia allows us to find an alternative for almost any NPM package.
So what should we do? Let’s show the change in the file size directly in the merge request! But you don’t push to master directly; you act like a grown-up developer, right?
Running Our Check On GitLab CI
Let’s add a GitLab artifact of the metrics type. An artifact is a file that “lives” after the pipeline operation is finished. This specific type of artifact allows us to show an additional widget in the merge request, displaying the change in the metric’s value between the artifact in master and the one in the feature branch. The metrics artifact uses the text-based Prometheus format. To GitLab, the values inside the artifact are just text: GitLab doesn’t understand what exactly has changed in the value, it just knows that the value is different. So, what exactly should we do?
Define artifacts in the pipeline.
Change the script so that it creates an artifact on the pipeline.
To create an artifact we need to change .gitlab-ci.yml this way:
Because we have used the post prefix, running npm run size will execute the size script first and then, automatically, the postsize script, which results in the creation of the metric.txt file, our artifact.
As a result, when we merge this branch to master, change something and create a new merge request, we will see the following:
In the widget that appears on the page, we first see the name of the metric (size), followed by the value of the metric in the feature branch, and then the value in master within the round brackets.
Now we can actually see how the size of the package changes and make a reasonable decision about whether or not we should merge it.
OK! So, we’ve figured out how to handle the trivial case. If you have multiple files, just separate the metrics with line breaks. As an alternative to Size Limit, you may consider bundlesize. If you are using webpack, you may get all the sizes you need by building with the --profile and --json flags:
webpack --profile --json > stats.json
If you are using next.js, you can use the @next/bundle-analyzer plugin. It’s up to you!
Using Lighthouse
Lighthouse is the de facto standard in project analytics. Let’s write a script that measures performance, accessibility, best practices, and SEO scores.
Script To Measure All The Stuff
To start, we need to install the lighthouse package, which will make the measurements. We also need to install puppeteer, which we will be using as a headless browser.
npm i -D lighthouse puppeteer
Next, let’s create a lighthouse.js script and start our browser:
Great! We now have a function that accepts the browser object as an argument and returns a function that accepts a URL as an argument and generates a report after passing that URL to Lighthouse.
We are passing the following arguments to the lighthouse:
The address we want to analyze;
lighthouse options, in particular the browser port and output (the output format of the report);
report configuration and lighthouse:full (all we can measure). For more precise configuration, check the documentation.
Wonderful! We now have our report. But what can we do with it? Well, we can check the metrics against our limits and exit the script with a non-zero code, which will stop the pipeline:

if (report.categories.performance.score < 0.8) process.exit(1); // the 0.8 threshold is an example value
But what if we just want to make performance visible without blocking anything? Then let’s adopt another artifact type: the GitLab performance artifact.
GitLab Performance Artifact
In order to understand this artifact’s format, we have to read the code of the sitespeed.io plugin. (Why can’t GitLab describe the format of their artifacts in their own documentation? Mystery.)
A measurement is an object that contains the following attributes:
name
Measurement name, e.g. it may be Time to first byte or Time to interactive.
value
Numeric measurement result.
desiredSize
Set to smaller if the target value should be as small as possible, e.g. for the Time to interactive metric, and to larger if it should be as large as possible, e.g. for the Lighthouse Performance score.
{
  "name": "Time to first byte (ms)",
  "value": 240,
  "desiredSize": "smaller"
}
Let’s modify our buildReport function so that it returns a report for one page with standard Lighthouse metrics.
Now that we have a function that generates a report, let’s apply it to each type of page in the project. First, I need to state that process.env.DOMAIN should contain your staging domain (to which you need to deploy your project from a feature branch beforehand).
Note: At this point, you may want to interrupt me and scream in vain, “Why are you taking up my time — you can’t even use Promise.all properly!” In my defense, I dare say that it is not recommended to run more than one Lighthouse instance at the same time, because this adversely affects the accuracy of the measurement results. Also, if you do not show due ingenuity, it will lead to an exception.
Use Of Multiple Processes
Are you still into parallel measurements? Fine, you may want to use the node cluster module (or even Worker Threads if you like playing bold), but it only makes sense to discuss this if your pipeline runs in an environment with multiple available cores. And even then, keep in mind that because of the nature of Node.js, a full-weight Node.js instance is spawned in each process fork (instead of reusing the same one), which leads to growing RAM consumption. All of this means that it will be more costly, because of the growing hardware requirements, and only a little bit faster. It may appear that the game is not worth the candle.
If you want to take that risk, you will need to:
Split the URL array into chunks according to the number of cores;
Fork the process once per core;
Transfer parts of the array to the forks and then retrieve the generated reports.
To split an array, you can use multiple approaches. The following code, written in just a couple of minutes, wouldn’t be any worse than the others:
/**
 * Returns the URLs array split into chunks according to the number of cores
 *
 * @param urls {String[]} — URLs array
 * @param cores {Number} — count of available cores
 * @return {Array} — URLs array split into chunks
 */
function chunkArray(urls, cores) {
  const chunks = [...Array(cores)].map(() => []);
  let index = 0;
  urls.forEach((url) => {
    if (index > (chunks.length - 1)) {
      index = 0;
    }
    chunks[index].push(url);
    index += 1;
  });
  return chunks;
}
Let’s create forks according to the number of cores:

// Adding the package that allows us to use cluster
const cluster = require('cluster');
// And find out how many cores are available. Both modules are built into Node.js.
const numCPUs = require('os').cpus().length;

(async () => {
  if (cluster.isMaster) {
    // Parent process
    const chunks = chunkArray(urls, numCPUs);
    chunks.forEach(chunk => {
      // Creating child processes
      const worker = cluster.fork();
    });
  } else {
    // Child process
  }
})();
Let’s transfer the chunks to the child processes and retrieve the reports back:
(async () => {
  if (cluster.isMaster) {
    // Parent process
    const chunks = chunkArray(urls, numCPUs);
    chunks.forEach(chunk => {
      const worker = cluster.fork();
+     // Send a message with the URLs array to the child process
+     worker.send(chunk);
    });
  } else {
    // Child process
+   // Receiving the message from the parent process
+   process.on('message', async (urls) => {
+     const browser = await puppeteer.launch({
+       args: ['--no-sandbox', '--disable-setuid-sandbox', '--headless'],
+     });
+     const builder = buildReport(browser);
+     const report = [];
+     for (let url of urls) {
+       // Generating a report for each URL
+       const metrics = await builder(url);
+       report.push(metrics);
+     }
+     // Send the array of reports back to the parent process
+     cluster.worker.send(report);
+     await browser.close();
+   });
  }
})();
And, finally, let’s reassemble the reports into one array and generate the artifact.
Well, we’ve parallelized the measurements, which increased Lighthouse’s already unfortunately large measurement error. But how do we reduce it? Well, we make a few measurements and calculate the average.
To do so, we will write a small reducer that merges the measurement results of a page; dividing the merged values by the number of measurements then gives us the average.
// Count of measurements we want to make
const MEASURES_COUNT = 3;

/**
 * Reducer that sums up the metric values of all measurements of a page.
 * Divide by MEASURES_COUNT afterwards to get the average values.
 * @param pages {Object} — accumulator
 * @param page {Object} — page measurement
 * @return {Object} — page with summed metric values
 */
const mergeMetrics = (pages, page) => {
  if (!pages) return page;
  return {
    subject: pages.subject,
    metrics: pages.metrics.map((measure, index) => ({
      ...measure,
      value: measure.value + page.metrics[index].value,
    })),
  };
};
Then, let’s change our code to use it:
process.on('message', async (urls) => {
  const browser = await puppeteer.launch({
    args: ['--no-sandbox', '--disable-setuid-sandbox', '--headless'],
  });
  const builder = buildReport(browser);
  const report = [];
  for (let url of urls) {
+   // Let's measure MEASURES_COUNT times and calculate the average
+   const measures = [];
+   let index = MEASURES_COUNT;
+   while (index--) {
      const metric = await builder(url);
+     measures.push(metric);
+   }
+   const summed = measures.reduce(mergeMetrics);
+   const measure = {
+     ...summed,
+     metrics: summed.metrics.map((m) => ({
+       ...m,
+       value: +(m.value / MEASURES_COUNT).toFixed(2),
+     })),
+   };
    report.push(measure);
  }
  cluster.worker.send(report);
  await browser.close();
});
}
First, create a configuration file named .gitlab-ci.yml:
image: node:latest

stages:
  # You need to deploy the project to staging and put the staging domain name
  # into the environment variable DOMAIN. But this is beyond the scope of this article,
  # primarily because it is very dependent on your specific project.
  # - deploy
  - performance

lighthouse:
  stage: performance
  before_script:
    - apt-get update
    - apt-get -y install gconf-service libasound2 libatk1.0-0 libatk-bridge2.0-0 libc6
      libcairo2 libcups2 libdbus-1-3 libexpat1 libfontconfig1 libgcc1 libgconf-2-4
      libgdk-pixbuf2.0-0 libglib2.0-0 libgtk-3-0 libnspr4 libpango-1.0-0 libpangocairo-1.0-0
      libstdc++6 libx11-6 libx11-xcb1 libxcb1 libxcomposite1 libxcursor1 libxdamage1 libxext6
      libxfixes3 libxi6 libxrandr2 libxrender1 libxss1 libxtst6 ca-certificates fonts-liberation
      libappindicator1 libnss3 lsb-release xdg-utils wget
    - npm ci
  script:
    - node lighthouse.js
  artifacts:
    expire_in: 7 days
    paths:
      - performance.json
    reports:
      performance: performance.json
The numerous installed packages are needed for puppeteer. As an alternative, you may consider using docker. Aside from that, note that we set the type of the artifact as performance. And, as soon as both the master and feature branches have it, you will see a widget like this in the merge request:
It may seem that we’re finally there, but no, not yet. If you are using the paid GitLab version, then artifacts with the report types metrics and performance are only available in plans starting from premium and silver, which cost $19 per month per user. Also, you can’t just buy a specific feature you need — you can only change the plan. Sorry. So what can we do? Unlike GitHub with its Checks API and Status API, GitLab doesn’t allow you to create an actual widget in the merge request yourself, and there is no hope of getting that ability anytime soon.
One way to check whether you actually have support for these features is to search for the environment variable GITLAB_FEATURES in the pipeline. If the list lacks merge_request_performance_metrics and metrics_reports, then these features are not supported.
If there is no support, we need to come up with something. For example, we may add a comment to the merge request: a comment with a table containing all the data we need. We can leave our code untouched; artifacts will still be created, but the widgets will always show the message “metrics are unchanged”. This is very strange and non-obvious behavior; I had to think carefully to understand what was happening.
So, what’s the plan?
Read the artifact from the master branch;
Create a comment in the markdown format;
Get the identifier of the merge request from the current feature branch to the master;
Add the comment.
How To Read Artifact From The Master Branch
If we want to show how performance metrics changed between the master and feature branches, we need to read the artifact from master. To do so, we will use fetch.
npm i -S isomorphic-fetch
// You can use predefined CI environment variables
// @see https://gitlab.com/help/ci/variables/predefined_variables.md
// We need fetch polyfill for node.js
const fetch = require('isomorphic-fetch');
// GitLab domain
const GITLAB_DOMAIN = process.env.CI_SERVER_HOST || process.env.GITLAB_DOMAIN || 'gitlab.com';
// User or organization name
const NAME_SPACE = process.env.CI_PROJECT_NAMESPACE || process.env.PROJECT_NAMESPACE || 'silentimp';
// Repo name
const PROJECT = process.env.CI_PROJECT_NAME || process.env.PROJECT_NAME || 'lighthouse-comments';
// Name of the job, which create an artifact
const JOB_NAME = process.env.CI_JOB_NAME || process.env.JOB_NAME || 'lighthouse';
/**
 * Returns an artifact
 *
 * @param name {String} - artifact file name
 * @return {Object} - object with the performance artifact
 * @throw {Error} - throws an error if the artifact contains a string that can't be parsed as JSON, or in case of fetch errors
 */
const getArtifact = async name => {
  const response = await fetch(`https://${GITLAB_DOMAIN}/${NAME_SPACE}/${PROJECT}/-/jobs/artifacts/master/raw/${name}?job=${JOB_NAME}`);
  if (!response.ok) throw new Error('Artifact not found');
  const data = await response.json();
  return data;
};
Creating A Comment Text
We need to build the comment text in markdown format. Let’s create some service functions that will help us:
You will need a token to work with the GitLab API. To generate one, open GitLab, log in, open the ‘Settings’ option of the menu, and then open ‘Access Tokens’ on the left side of the navigation menu. You should then see the form that allows you to generate a token.
Also, you will need an ID of the project. You can find it in the repository ‘Settings’ (in the submenu ‘General’):
// You can set environment variables via CI/CD UI.
// @see https://gitlab.com/help/ci/variables/README#variables
// I have set GITLAB_TOKEN this way
// ID of the project
const GITLAB_PROJECT_ID = process.env.CI_PROJECT_ID || '18090019';
// Token
const TOKEN = process.env.GITLAB_TOKEN;
/**
* Returns iid of the merge request from feature branch to master
* @param from {String} — name of the feature branch
* @param to {String} — name of the master branch
* @return {Number} — iid of the merge request
*/
const getMRID = async (from, to) => {
  const response = await fetch(`https://${GITLAB_DOMAIN}/api/v4/projects/${GITLAB_PROJECT_ID}/merge_requests?target_branch=${to}&source_branch=${from}`, {
    method: 'GET',
    headers: {
      'PRIVATE-TOKEN': TOKEN,
    },
  });
  if (!response.ok) throw new Error('Merge request not found');
  const [{ iid }] = await response.json();
  return iid;
};
We need to get a feature branch name. You may use the environment variable CI_COMMIT_REF_SLUG inside the pipeline. Outside of the pipeline, you can use the current-git-branch package. Also, you will need to form a message body.
Let’s install the two packages we need for this, form-data and current-git-branch:
const FormData = require('form-data');
const branchName = require('current-git-branch');
// The branch from which we are making the merge request.
// In the pipeline we have the environment variable `CI_COMMIT_REF_NAME`,
// which contains the name of this branch. The `branchName` function
// would return something like a "HEAD detached" message in the pipeline,
// so we only use it outside of the pipeline.
const CURRENT_BRANCH = process.env.CI_COMMIT_REF_NAME || branchName();

// Merge request target branch; usually it's master
const DEFAULT_BRANCH = process.env.CI_DEFAULT_BRANCH || 'master';
/**
 * Adds a comment to the merge request
 * @param md {String} — markdown text of the comment
 */
const addComment = async md => {
  const iid = await getMRID(CURRENT_BRANCH, DEFAULT_BRANCH);
  const commentPath = `https://${GITLAB_DOMAIN}/api/v4/projects/${GITLAB_PROJECT_ID}/merge_requests/${iid}/notes`;
  const body = new FormData();
  body.append('body', md);
  await fetch(commentPath, {
    method: 'POST',
    headers: {
      'PRIVATE-TOKEN': TOKEN,
    },
    body,
  });
};
Comments are much less visible than widgets, but they’re still much better than nothing. This way, we can visualize performance even without artifacts.
Authentication
OK, but what about authentication? The performance of pages that require authentication is also important. It’s easy: we simply log in. puppeteer is essentially a fully-fledged browser, and we can write scripts that mimic user actions:
Before checking a page that requires authentication, we may just run this script. Done.
Summary
This is how I built the performance monitoring system at Werkspot — the company I currently work for. It’s great when you have the opportunity to experiment with bleeding-edge technology.
Now you also know how to visualize performance changes, which is sure to help you track performance degradation. But what comes next? You can save the data and visualize it over a period of time to better understand the big picture, and you can start collecting performance data directly from users.
You may also check out a great talk on this subject: “Measuring Real User Performance In The Browser.” When you build a system that collects performance data and visualizes it, it will help you find your performance bottlenecks and resolve them. Good luck with that!
The modern marketing landscape has changed drastically over the last few years, moving from static marketing in magazines and on billboards to an online frenzy of emails, posts, and expensive AdWords campaigns.
A common issue for small startups is that they don’t have the budget to compete with big companies on the social marketing scene. We explore four creative ways to increase business with a $0 marketing budget.
Facebook & Instagram
There’s a reason why we’ve put these two social powerhouses together. First, let’s talk about Facebook – it’s huge, it’s free, and if your business doesn’t have a page, you’re dead in the water. It’s the best way to communicate information about your business to a captive audience.
Your Facebook profile has information on your business hours, contact details, and your products. Facebook links to Instagram and the benefit of this is that when you post on Instagram, you can have the same post appear on your Facebook page, saving you time.
Instagram is a graphic platform, so pictures or photos are what get people’s attention here. Take a pic of your products, and if you offer a service, then create a service-related photo to share. You can bounce between the two platforms using graphics on Instagram and informational posts on Facebook.
Content
Some marketers say the website is a dying tool, but we beg to differ. The appeal may have lost a little of its initial spark, but sites are still a reliable fallback for clients looking for information. Search engines have powerful features that can help clients find which company they want and in what area, a function that social media has not yet perfected.
Your website is your online portfolio, and you have full control over its content. Fill it with relevant information that would interest your target market. Think outside the box here: don’t limit your website to only explaining what services you offer; expand the information to encompass complementary topics. You can also create a free site to begin with.
For example, if you sell stationery, don’t just list your products, because let’s face it, stationery in itself is unexciting. Place content on your website surrounding advances in office technology, improving the work environment, office workers’ mental health, etc.
There’s so much information that’s indirectly related to the stationery industry that you’ll find customers drawn to the content who end up purchasing products.
Videos
YouTube is a fantastic platform for businesses to build their exposure. Product reveals and instructional videos can educate customers and can be linked to Facebook. The video links can also be embedded in direct email campaigns.
What’s magical about YouTube is that users who are interested in a particular genre or subject are prompted to watch other videos within the same interest spectrum, which in a way, is free push marketing for you.
Be sure to create a video that’s not too specific. If you’re selling car tires, don’t just make a video on the particular brand of tire, but also include general information such as how to prolong the life of the vehicle, save money on maintenance, etc.
Email
Your Facebook page and website are a portal to a captive audience. Use the platforms to gather your customers’ information and email them often with new product launches and ways they can improve their lives.
Remember, it’s not only about pushing your products and services; you also need to drive brand awareness. That sometimes means not directly promoting your company, but sending out general information that will improve customers’ lives while keeping your company name in their minds.
Conclusion
The biggest mistake that many small businesses make is not taking the time to create a great social media content strategy. They have a website and a Facebook page but post rarely and don’t update their content. Use email and video to reinforce your exposure and appeal to more people.
Any marketing campaign needs to be executed consistently to have an effect. Create a plan, implement it with enthusiasm, and your business will be sure to make its mark.