The following is a guest post by Jan Östlund. Jan is the creator of GitFTP-Deploy, now at v2.0. He wrote to me about GitFTP-Deploy, and I thought it was pretty cool. As you’ll learn in this post, it’s macOS Git-based deployment software. I think it’s worth knowing about as it sits nicely between free (and usually a bit more complex) roll-your-own solutions, and solutions with monthly or yearly subscription costs. GitFTP-Deploy is a flat cost. This is slightly tricky territory, as it’s advertorial in nature. So full disclosure: Jan didn’t pay for this, but I opted to use affiliate links.
Let’s set the scene. Say you are a web freelancer and are almost finished with a client’s new website. Over the years, you have learned the hard way not to edit the files directly over FTP. It’s too easy to make breaking changes with no record of what changed and who did what. Nowadays you use Git to manage versions of the files.
Why use version control?
There are many benefits of using a version control system for your projects. Even if you’re a very organized person, you still might get confused with a naming system like `index-2017-01-12.html` or `header_image_final_final_v2.svg`. Is it really final? How do you know what exactly is different between these versions and the last?
A version control system (VCS, like Git) enforces that there is only one version of your files at any given time. All past versions of files are neatly packed up inside the VCS. When you need to, you can request any version at any time, and you’ll have a snapshot of the complete project at hand.
Every time you save a new version of your project, your VCS requires you to provide a short description of the changes. Additionally (if it’s a code/text file), you can see exactly what has been modified in the file’s content. The VCS helps you understand how your project evolved between versions.
Deployment / Uploading
As useful as a VCS is, it doesn’t directly help with uploading files to a live website. (We’ll refer to that as deployment.)
Deploying files can be very easy. Use an FTP client (e.g. Transmit) to upload files via FTP or SFTP straight to your server. The initial release of a site is especially easy: just upload all the files.
When you make changes to a site, you also need to upload files. But… which ones? Do you always remember which files you have changed? If your panicked client calls to tell you that the site is broken, do you know what changed the last few times you uploaded files?
If you are using Git, it’s easy to see. But still, Git doesn’t do deployment, and manually checking which files have changed and moving those is error prone and tedious. You still need a smart way to upload and deploy your changed files only.
So what other options do you have for deploying files? One option is installing Git on the server as well. Then, just as you push and pull from your Git repository locally, you can pull from that repository on the server so it picks up the latest changes. This isn’t an option for everyone, though. It requires shell access to the server, and that just isn’t possible on many shared hosting solutions.
Another possibility is to rely on third-party cloud services like DeployBot, Springloops, or Beanstalk. None of these choices are bad, but there are potential downsides:
There are monthly or annual costs to these services, whether you are actively using them at the moment or not.
Setting up external Git repositories may take some time and can be complicated.
There is also the increased risk of relying on a third party service. The deployment service can be down at the moment you want to deploy.
The speed of the deployment is dependent on that service. There may be a long queue of other deployments before yours.
A Look at GitFTP-Deploy
Let’s take a look at my alternative: GitFTP-Deploy. GitFTP-Deploy is a native macOS app that only uploads the files that have changed since the last deployment (over SFTP, FTP, or FTPS). You don’t have to remember which files you have changed, added, or deleted.
Since GitFTP-Deploy reads what has changed from your local Git repository, you also get in the habit of using Git for your project: once a file is committed, it’s also ready for deployment.
Another feature that can ease your deployments is GitFTP-Deploy’s ability to run pre- and post-deployment scripts. Are you using a JavaScript compiler (like Babel) or CSS preprocessor (like Stylus)? Are you concatenating and compressing assets? Running these tasks can be made automatic by the app.
Other times it’s a real time-saver just to commit the latest changes and have them automatically pushed to the server, all with a single Git commit.
Getting started
In less than two minutes you can start to deploy your files.
1) Create a new site
2) Point to your local repository and select which commit you want to start deploying from
3) Set up your server connection
4) Click Deploy
Maybe your workflow is a bit more complicated and you need more power?
You can specify scripts to be run both locally and on your server, before and after uploads. For example, before uploading, you may want to run your favorite JavaScript compiler or a Gulp script that concatenates and optimizes your JavaScript files for production.
Or perhaps you are using a workflow with another task runner like Grunt? Grunt can also be configured to help you with optimizing images, compressing scripts, compiling preprocessors, and countless other things.
The ability to run tasks can be quite powerful. For example, even WordPress has WP-CLI, meaning you could script out things like database syncing and settings updates with your deployment.
Other Notes on Usage
If you prefer not to have build files under version control, you can add the build folder to “always upload”.
Using GitFTP-Deploy does not mean that you cannot use GitHub or another third-party hosted Git repository service. Just make sure to pull the changes from there before deploying.
Team Usage
While GitFTP-Deploy is not exactly built for teams, you can still use it. The easiest way is that one person handles the deployments. Another, more advanced option is to run GitFTP-Deploy on a server.
This may not work for large teams where commits come from many different individuals. However, GitFTP-Deploy will attempt to check which branch and commit was last deployed.
2016 was a difficult year for typography, and 2017 does not promise to be much easier. Throughout the previous year, this integral element of UI underwent changes and was constantly subject to trends. At first, there were designs with huge, bold, almost overwhelming fonts, hand-written scripts, and regular typefaces used in shared spaces. Then retro typefaces, vertical lettering in primary navigation, hollow symbols, and masking applied to titles took their place. This year is also going to be pretty busy, since experiments with typefaces are carrying on.
With all this hype around typography, it is always a good thing to have one or two good-looking fonts at your fingertips. All the more so because, regardless of the situation, typography still plays a vital role in conveying the message to the user and reflecting the general atmosphere of the interface. It will always be in the designer’s arsenal of essential instruments. It is one of those tiny details that can easily upset all the harmony and destroy the first impression, or just as easily save the day and set your project apart from the others. So it should be treated with respect.
Today we want to remind you of a hundred beautiful, carefully optimized, and vigilantly crafted typefaces that were popular in 2016. Although all of them are free, not all of them can be used in commercial projects. Please be careful when applying them in your future work.
When you examine the most successful interaction designs of recent years, the clear winners are those that provide excellent functionality. While the functional aspect of a design is key to product success, aesthetics and visual details are equally important, particularly how they can improve those functional elements.
In today’s article, I’ll explain how visual elements, such as shadows and blur effects, can improve the functional elements of a design.
How to Add a Responsive Lightbox in Adobe Muse. No Coding Skills Required.
A Lightbox is a great way to have your website visitors focus their attention on a specific item on your web page. With a Lightbox, the website sits in the background while a specific item is emphasized in the center of the page. An opaque or solid overlay covers the website so it is not seen in the background while an element is focused on. Adobe Muse has a few Lightbox features, but one of the drawbacks of the built-in Lightbox is that it is not responsive. That is why I decided to create “The Lightbox” widget. With this widget you can add a YouTube video, Vimeo video, image, Google Map, or website to the Lightbox. As the user resizes the browser, the Lightbox changes size as well to fit all devices. You can also set the opacity of the overlay in the Lightbox to be more solid or more see-through, and you can choose a solid color or a gradient for the Lightbox overlay.
Features Include:
Set the background color and opacity of the Lightbox background.
Add a gradient for the Lightbox background.
Link a YouTube video to the Lightbox.
Link a Vimeo video to the Lightbox.
Link a website to the Lightbox.
Link an image to the Lightbox.
Link a Google Map to the Lightbox.
Link a Muse For You Hover Box to the Lightbox.
Here are the steps to add “The Lightbox” widget:
1. Install The Lightbox widget by clicking the .mulib file inside of the widget folder. This will install directly into the Adobe Muse library panel.
2. Drag and drop “The Lightbox – Add First” widget from the library panel onto the Adobe Muse website. If you would like a gradient background in the Lightbox, add “The Lightbox – Add First – With Gradient” instead. If you do not see the library panel go to Window > Library. From the “Add First” widget you can style the color and opacity of the Lightbox overlay.
3. Next drag and drop “The Lightbox Widget” onto your Adobe Muse website. You will notice there is an option that says “Graphic Style Name.” This is the graphic style name you will want to apply to the element that opens the Lightbox when clicking on it. To assign the graphic style name open the graphic styles panel. If you do not see the graphic styles panel go to Window > Graphic Styles. Then assign the graphic style name that is in the widget to the element you would like to open the Lightbox.
4. Afterwards link the element in Adobe Muse to a YouTube video, Vimeo Video, Image, Google Map, or website via the built-in “Hyperlinks” section in Adobe Muse.
Perhaps you’ve heard of HTTP/2? It’s not just an idea, it’s a real technology, and slowly but surely, hosting companies and CDN services have been rolling it out to their servers. Much has been said about the benefits of using HTTP/2 instead of HTTP/1.x, but the proof of the pudding is in the eating.
Today we’re going to run a few real-world tests, take some timings, and see what results we can extract out of all this.
Why HTTP/2?
If you haven’t read about HTTP/2, may I suggest you have a look at a few articles. There’s the HTTP/2 FAQ, which gives you all the nitty-gritty technical details, whilst I’ve also written a few articles about HTTP/2 myself where I try to tone down the tech and focus mostly on the why and the how of HTTP/2.
In a nutshell, HTTP/2 has been released to address the inherent problems of HTTP/1.x:
HTTP/2 is binary instead of textual like HTTP/1.x – this makes the transfer and parsing of data over HTTP/2 inherently more machine-friendly, and thus faster, more efficient, and less error-prone.
HTTP/2 is fully multiplexed, allowing multiple files and requests to be transferred at the same time, as opposed to HTTP/1.x, which only accepted a single request per connection at a time.
HTTP/2 uses the same connection for transferring different files and requests, avoiding the heavy operation of opening a new connection for every file which needs to be transferred between a client and a server.
HTTP/2 has header compression built in, which is another way of removing some of the overhead HTTP/1.x incurs when retrieving several different resources from the same or multiple web servers.
HTTP/2 allows servers to push required resources proactively rather than waiting for the client browser to request files when it thinks it needs them.
These points are a simplistic but fair depiction of how HTTP/2 is better than HTTP/1.x. Rather than the browser having to go back to the server to fetch every single resource, it picks up all the resources and transfers them at once.
A semi-scientific test of HTTP/2 performance
Theory is great, but it’s more convincing if we can see some real data and real performance improvements of HTTP/2 over HTTP/1.x. We’re going to run a few tests to determine whether we see a marked improvement in performance.
Why are we calling this a semi-scientific test?
If this were a lab, or even a development environment where we wanted to demonstrate exact results, we’d eliminate all variables and just test the performance of the same HTML content, once over HTTP/1.x and once over HTTP/2.
Yet most of us don’t live in a development environment. Our web applications and sites operate in the real world, in environments where fluctuations occur for all sorts of valid reasons. So while lab testing is great and is definitely required, for this test we’re going out into the real world, running some tests on a (simulated) real website, and comparing performance.
We’re going to be using a default one-page Bootstrap template (Zebre) for several reasons:
It’s a very real-world example of what a modern website looks like today
It’s got quite a varied set of resources which are typical of sites today and which would typically go through a number of optimizations for performance under HTTP/1.x circumstances:
25 images
6 JS scripts
7 CSS files
It’s based on WordPress, so we’ll be able to perform a number of HTTP/1.x-based optimizations to push its performance as far as it can go
It was given out for free in January by ThemeForest. This was great timing: what better real-world test than using a premium theme by an elite author on ThemeForest?
We’ll be running these tests on a brand new account powered by Kinsta managed WordPress hosting, which we’ve discovered recently and whose performance we find really great. We do this because we want to avoid the stressed environments of shared hosting accounts. To reduce the external influence of other sites operating on the same account at the same time, this environment will be used solely for the purpose of this test.
We ran the tests on the lowest plan because we just need to test a single WordPress site. In reality, unlike most hosting services, there is no difference in speed/performance of the plans. The larger plans just have the capacity for more sites. We then set up one of the domains we hoard (iwantovisit.com) and installed WordPress on it.
We’ve also chosen to run these tests on WordPress.
The reason for that is convenience more than anything else. Doing all of these tests with hand-written HTML would require quite a lot of time to complete. We’d rather use that time to do more extensive and constructive tests.
Using WordPress, we can enable such plugins as:
A caching plugin (to remove generation time discrepancies as much as possible)
A combination and minification plugin to perform optimizations based on HTTP/1.x
A CDN plugin to easily integrate with a CDN for the HTTP/2-over-CDN tests
We set up the Zebre theme and installed several plugins. Once again, this makes the test very realistic. You’re hardly going to find any WordPress sites without a bunch of plugins installed. We installed the following:
We also imported the Zebre theme demo data to have a nicely populated theme with plenty of images, making this site an ideal candidate for HTTP/2 testing.
The final thing we did was make sure there was page caching in place. We just want to make sure we’re not suffering from drastic fluctuations due to page generation times. The great thing is that with Kinsta there’s no need for any kind of caching plugin, as page caching is fully built into the service at the server level.
The final page looked a little like this:
And this is below the fold:
We’re ready for the first tests.
Test 1 – HTTP/1.x – caching but no other optimizations
Let’s start running some tests to make sure we have a good test bed and get some baseline results.
We’re running these tests with only WordPress caching – no other optimizations.
Testing Site | Location | Page Load Time | Total Page Size | Requests
GTMetrix | Vancouver | 3.3s | 7.3 MB | 82
Pingdom Tools | New York | 1.25s | 7.3 MB | 82
There’s clearly something fishy going on. The load times are much too different. Oh yes: Google Cloud Platform’s central US servers are located in Iowa, making Pingdom Tools’ New York test location much closer than Vancouver and skewing the results in favor of New York.
You probably know that if you want to improve the performance of your site, there is one very simple solution: host your site or application as physically close as possible to the location of your visitors. That’s the same concept CDNs use to boost performance. The closer the visitors to the server location of the site, the better the loading time.
For that reason, we’re going to run two types of tests. One will have the test location physically close to the hosting location. For the other, we’re going to amplify the problem of distance: we’ll perform a transatlantic trip with our testing, from the US to Europe, and see whether the HTTP/2 optimizations result in better performance or not.
Let’s try to find a similar testing location on both test services. Dallas, Texas is a common testing ground, so we’ll use that for the physically close location. For the second location, we’re going to use London and Stockholm, since there isn’t a shared European location.
Testing Site | Location | Page Load Time | Total Page Size | Requests
Pingdom Tools | Dallas | 2.15s | 7.3 MB | 82
That’s better. Let’s run another couple of tests.
Testing Site | Location | Page Load Time | Total Page Size | Requests
GTMetrix | Dallas | 1.6s | 7.3 MB | 83
Pingdom Tools | Dallas | 1.74s | 7.3 MB | 82
GTMetrix | London | 2.6s | 7.3 MB | 82
Pingdom Tools | Stockholm | 2.4s | 7.3 MB | 82
You might notice there are a few fluctuations in the requests. We believe these are coming from external scripts being called, which sometimes differ in the number of requests they generate. In fact, although the loading times seem to vary by about a second, by taking a look at the waterfall graph, we can see that the assets on the site are delivered pretty consistently. It’s the external assets (specifically: fonts) which fluctuate widely.
We can also clearly see how distance affects the loading time significantly, by about a second.
Before we continue, you’ll also notice that our speed optimization score is miserable. That’s why for our second round of tests we’re going to perform a number of speed optimizations.
Test 2 – HTTP/1.x with performance optimizations and caching
Now, given that we know that HTTP/1.x is very inefficient in its handling of requests, we’re going to do a round of performance optimizations.
We’re going to install HummingBird from WPMUDEV on the WordPress installation. This is a plugin which handles page load optimizations without caching. Exactly what we need.
We’ll be enabling most of the optimizations which focus on reducing requests and combining files as much as possible.
Minification of CSS and JS files
Combining of CSS and JS files
Enabling of GZIP compression
Enabling of browser caching
We’re not going to optimize the images because this would totally skew the results.
As you can see below, following our optimization, we have a near perfect score for everything except images. We’re going to leave the images unoptimized on purpose so that we retain their large size and have a good “load” to carry.
Let’s flush the caches and perform a second run of tests. Immediately we can see a drastic improvement.
Never mind the C on YSlow. It’s because we’re not using a CDN and some of the external resources (the fonts) cannot be browser cached.
Testing Site | Location | Page Load Time | Total Page Size | Requests
GTMetrix | Dallas | 1.9s | 7.25 MB | 56
Pingdom Tools | Dallas | 1.6s | 7.2 MB | 56
GTMetrix | London | 2.7s | 7.25 MB | 56
Pingdom Tools | Stockholm | 2.28s | 7.3 MB | 56
We can see quite a nice improvement on the site. Next up, we’re going to enable HTTPS on the site. This is a prerequisite for setting up HTTP/2.
Test 3 – HTTP/2 – caching but no other optimizations
We’ll be using the Let’s Encrypt functionality to create a free SSL certificate. This is built into Kinsta, which means setting up HTTPS should be pretty straightforward.
This plugin checks whether a secure certificate for the domain exists on your server; if it does, it forces HTTPS across your WordPress site. Really and truly, this plugin makes implementing HTTPS on your site a breeze. If you’re performing a migration from HTTP to HTTPS, do not forget to set up a full 301 redirection from HTTP to HTTPS, so that you don’t lose any traffic or search engine rankings whilst forcing HTTPS on your site.
Once we’ve fully enabled and tested HTTPS on our website, you might need to do a little magic to start serving resources over HTTP/2, although most servers today will switch you directly to HTTP/2 if you are running an SSL site.
Kinsta runs on Nginx, and enables HTTP/2 by default on SSL sites, so enabling SSL is enough to switch the whole site to HTTP/2.
Once we’ve performed the configuration, our site should now be served over HTTP/2. To confirm that the site is running on HTTP/2, we’ve installed this nifty Chrome extension which checks which protocols are supported by our site.
Once we’ve confirmed that HTTP/2 is up and running nicely on the site, we can run another batch of tests.
Testing Site | Location | Page Load Time | Total Page Size | Requests
GTMetrix | Dallas | 2.7s | 7.24 MB | 82
Pingdom Tools* | Dallas | 2.04s | 7.3 MB | 82
GTMetrix | London | 2.4s | 7.24 MB | 82
Pingdom Tools* | Stockholm | 2.69s | 7.3 MB | 82
*Unfortunately, Pingdom Tools uses Chrome 39 to perform its tests. This version of Chrome does not have HTTP/2 support, so we won’t be able to realistically measure the speed improvements. We’ll run the tests regardless so we have a benchmark to compare with.
Test 4 – HTTP/2 with performance optimizations and caching
Now that we’ve seen HTTP/2 without any performance optimizations, it’s also a good idea to check whether HTTP/1.x-based performance optimizations can and will make any difference when we have HTTP/2 enabled.
There are two ways of thinking about this:
Against: To perform optimizations aimed at reducing connections and size, we are adding performance overhead to the site (whilst the server performs minification and combination of files), therefore there is a negative effect on the performance.
In favor: Performing such minification and combination of files and other optimizations will have a performance improvement regardless of protocol, particularly minification which is essentially reducing the size of resources which need to be delivered. Any performance overhead can be mitigated using caching.
Testing Site | Location | Page Load Time | Total Page Size | Requests
GTMetrix | Dallas | 1.0s | 6.94 MB | 42
Pingdom Tools** | Dallas | 1.45s | 7.3 MB | 56
GTMetrix | London | 2.5s | 7.21 MB | 56
Pingdom Tools** | Stockholm | 2.46s | 7.3 MB | 56
**HTTP/2 not supported
Test 5 – CDN with performance optimizations and caching (no HTTP/2)
You’ve probably seen over and over again how one of the main ways to improve the performance of a site is to implement a CDN (Content Delivery Network).
But why should a CDN still be required if we are now using HTTP/2?
There is still going to be a need for a CDN, even with HTTP/2 in place. The reason is that besides a CDN improving performance from an infrastructure point of view (more powerful servers to handle the load of traffic), a CDN actually reduces the distance that the heaviest resources of your website need to travel.
By using a CDN, resources such as images, CSS, and JS files are going to be served from a location which is (typically) physically closer to your end user than your website’s hosting server.
This has an implicit performance advantage: the shorter the distance content needs to travel, the faster your website will load. This is something we’ve already encountered in our initial tests above. Physically closer test locations perform much better in loading times.
For our tests, we’re going to run our website on an Incapsula CDN server, one of the CDN services which we’ve been using for our sites lately. Of course, any CDN will have the same or similar benefits.
There are a couple of ways that your typical CDN will work:
URL rewrite: You install a plugin or write code such that the addresses of resources are rewritten so that they are served from the CDN rather than from your site’s URL
Reverse proxy: you make DNS changes such that the CDN handles the bulk of your traffic. The CDN service then sends the requests for dynamic content to your web server.
Testing Site | Location | Page Load Time | Total Page Size | Requests
GTMetrix | Dallas | 1.5s | 7.21 MB | 61
Pingdom Tools | Dallas | 1.65s | 7.3 MB | 61
GTMetrix | London | 2.2s | 7.21 MB | 61
Pingdom Tools | Stockholm | 1.24s | 7.3 MB | 61
Test 6 – CDN with performance optimizations and caching and HTTP/2
The final test which we’re going to perform is implementing all possible optimizations we can. That means we’re running a CDN using HTTP/2 on a site running HTTP/2, where all page-load optimizations have been performed.
Testing Site | Location | Page Load Time | Total Page Size | Requests
GTMetrix | Dallas | 0.9s | 6.91 MB | 44
Pingdom Tools** | Dallas | 1.6s | 7.3 MB | 61
GTMetrix | London | 1.9s | 6.90 MB | 44
Pingdom Tools** | Stockholm | 1.41s | 7.3 MB | 61
**HTTP/2 not supported
Nice! We’ve got a sub-second loading time for a 7 MB website! That’s an impressive result if you ask me!
We can clearly see what a positive effect HTTP/2 is having on the site – when comparing the loading times, you can see that there is a 0.5 second difference on the loading times. Given that we’re operating in an environment which loads in less than 2 seconds in the worst-case scenario, a 0.5 second difference is a HUGE improvement.
This is the result which we were actually hoping for.
Yes, HTTP/2 does make a real difference.
Conclusion – Analysis of HTTP/2 performance
Although we tried as much as possible to eliminate fluctuations, there are going to be quite a few inaccuracies in our setup, but there is a very clear trend. HTTP/2 is faster and is the recommended way forward. It does make up for the performance overhead which is introduced with HTTPS sites.
Our conclusions are therefore:
HTTP/2 is faster in terms of performance and site loading time than HTTP/1.x.
Minification and other ways of reducing the size of the web page being served are always going to provide more benefits than the overhead required to perform this “minification”.
Reducing the distance between the server and the client will always provide page loading time performance benefits so using a CDN is still a necessity if you want to push the performance envelope of your site, whether you’ve enabled HTTP/2 or not.
What do you think of our results? Have you already implemented HTTP/2? Have you seen better loading times too?
Opera has recently unveiled a brand-new refresh of its user interface in the developer stream. Part of a bigger project named Reborn (appropriately so), this new build offers a totally new look and some new features that are sure to pique the interest of many designers.
According to the blog entry on Opera’s official website, the browser’s design gets an entirely new facelift. Less platform-specific, the redesign focuses more on high-quality graphical design than anything else.
One of the first changes that observant designers will notice is with the tabs: they’ve been simplified, are lighter, and seem more elegant overall. In practical, user-experience terms, you’ll be able to find open tabs with much greater ease.
Then, there’s the improvement with the sidebar. It now features a bit of animation for extra vibrancy, and it’s also subtler and sports a more refined appearance.
Speed Dial also gets a new look; it features smooth animations and shadows that are more noticeable for a slightly more potent 3D look. There are also going to be some default wallpapers for Speed Dial.
Originally located in Speed Dial, the sidebar moves to the main browser window; this is close to the setup in Opera Neon. The sidebar’s first version provides users with one-click access to the most vital tools: Bookmarks, History, Extensions, and Personal news.
Users will be able to customize the sidebar so only the tools they find useful will show up there. This new version of the sidebar is visible for new users by default, but existing users have the choice to turn it on if they want to, by simply activating the switch, which is found in Speed Dial.
The browser’s refresh also acknowledges the pervasive influence of messaging on the web today. To that end, this redesign features the opportunity to keep Messenger.com in a side tab. A UX consideration primarily, this should address the cumbersome nature of having to constantly switch back and forth between tabs to answer messages. Additional features incorporating social services into the browser’s design are intended for the future.
Users who want to make use of Messenger within the browser have to simply click the icon found on the top of the sidebar. There are two ways to use Messenger after logging in: 1) Open it in overlay; 2) Pin it to use side-by-side with the current tab. Option 2 allows users to integrate online chat into the full browsing experience for a better UX.
This redesign was released through Opera’s developer stream. Usually, any improvements are released in the developer stream to work out any instability while in the beta state. Then, after a few months, the redesign proper migrates to the consumer version of the browser, so everyone can expect to see the full redesign in the near future.
JavaScript module bundling has been around for a while. RequireJS had its first commits in 2009, then Browserify made its debut, and since then several other bundlers have spawned across the Internet.
Among that group, webpack has jumped out as one of the best. If you’re not familiar with it, I hope this article will get you started with this powerful tool.
Every human has at least one talent that they enjoy. For most people, it is only natural to turn this talent into a hobby. Some people, however, manage to reconcile career and talent, but these cases are rare – especially if the talent is discovered later on in life. The thought that it is too late is wrong. It’s never too late to turn your hobby into a job and start a side business with eCommerce.
Not everyone is able to throw everything away and jump right into self-employment. So this article is a guide for everyone who plans to slowly go freelance while keeping their day job.
Your Product
The first thing is always the question: what do I even want to sell? You either produce something yourself, or you resell goods. Of course, the former takes more effort and requires individual skills. The latter, however, is less complicated but takes a lot more capital. We will focus on a self-produced product, as our goal is minimal costs. All recommendations made for self-produced products usually apply to selling third-party goods as well, though.
Now, you have to decide what exactly you want your product to be. Don’t make the mistake of choosing a product that seems to sell well, but is outside of your talents. A lot of quality would be lost, and work would become frustrating over time. Stay authentic, and find something that you’re good at. No matter what you choose, your product should meet three basic requirements:
It can be made at home (no additional rooms have to be rented).
No additional workers are required. (Whether you want to include your family or not is up to you, but, before the cooperation, agree on potential payment – instant, or only in the case of success.)
No expensive, additional machines are required (this doesn’t include sewing machines, or large cooking pots).
If your product meets these requirements, you can continue with the next step.
Your Customers
Even with a part-time activity, you won’t get around a detailed market analysis. Knowing whether people are willing to buy your product is essential. To figure that out, you should first check the competition in your niche. If there are lots of providers, it’s a sign that a demand for your product exists. At the same time, however, this might mean that the market is already saturated. If your product is very innovative and there are very few to no competitors, this can be a sign of an unsatisfied or lacking demand.
As you can’t draw any clear results from simply analyzing competition, it’s important to create additional context. To do that, you should directly contact your targeted customers. Surveys, and internet research in relevant forums may help you gain a detailed image of your client. Now, all you need to do is find out where to find these customers, and go there. The next step takes some courage: ask your clients what they think about your product. What else would they wish for? How much are they willing to pay for it? This type of feedback is invaluable and will help you adjust your product to your customer’s wishes. As an additional advantage, your research will also show you places where you should advertise your product.
The Price
The price makes or breaks your self-employment. It has to be high enough for you to profit but mustn’t scare off the customers. Thanks to your groundwork, you already have an idea of how much customers are willing to pay. Now, you can also check what prices the competition demands for similar products.
An important thing to keep in mind when deciding on a price is your costs. Every product causes costs that you have to cover with the price. Costs divide into two categories: overheads and variable costs. Overheads always accrue, no matter how much you produce. This includes things like rent. For you, the variable costs are even more important. These are the costs for each product you produce. Materials and shipping costs are classic members of this category. Many part-time freelancers forget to calculate their own work time. Even if you do everything yourself, work time is a resource that should be included in the price. Otherwise, you’re just stealing your own money.
Equipped with the market price level, and the cost calculations, it’s time to find the balance between the two. The price ideas of the customers could help you here. In the end, you should make a profit. After all, this is supposed to be worthwhile.
The Sale
You know what you want to sell. You know who you want to sell it to. You also know what price you want to sell it for. Now, the only question left is how you’re going to sell it. Assuming that the production site is limited (your flat/house), you usually won’t open a store. The easiest option, then, is to trade online. A digital shop space removes geographical limits, so you can sell your product to everyone you want to sell it to. Normally, designing and marketing a professional online shop would take a good chunk of money. So why not profit from existing online shops? Usually, they offer a system called drop shipping. Here, the online shop offers your product on their page, taking care of ads, billing, and first-level customer contact. If a customer orders one of your products, the online shop hands that order over to you, and you take care of packing, shipping, and possible product returns.
The customer pays, and everyone is happy, as the online shop usually takes a share of the turnover. Although you’d probably prefer keeping 100% of the turnover, this type of distribution is worth it, especially in the beginning. As you don’t need to care about ads and the shop, you have time to optimize your business processes. As soon as these are set up, and you have some profit on the side, you can deal with your own online shop. Typically, both models can be operated at the same time.
The popular social networking platform made yet another important change that was not formally announced. Facebook made a move that has probably been planned for a long time: the platform just changed the entire messaging experience by integrating the Messenger app into its desktop version. Reactions of all kinds soon appeared online.
The change brought to the Messenger app was spotted by most users, especially since this time it seems to be more than just a simple beta “test.” When we look at the home page, we notice that the Messenger icon in the blue navigation bar at the top of the screen has replaced the old inbox icon; when you click on it, you go to a radically overhauled inbox, similar to Messenger.com.
Judging by the public statements made by its representatives, Facebook needed to go more mobile. The platform is also meant to help its users by presenting them with relevant information which can be more easily accessed. Indeed, the following features of the newest Facebook Messenger version can be quite useful:
The new “home page,” divided into modules/panes, allows you to see a list of the most recent messages, and the friends you chat with most frequently are highlighted in the “Favorites” module below.
You can now easily find a particular conversation, change a chat’s color, and edit nicknames.
The new “Active Now” module allows you to see when your friends are available, and the “Birthdays” module will remind you of your friends’ birthdays.
The new Messenger includes built-in emoji, sticker, and GIF buttons, and, more importantly, payment options to transfer money to contacts, as well as video games.
The new Messenger is easier to use because it gives you the possibility to reorganize your chat threads based on your favorites and active users so that you might get to important chats faster, and get immediate responses back.
On the other hand, numerous users were not pleased with the new changes, stating that the old inbox layout was better and asking how they can switch back to the old Messenger (which is not possible).
The features people mostly complained about are:
The ability to see your other messages on the side. This can be quite distracting and inconvenient, especially when you have lots of messages from your admiring fans and exes.
The extra space for ads that was added on the right side can be disturbing
The bigger version of the inbox covers half of the message screen now and this can also make the app harder to use
The message box only scrolls to the right, so users can’t easily see the whole message to gauge its tone or easily check for typos. That can be frustrating.
When users try to copy parts of a conversation and save it in Word, this is not possible anymore. Also, the date and time of the messages can’t be copied at all
Users now can’t write longer messages without their paragraphs being truncated in the composer
Messages can’t be filtered by “unread”
Photo sharing needs improvement
Currently, you don’t have the ability to delete individual messages within a conversation
What users generally complained about most is the fact that Facebook complicated things unnecessarily, without doing usability studies or testing the changes in focus groups first. The loss of the inbox layout shook most users, who openly expressed their complaints online. People are also discontent with the fact that the Messenger app, originally designed for mobile, is now being forced upon desktop and laptop users without choice.
In reply, David Marcus, the vice president of messaging products at Facebook, stated that the changes were meant to harmonize the user experience across all platforms, especially when the app is used by 1 billion+ people primarily on mobile. Clearly, the Messaging app needed to feel and look more mobile.
Mr. Marcus also claimed that what Facebook was actually trying to do with the new app is add more value to messaging, to make it more relevant and more interesting than before. And he promised his team would look into the features that people are not currently pleased with.
Stan Chudnovsky, head of product for Messaging at Facebook, also stated that the only change brought to the network is the introduction of the new modules, which actually put together different messages or different people. Messages have been displayed in chronological order since the beginning of the smartphone era.
The Facebook representative also claimed that there is more in store for Messenger: new modules will be progressively introduced to the app because people deserve an enriched messaging experience. The need for innovation is undeniable, especially in this field. It looks like you’re going to have to keep your eyes on your smartphones to see what the platform offers you next.
Facebook’s intent is apparently to revolutionize messaging communication, but will these new changes convince the public?
What do you think about the new update? Let us know in the comment section below.
I saw an interesting take on off-canvas navigation the other day over on The New Tropic. It wasn’t the off-canvas part so much. It was how the elements within the nav took up space. They stretched out to take up all the space, when available, but never squished too far. Those are concepts that flexbox makes pretty easy to express! Let’s dig in a little.
Here’s the nav, with a video showing what I mean:
My favorite part is how there are submenus. When a submenu is toggled open, the same rules apply. If some stretching has happened, the nav items will shrink in height, making room for the submenu. But never shrink too far. If there isn’t room, the menu will just scroll.
Let’s make sure that list is as tall as the browser window, which is easy with viewport units. Then make sure each of the list items stretch to fill the space:
.main-nav > ul {
height: 100vh;
display: flex;
flex-direction: column;
}
.main-nav > ul > li {
flex: 1;
}
We’ve already gotten almost all the way there! Stretching works great, only when there is room, like we want:
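One thing the rules above don’t handle yet is the “never squish too far” part. A minimal way to sketch that (my assumption, not necessarily how the original site does it, and the min-height value here is arbitrary) is to give each item a minimum height and let the list itself scroll once the items can’t shrink any further:

.main-nav > ul {
  height: 100vh;
  display: flex;
  flex-direction: column;
  /* when the items can't shrink any further, let the menu itself scroll */
  overflow-y: auto;
}

.main-nav > ul > li {
  flex: 1;
  /* stretch to fill the space, but never get shorter than this */
  min-height: 2.5rem;
}

Flex items in a column direction won’t shrink below their content size by default, so the explicit min-height mostly makes that limit deliberate; the overflow-y: auto is what gives us the “if there isn’t room, just scroll” behavior.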
Quick Toggles
We have a button in place to toggle the submenus (arguably, we should probably place those buttons with JavaScript, since they don’t do anything without it). Here’s how they could work. The submenus are hidden by default:
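A minimal sketch of how that could look follows. The .submenu class and the is-open hook are my assumptions for illustration; the markup in the original demo may use different names:

.main-nav .submenu {
  /* collapsed until its toggle button adds the open class */
  display: none;
}

.main-nav li.is-open > .submenu {
  display: block;
}

All the toggle button has to do is add or remove is-open on its parent list item. Because the parent items are flex children, opening a submenu squeezes the siblings down toward their minimum height, and once there is no more room the menu simply scrolls, just like the behavior described above.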