The command line is increasingly becoming a part of every web developer’s workflow. With tools like Grunt, Gulp and Bower leveraging the increase in productivity that comes with working in the command line, we are seeing it become a much more friendly and comfortable place for beginners and experts alike.
This article provides insight into some of the best tools to use in your day-to-day work in the command line and gets you started with a totally customized setup. Also, please make sure to check out my series on how to become a command-line power user, available for free, of course.
Getting The Right Terminal
Before we can start using ZSH, Z and related tools, we should get the right terminal application up and running. The default Terminal and PowerShell applications on OS X and Windows leave much to be desired.
For OS X users, iTerm 2 is recommended as a replacement for OS X’s default Terminal. iTerm 2 introduces features that are missing from the regular terminal, many of which you will recognize from your text editor: pane splitting, custom color schemes, paste history, fine-grained control over hotkeys, and dozens of other handy preferences that you will find useful as you become more comfortable in the terminal.
On Windows we have the built-in PowerShell. Most users find this quite different from the interface of the typical UNIX servers used to host websites, and it’s rarely addressed in online tutorials. For this reason, it’s recommended to use an emulator that provides an experience closer to a real UNIX command line, like those on Linux and OS X.
You have a couple of options here. The easiest would be to install Cmder, which provides Git integration, a custom prompt and color schemes out of the box. For most, this will be more than enough to get started with all major web development tooling. It cannot, however, support any of the ZSH and Z features that we will be exploring below.
For a full-blown UNIX emulation, there is Cygwin, which allows us to run all UNIX commands as well as to work with Oh-My-ZSH. It’s not for the faint of heart, but if you are fairly comfortable with Windows, it might be worth trying out. Alternatively, there is the all-in-one Oh My Cygwin, which might speed up your installation process.
Use ZSH and Oh-My-ZSH
When you start a terminal application, whether it be on your server or your local computer, it is running a shell called Bash. Bash is by far the most popular shell and comes with pretty much every UNIX-based operating system. There are, however, alternatives to Bash that make using the terminal faster and more comfortable for web developers.
One of the most popular shells with web developers is the Z shell, or ZSH. Along with it, we’ll use a ZSH framework named Oh-My-ZSH.
Installing Oh-My-ZSH is straightforward: run the following command and restart your terminal:
curl -L https://raw.github.com/robbyrussell/oh-my-zsh/master/tools/install.sh | sh
Now, each time you start a terminal session, you will be using ZSH rather than the default Bash!
ZSH Settings
Before jumping into the next few sections, we need to know about ZSH settings. These are stored in a .zshrc file located in your home directory. It’s a hidden file, so you might not see it in your home directory, but you can view it by running open ~/.zshrc from the terminal. Swap out open with your favorite editor command, such as nano, subl or vim.
Now, we aren’t making any changes to this file just yet, but leave it open. Whenever you make a change to this file, you need to source it in order for the changes to take effect in your terminal. To do this, you can either close the current tab and open a new one or run the source ~/.zshrc command from the terminal.
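To make that edit-and-source cycle concrete, here is one harmless round of it; the gs alias is only an example, so pick any shortcut you like:

```shell
# Append a shortcut alias to ~/.zshrc, then reload the file so the
# current session picks up the change without opening a new tab.
echo 'alias gs="git status"' >> ~/.zshrc
source ~/.zshrc
```

New terminal tabs pick the alias up automatically; the source command covers the tab you are already in.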
Terminal Customization
Customizing what your terminal looks like is one of the best things you can do. Not only does it make you look like a bad-ass coder, but it can greatly improve readability via different colors. It can also improve productivity by displaying important information related to file path, Git status and more!
Prompts
Prompts are the line(s) of text shown when you are about to type something into the terminal. Your prompt provides useful information related to your project, such as the current version of Ruby, Node.js and so on, the current status of your Git repository, the outcome of the last run task, as well as the current working directory.
You can customize your prompt into oblivion, but chances are that someone has created a prompt that already suits your needs.
Your ZSH theme is set in the first few lines of your .zshrc file. Look for something like ZSH_THEME="robbyrussell" — this is the default theme that comes with ZSH. I recommend setting this to ZSH_THEME="random", which will randomly assign a theme each time you open a new terminal tab or run source ~/.zshrc. Do this a few times until you find one you like; you can find the current theme’s name by running echo $ZSH_THEME.
You can browse all the ZSH themes and prompts in the wiki. Because there are hundreds of themes, not all of them come with ZSH by default. Any you want will need to be downloaded and placed in ~/.oh-my-zsh/themes. Because this is a hidden directory, you can access it by running open ~/.oh-my-zsh/themes.
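If you would rather switch themes without opening an editor, the ZSH_THEME line can be rewritten with sed. A small sketch, assuming your settings live in the standard ~/.zshrc location; agnoster is one of the themes bundled with Oh-My-ZSH:

```shell
# Switch the active theme from the command line. Swap "agnoster" for
# the name of any theme file in ~/.oh-my-zsh/themes.
# The grep guard adds a default line first if your .zshrc lacks one.
grep -q '^ZSH_THEME=' ~/.zshrc || echo 'ZSH_THEME="robbyrussell"' >> ~/.zshrc
sed -i.bak 's/^ZSH_THEME=.*/ZSH_THEME="agnoster"/' ~/.zshrc
source ~/.zshrc
```

The .bak copy that sed leaves behind lets you undo the change if the new theme misbehaves.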
Here are a few popular themes:
Note: Many of these themes require a patched font to display the arrows and Git icons. You can download the fonts on GitHub; then, make sure to set them in your iTerm 2 settings.
Color Schemes
Now, the prompts define the standard color to be used, but we’ll use iTerm2 themes to customize what those colors actually look like. By default, the themes come with your basic red, green, yellow and blue, but we can tweak those to be the exact variants that we want.
You can edit the colors or even make your own theme in “Preferences” → “Profiles” → “Colors,” or grab one of the existing themes already out there.
So, now that our terminal is looking great, what can we actually do with ZSH and related tools? Probably one of the most useful features of ZSH is that it enables us to list and tab through files and folders. If you have ever tried to perfectly spell the name of a file, struggled with the case or fought with an impossibly long list of folders with spaces in it, you know the pain and limitations of Bash.
Folder and file tabbing works with any terminal command: cd, trash, cp, open, subl, etc. But for the purposes of this tutorial, let’s use cd for folders and open for files.
Go ahead and type cd (note the space after cd), and hit the “Tab” key twice. You can now use your arrow keys to move over, up and down through the files and folders. To select a folder, hit “Return.” You can now hit “Tab” and “Tab” again to discover subdirectories or hit “Return” to run the command.
This also works for completing file and folder names. Let’s say I’ve got two folders, css/ and Capitalize/. If I type cd c and then hit “Tab” twice, I’ll be able to cycle through all of the folders that start with C. You’ll notice it’s case-insensitive. This is extremely helpful when you have many files with similar names.
Finally, this also works with commands whose names you might not totally remember. For example, if you’re working with MongoDB, 13 commands are associated with it: mongod, mongodump, mongoexport, mongofiles, mongoimport and so on.
By typing mong and hitting “Tab,” you’ll see all available commands that start with mong.
ZSH Plugins
ZSH allows you to extend built-in functionality by adding plugins, and it actually ships with a bunch of fantastic ones. To enable a plugin, open your .zshrc file and scroll down until you see the spot where active plugins are defined. To add a new one, just type the name between the parentheses, making sure to include a space between each name.
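For reference, the line in question looks something like this; git ships enabled by default, and bower and z here are just examples of bundled plugin names, so the exact list is up to you:

```shell
# In ~/.zshrc: plugin names go inside the parentheses,
# separated by spaces, not commas.
plugins=(git bower z)
```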
Z isn’t part of Oh-My-ZSH, but it’s the perfect companion for anyone who heavily uses the command line. The idea behind Z is that it builds a list of your most frequent and recent — “Frecent” — folders and allows you to jump to them quickly in one command, rather than having to tab through a nested folder structure.
To install it, make sure Z is included in the plugins list from above. While this works for most people, some have trouble getting it to work. If that is the case, download Z and put it in your home directory so that it’s located at ~/z.sh. Then, include the following in your .zshrc file and source the file again.
# include Z, yo
. ~/z.sh
Once it is installed, keep traversing directories with your cd command as usual. Z watches where you go frequently and recently, and builds a weighted list of directories.
After a few hours of moving around, you can use the z command followed by a word that is in your directory. It uses fuzzy matching to figure out what folder you are trying to get to, and it’s almost always right!
z styles might bring you to ~/Dropbox/projects/0202-coffee-shop/styles.
z pizza might bring you to ~/Dropbox/projects/0300-pizza/.
z pizza styles might bring you to ~/Dropbox/projects/0300-pizza/styles.
z 303 might bring you to ~/Dropbox/projects/0303-candy-store/.
For a full list of advanced Z commands, visit the GitHub repository, or watch the Command Line Power User video on Z.
More, More, More!
As developers, we know how important it is to sharpen our tools and continually add new ones to our workflow. The command line is one of the best tools you can master as a developer. With this article, we are just scratching the surface of what we can do with the command line. Check out Command Line Power User for all 11 free videos about getting comfortable with the command line.
In this article I’ll be taking a look at how to build a simple yet robust workflow for developing sites that require PHP and MySQL. I’ll show you how to use Vagrant to create and run a web server on your own computer, running the same version of PHP as your live site. I’ll also demonstrate a process for using a hosted service to deploy files in a robust way to your live server.
This article is for you if you currently have no way to test your PHP and MySQL sites locally, or use something like MAMP or XAMPP. The second half of the article will help you move away from uploading files using FTP to a process that is far less likely to cause you problems.
The Aim Of A Local Development Environment
When designing and developing your website, you should try to match the live web server as much as possible. This should include ensuring that paths from root don’t change between local and live versions, and that PHP modules and permissions are the same in both places. This approach will reduce the possibility of something going wrong as you push the site live. It should also enable you to come back to a site to make changes and updates and know that you can then deploy those changes without breaking the running site.
A good local development environment saves you time and stress. It gives you a place to test things out. It means that you can pick up a project, make some changes, deploy them and bill your client for another job well done.
Disaster-Free Deployments
If you keep a list of changes made to your site and then transfer the files one by one, you leave yourself open to difficulties caused by human error and connectivity problems. Many issues we see supporting our products are down to failed FTP transfers. A key file has failed to upload, and it is deep in the core product. It’s easy to forget to transfer a file, and it’s also easy to leave old files lying around. If the software you use has removed some files to resolve a security issue, leaving them on the server could leave you at risk even if you have upgraded.
A good deployment method ensures that the files on your live server exactly match those locally. If anything fails to deploy, you should be notified so you can fix the issue before your client or their customers see it first!
Step 1: Grab Some Tools
We’re going to be using some free tools to create our development environment. First, download VirtualBox, a free application that allows you to run a virtual machine on your computer. You may have already come across virtual machines if you work on a Mac and use a Windows virtual machine for testing. A virtual machine is exactly what the name suggests: a complete virtual operating system running on your computer.
Install the version of VirtualBox for your operating system.
Now download and install Vagrant. Vagrant is an application that helps you manage virtual machines.
It’s possible to work with virtual machines without using Vagrant. However, each time you want to set up a new VM you have to go through the process of installing web server software and configuring the server. Vagrant helps you automate that process so that within a few minutes you can have a local web server running your site.
If you are on Mac OS X or Linux, at the command line run the following command:
sudo vagrant plugin install vagrant-bindfs
For all operating systems, run the next command to install Vagrant Host Manager, which saves you editing your hosts file by hand.
sudo vagrant plugin install vagrant-hostmanager
Vagrant requires a project folder containing a text file named Vagrantfile in its root. In the Vagrantfile you specify how the VM should be set up. You can write your own configuration scripts for Vagrant, but in most cases you don’t need to, as someone else has already done the hard work for you. Here we’re going to use a tool called PuPHPet.
PuPHPet
PuPHPet is an online configuration tool that helps you configure a Vagrant project. You work through a form on the website, selecting options for your site, and then download a package containing a Vagrantfile and other scripts to set up a virtual machine.
Step 2: Discover What Is On Your Live Server
To use PuPHPet to set up a development server that is as close as possible to the hosting you will use for the site, first find out what is on the live server. You want to know:
Type of Linux
Web server: Apache or Nginx (probably Apache if shared hosting)
PHP version: this will be something like PHP 5.4 or 5.5, etc.
The configured resource limits for file upload, memory and so on
Installed PHP modules; for example: gd, curl
MySQL version
If you don’t yet have access to the hosting then you will need to ask the host these questions. If you do have access then you can find out for yourself.
Upload a file to the server named info.php that contains a call to the phpinfo() function:

<?php phpinfo(); ?>

1. Type Of Linux

You should see an indication of the base operating system in the first line of the report, “System”. Knowing whether you have a Debian, Ubuntu or CentOS system might be helpful for more advanced configurations.
2. Web Server
This is probably Apache. If you see any mention of Apache in the initial section or in the headings below, it’s Apache. The most likely alternative is Nginx. For simple sites, the biggest difference between web servers is that rewrite rules use a different syntax, so if you are creating friendly URLs you need to know which one to use.
3. PHP Version
The version of PHP will be right at the top of the document next to the PHP logo. It might be a long string — you are mostly interested in one number after the dot. So if you see “PHP Version 5.4.4-14+deb7u14,” all you need to note down is PHP 5.4.
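If you prefer not to trim the string by eye, the shell can do it; the version string below is the example from the paragraph above:

```shell
# Keep just MAJOR.MINOR from a full PHP version string.
full="5.4.4-14+deb7u14"
echo "$full" | cut -d. -f1,2    # prints 5.4
```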
4. PHP Modules
PuPHPet will install some default modules for you. If you want to be sure certain things are present, however, you can specify them. The PHP modules are listed, with details about them, after the “Core” section of the report. Common modules to look out for are:
curl: for making requests to other servers
gd and/or imagemagick: used for image manipulation
mysql, mysqli and pdo: methods of connecting to the database. You should probably be using mysqli or pdo at this point
5. Resource Limits And Configuration Options
Under the section “Core” you will find all kinds of information about PHP. Useful settings to note down are:
max_execution_time: how long a script may run for
max_file_uploads: how many files may be uploaded at once
max_input_vars: how many fields a form is limited to
post_max_size: the maximum size of a form post
upload_max_filesize: file upload limit
Depending on your hosting, you may be able to change some of these. For example, you can usually increase the maximum size of files that can be uploaded.
6. MySQL Version
Under the PHP module information for mysql, mysqli and pdo_mysql you should see a value for “Client Library Version”: this is your MySQL version. Again, knowing just one value after the dot is fine.
Beware Of Old PHP!
If this test reveals that the server is running anything older than PHP 5.4, stop now and find out how to upgrade the hosting to a more recent PHP version. For a new site, I’d suggest ensuring you are on at least PHP 5.5; version 5.6 is even better.
PHP 5.3 is not only end-of-life, it’s also really slow in comparison to newer PHP versions. It’s a good plan to make sure you are using a supported version of a core technology on your site. Through helping customers at Perch we’ve found that, in general, hosts are happy to upgrade you to a newer server if you put in a request. If they are not, I’d seriously consider moving hosts.
Step 3: Build A Project With PuPHPet
Now that you have your information to hand, you can use it to build a project with PuPHPet that reasonably closely mirrors your environment. I’ll walk you through the interface. If I don’t mention a setting and you don’t have an opinion about it, then leave the default value.
Deploy Target
On the PuPHPet website, choose Deploy Target → Locally in the sidebar. In the main screen select VirtualBox as the provider.
Under Distro you can select the type of Linux you are using, if it is listed. If it isn’t listed I would suggest using the default Ubuntu.
The IP address needs to be something unique on your network, not a real external IP. I tend to use IP addresses with the format 10.1.0.130 for VMs.
The hostname identifies your server. Again this can be something made up.
Shared Folders is an important setting. When you use a virtual machine you are running an entirely new computer with its own file system on your computer. If you want to continue editing files in the usual place on your computer — and not have to transfer them into the VM to view them — you need to map the drive on your own machine to one on the VM. That’s what we are doing when we create a shared folder.
On my Mac, inside /Users/rachel/Sites I have a folder called vm. This is where I place a folder for each of my projects. When I set up a VM I use the path /Users/rachel/Sites/vm as the folder source, mapped to /var/www as the folder target.
If this is a new site and you don’t already have files created, at this point I’d suggest creating a folder for the project you are setting up the virtual machine for, and popping an index.html with “It works!” in it into that folder, just so you can see that things are working after running setup.
Finally, if you are on Mac OS X or Linux, select NFS as the shared folder type.
System
You can probably leave everything here as the default. It’s worth knowing that under this option you can configure cron jobs for scheduled tasks and add system packages if you have certain things you want to install.
Web Servers
Unless you have identified that you have Nginx, select Apache and check Install Apache. This will open up a further set of options. Here is where you configure your virtual hosts.
A virtual host means that instead of having one website per server you can have multiple websites on a server. With virtual machines you can create as many as you like, so it’s up to you whether you configure a single virtual host or more. What you should not do is configure one virtual host and then stick multiple websites into subfolders of that host. Each site needs either its own VM or a virtual host on a VM so that the path of your files from root does not change when you go live.
The basic settings for a virtual host are as follows:
Server name: clientname.dev, or any made-up domain you like.
Document root: from /var/www. If you have shared a folder in the way I suggested, /var/www is that directory on your computer — the directory with all your project folders in it — so you can specify /var/www/clientname here.
If you want to add another host, scroll down to Add an Apache vhost and create your next one.
Languages
Select PHP and check Install PHP.
Under PHP Version select the version you identified as being on your host.
Under PHP Modules add any specific modules (for example, “gd” and “curl”) that you identified as present on your hosting.
Databases
Select MySQL and if you know the version of MySQL select it here.
You can now create a database user with a password. I tend to just use the name “vagrant” for both on local development machines.
You can also create a database ready to use for your site. Remember these details as you’ll need them to install your CMS or use in your own custom code that connects to MySQL.
Mail Tools
If you are using a CMS, it’s a good idea to have some way of looking at the emails it sends. PuPHPet suggests installing MailCatcher locally for this task, as it saves configuring a mail server.
That should be it for setup. Select Create Archive from the sidebar and download your file. Unzip the file and put it somewhere on your system — mine all live in my home directory in a subdirectory called vagrant.
Your First Virtual Machine
You are almost ready to go. Open up a terminal window and change into the folder where you unzipped your project.
cd /Users/rachel/vagrant/mynewproject
Now run the command:
vagrant up
The first time you do this it will take a while. Vagrant will see that you don’t already have the base operating system downloaded so it will download it. When you create a new project in the future and use the same version of Linux, Vagrant will copy the box so this will be quicker.
You will see a lot of stuff scrolling by — don’t worry about it; it will take a few minutes to get everything set up for you. If you are using NFS you will be prompted for your password during the process to allow Vagrant to edit your exports file.
Once Vagrant has finished you should be able to go to the domain you set up for your virtual host using your web browser and see your site! If you make changes to your files and reload the browser, you will see your changes.
Basic Vagrant Commands
Vagrant is controlled with a few simple commands from the command line. We’ve already used vagrant up which will start up a VM. If the VM is brand-new it will also provision it — setting up the packages you configured to be installed, creating your virtual hosts, and so on. If you run vagrant up on a VM that has already been provisioned, Vagrant will not reprovision it.
Understanding the commands and what they will do is important, but if you prefer to stay out of the command line, take a look at Vagrant Manager. Vagrant Manager is an application for Mac OS X and Windows that gives you a nice way to manage your VMs and also see which are running at any one time.
If you want to reprovision a VM, first make sure it is running with vagrant up, then type:
vagrant provision
To stop a VM from running you can use:
vagrant suspend
This will pause the box and give the memory it uses back to your host machine, but it won’t delete anything on the VM or shut down the operating system. If you run vagrant up again, it will come back just as it was before you paused it.
To shut down the operating system on a VM use:
vagrant halt
Running vagrant up on a halted box boots up the system again.
If you want to set your virtual machine right back to its initial state, run:
vagrant destroy
This will delete anything you installed on the server. It won’t touch the files in your mapped drive as those are hosted on the host computer, but it will delete MySQL databases, for example. If you want the data from those, export it first.
To access the command line on the VM type:
vagrant ssh
You will then be on your VM and can run any commands. A common thing you might do is import or export a database file.
Importing A Database File
Our process creates an empty database. If you are installing a CMS or some other software, it is likely that it will create the tables for you. If you want to import a file exported from your live server, you can do that at the command line.
Use vagrant ssh to reach the command line of your VM. Make sure your exported database SQL script is in the root of your site, within the shared folder. Then, type the following (I’m assuming a database name of dbMySite, with username and password both set to “vagrant”):
mysql -u vagrant -p dbMySite < /var/www/clientname/db.sql
You will then be prompted for the password. Type your password and the database will be imported.
Deploying Live
After following these steps you should have a nice way to work locally on one or more projects at a time. You can set up virtual machines that are similar to live, and you are not developing in subfolders. We can continue to enhance our workflow by moving away from FTP to using a deployment service.
Get Files Into Source Control
If you are already using Git, then you are part of the way to simple deployments. If not, that is the first thing we need to do. Our process will be to commit files into Git on our own computer, then create an account on a hosted Git repository and push our files to the live server from there.
At the command line, give Git your name using the following command:
git config --global user.name "YOUR NAME"
Use the next command to give Git your email address. We are going to use a hosted repository, so use the email address here that you will use to sign up:

git config --global user.email "YOUR EMAIL ADDRESS"
Stay on the command line and change to the directory where you keep your site files. If your files are in /Users/rachel/Sites/vm/clientname, you would type:
cd /Users/rachel/Sites/vm/clientname
The next command tells Git that we want to create a new Git repository here:
git init
We then add our files:
git add .
Then commit the files:
git commit -m "Adding initial files"
The comment in quotes after -m is a message describing the commit. Your local files are now in Git! As you work you can add and commit files.
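If you want to rehearse the whole sequence safely, here it is end to end in a scratch directory; the file name and commit message are just examples, and nothing here touches your real projects:

```shell
# Practice run of the init/add/commit cycle in a throwaway directory.
cd "$(mktemp -d)"
git init
git config user.name "YOUR NAME"
git config user.email "you@example.com"
echo "It works!" > index.html
git add .
git commit -m "Adding initial files"
git log --oneline    # one line per commit made so far
```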
If you would rather not work in the command line, there are plenty of applications to help you work with Git; Tower is a popular choice. The people who develop Tower have also produced a great book on learning version control with Git, which you can read online for free or purchase as an ebook.
Create A Hosted Repository
To make deployments easy we are going to use a hosted Git repository that will securely store your files and allow you to deploy them live. The hosted service I’m suggesting here is Beanstalk, because it does both hosted Git and deployment. There are other services that will deploy from a GitHub account or another hosted Git service; Beanstalk bundles the two together, which keeps things straightforward.
After setting up your Beanstalk account, create a repository there. You now need to push your files to that repository. At the command line, make sure you are inside the directory that contains your files, then run the git remote add and git push commands that Beanstalk displays on your new repository’s setup page.
Your files will now be transmitted to Beanstalk. You should be able to see and browse around them there.
We can now edit our files, previewing them on our own web server. We can commit them locally and push the changes to Beanstalk. Our final step is to make them live.
Deployment
Deployments on Beanstalk can be manual or automatic. An automatic deployment happens whenever you push code into a particular branch. Typically, you’d use this on a staging environment where it wouldn’t matter if the code you pushed broke things.
For a live site, especially working in our simple way, you’ll want to do a manual deployment. You will log into Beanstalk and trigger the deployment yourself.
When you deploy, Beanstalk will make sure that the files on the server are identical to the files in Git. If a new file is present, it will be added, changed files will be updated, and any files deleted from Git will be removed from the server.
To get started deploying your files, go to the repository that you created on Beanstalk and select Deployments. Here you can create a new environment and server. If you are creating a deployment to the live server, name it “Live” or “Production,” keep Deployment Mode as Manual and specify the master branch.
You can then add a server type. If you are deploying to shared hosting, that type would ideally be SFTP, but could also be FTP. On the next screen you then add your server details. These are exactly what you would use to connect with an FTP client.
Beanstalk allows you to run a test to check that your server can be connected to. Once you have your server set up, and have verified the connection, you are all set to deploy your files to your live site. It’s as simple as clicking a button and waiting. Once you deploy, Beanstalk will do the following:
Connect to your server.
Ensure that the files on the server match the files in the branch you are deploying.
On initial deploy, all existing files on the server have to be checked. Your first deploy will be slow!
Subsequent deploys only change things that have changed in Git.
Deployment Tips
Here are a few suggestions to make deploying your sites in this way easier.
Create A Multiple-Server Config File
Working across a few environments means you are going to need to manage things that are specific to each environment, such as database settings or file paths. I like to create a config file that switches on host name, so the same file can live everywhere and you can’t accidentally replace your live details with the development server ones. You can see an example for Perch, but you could do the same for any other system that has a config file as part of the code.
Use .gitignore To Keep Things Out Of Beanstalk
There are likely to be files and assets that you don’t want to push to Beanstalk. You can use a .gitignore file to make Git ignore them. There are some good starting-point files for various systems on GitHub.
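A minimal sketch of such a file; the entries below are common examples (OS metadata, dependencies, logs) rather than a definitive list, so adjust them for your own project:

```shell
# Write a starter .gitignore in the project root.
cat > .gitignore <<'EOF'
.DS_Store
Thumbs.db
node_modules/
*.log
EOF
```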
Exclude Files And Folders From Deployment
If you want files and folders to end up on Beanstalk as part of the repository but not be deployed onto your server, you can also exclude them from the deployment. Perhaps you have some source assets you want to manage in Git along with the site, but don’t want to deploy. You can configure patterns, specific files or directories when setting up or editing your deployment.
Edit Files On Beanstalk In Emergencies
When you are away from your desk and need to fix a problem, Beanstalk can save the day. You can edit a file directly on Beanstalk using the web interface and push it live.
This is not as good as testing locally before deploying but handy in an emergency. Unlike editing directly on the server, you have the safety net of being able to roll back to a previous version if it all goes wrong. You also have the changed file in Git so you don’t overwrite the change next time you deploy.
Use Navicat To Sync Database Changes
One of the biggest problems of deploying changes can arise if you need to make changes to the live database to keep it in sync with your local one. Navicat40 can help with that job. You can select a source and target, compare differences and run queries to make changes.
Your New Workflow
If you’ve followed this article, you should now be in a position to develop one or many sites locally, using a setup similar to how the site will run on the live server.
Your files are now version-controlled and are pushed to a remote Git repository.
You can deploy in the confidence that what ends up on the live server is exactly what should be on that server — no more, no less.
When you need to make changes to a project in the future, you can make sure you have the latest files from Beanstalk, make your changes, test, commit and deploy, and not worry that you might break something. The time you have spent getting your workflow straight should pay off the first time you need to make updates to a running site that you haven’t touched for a few weeks.
This isn’t the only way to achieve a solid development environment and deployment process, but it’s a reasonably straightforward one. Once you understand this type of workflow, you can explore how to streamline it further, making time to do more interesting things than fight with servers and hosting!
It seems like every other day the public is granted some new means of accessing the web. Some days it’s a new browser. Others it’s a new smartphone. Or a tablet. Or an e-reader. Or a video game console. Or a smartwatch. Or a TV. Or a heads-up display. Or a car. Or a refrigerator. I worked on one project where the client provided me with a spreadsheet detailing 1,400 different user agents that accessed the login screen for the m-dot site. In two days! As further evidence, consider the enlightening details of this post from Jason Samuels of the National Council on Family Relations, a non-profit organization: In 2008, visits from “mobile” devices accounted for only about 0.1% of their traffic. In 2014, that number had skyrocketed to 14.4%. In 2008, they detected 71 different screen resolutions, which is already a lot to consider. By 2014, however, they were seeing 1,000 unique screen resolutions each and every quarter (with over 200 of those recording 10+ visits per quarter). That last stat blows my mind every time I read it. You can’t design for 200 different screens, let alone 1,000. It’s a fool’s errand. And don’t even think of trying […]
Almost four years ago, I waxed hillbilly on how nice it was to stick with what you knew, at least for side projects. At the time, my main project was Java and my side projects were .NET. Now, my main project is .NET and for whatever reason, I thought it would be nice to take on a side project.
The side project is Western Devs, a fairly tight-knit community of developers of similar temperament but only vaguely similar backgrounds. It’s a fun group to hang out with online and in person and at one point, someone thought “Wouldn’t it be nice to build ourselves a website and have Kyle manage it while we lob increasingly ridiculous feature requests at him from afar?”
Alas, I suffer from an unfortunate condition I inherited from my grandfather on my mother’s side called “Good Idea At The Time Syndrome” wherein one sees a community in need and charges in to make things right and damn the consequences on your social life because dammit, these people need help! The disease is common among condo association members and school bus drivers. Regardless, I liked the idea and we’re currently trying to pull it off.
The first question: what do we build it in? WordPress was an option we came up with early so we could throw it away as fast as possible. Despite some dabbling, we’re all more or less entrenched in .NET so an obvious choice was one of the numerous blog engines in that space. Personally, I’d consider Miniblog only because of its author.
Then someone suggested Jekyll hosted on GitHub Pages due to its simplicity. This wasn’t a word I usually associated with hosting a blog, especially one in .NET, so I decided to give it a shot.
Cut to about a month later, and the stack consists of:
Of these, the one and only technology I had any experience with was Rake, which I used to automate UI tests at BookedIN. The rest, including Markdown, were foreign to me.
And Lord Tunderin’ Jayzus, I cannot believe how quickly stuff came together. With GitHub Pages and Jekyll, infrastructure is all but non-existent. Octopress means no database, just file copying. Markdown, Slim and SASS have allowed me to scan and edit content files more easily than with plain HTML and CSS. The Minimal Mistakes theme added so much built-in polish that I’m still finding new features in it today.
The most recent addition, and the one that prompted this post, was Travis. I’m a TeamCity guy and have been for years. I managed the TeamCity server for CodeBetter for many moons and on a recent project, had 6 agents running a suite of UI tests in parallel. So when I finally got fed up enough with our deploy process (one can type `git pull origin source && rake site:publish` only so many times), TeamCity was the first hammer* I reached for.
One thing to note: I’ve been doing all my development so far on a MacBook. My TeamCity server is on Windows. I’ve done Rake and Ruby stuff on the CI server before without too much trouble but I still cringe inwardly whenever I have to set up builds involving technology where the readme says “Technically, it works on Windows”. As it is, I have an older version of Ruby on the server that is still required for another project and on Windows, Jekyll requires Python but not the latest version, and I need to install a later version of DevKit, etc, etc, and so on and so forth.
A couple of hours later, I had a build created and running with no infrastructure errors. Except that it hung somewhere. No indication why in the build logs and at that moment, my 5-year-old said, “Dad, let’s play hockey” which sounded less frustrating than having to set up a local Windows environment to debug this problem.
After a rousing game where I schooled the kid 34-0, I left him with his mother to deal with the tears and I sat down to tackle the CI build again. At this point, it occurred to me I could try something non-Windows-based. That’s where Travis came in (on a suggestion from Dave Paquette who I also want to say is the one that suggested Jekyll but I might be wrong).
Fifteen minutes. That’s how long it took to get my first (admittedly failing) build to run. It was frighteningly easy. I just had to hand over complete access to my GitHub repo, add a config file, and it virtually did the rest for me.
Twenty minutes later, I had my first passing build, which only built the website. Less than an hour after that, our dream of continuous deployment was realized. No mucking with gems, no installing frameworks over RDP. I updated a grand total of four files: .travis.yml, _config.yml, Gemfile, and rakefile. And now, whenever someone checks into the `source` branch, I am officially out of the loop. I had to do virtually nothing on the CI server itself, including setting up the Slack notifications.
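For readers unfamiliar with Travis, a setup along these lines might look roughly like the following .travis.yml. This is an illustrative sketch only — the Ruby version, rake task names and notification details are placeholders, not the author’s actual file (though `rake site:publish` is the task mentioned above):

```yaml
# Illustrative Travis CI config for a Jekyll/Octopress site.
language: ruby
rvm:
  - 2.2                  # placeholder Ruby version
branches:
  only:
    - source             # build only when the source branch changes
install:
  - bundle install
script:
  - bundle exec rake site:generate   # hypothetical build task
after_success:
  - bundle exec rake site:publish    # push the generated site live
notifications:
  slack: workspace:token-goes-here   # placeholder credentials
```

The point is how little there is: Travis detects the language, provisions the environment, and runs the listed commands on every push.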
This is a long-winded contradiction of my post of four years ago where my uncertainty with Java drove me to the comfort of .NET. And to keep perspective, this isn’t exactly a mission critical, LOB application. All the same, for someone with 15-odd years of .NET experience under his obi, I’d be lying if I said I wasn’t amazed at how quickly one can put together a functional website for multiple authors with non-Microsoft technology you barely have passing knowledge of.
To be clear, I’m fully aware of what people say about these things. I know Ruby is a fun language and I feel good about myself whenever I do anything substantial with it. And I know Markdown is all the rage with the kids these days. It’s not really one technology on its own that made me approach epiphaniness. It’s the way all the tools and libraries intermingle so well. Which has this optimistic hillbilly feeling like his personal life and professional life are starting to mirror each other.
Is there a lesson in here for others? I hope so, as it would justify me typing all this out and clicking publish — er, committing to the repository. But mostly, like everything else, I’m just happy to be here. As I’ve always said, if you’ve learned anything, that’s your fault, not mine.
It’s always great to have a little toolbox with just the right tools waiting for you when you need them. What if you are about to start working on a new project which should apply the material design language introduced by Google last year? What if you had just a good starter kit with everything you need to dive into the creative process without being distracted by routine tasks?
We’re here to have your back — with a little selection of handy goodies, icons, templates and tools to help you get off the ground faster. This post is one of our first shorter “Sideblog”1 pieces where we highlight some of the more useful and helpful snippets and goodies every now and then. We’d love to hear your feedback in the comments to this post.
Material Design, The Visual Language
Intended by Google primarily for Android apps running on phones, tablets and everything in between, material design raised awareness of the delightful details that can make up an interface — from subtle transitions and animations to colorful interfaces with bold, vibrant typography. Experiences crafted with material design in mind are bright and appealing; if well crafted, they’re smooth and accessible as well. No wonder that the visual language found its way through to a brave new mobile world in native apps, hybrid apps and also websites.
The visual language is based on strong color schemes, clarity and space. Shadows play a central role in creating a three-dimensional feeling on 2-D screens, and support consistency and continuity when used with animations in the user experience. It’s more than an extended style guide, though. There are many freely available resources dedicated to material design and you can use them for your project right away. Material design is well documented4 and elaborated to the last detail.
Yet whenever we talk about aesthetics and interaction, we ought to have a conversation about performance, too. Even performant animations can prove an enormous bottleneck when every DOM element is supposed to move, animate and transition from one state to another. Performance matters more than ever before and we have to find the delicate balance between smooth interactions and getting content to the user fast.
More weight doesn’t have to mean more wait9, so we could treat animations as progressive enhancement, acknowledging that the experience isn’t going to match the material design culture for everybody. That’s when responsive animations — a concept we haven’t thought much about yet — might become important as well (not to be confused with animations in responsive design10, which can be delightful, too).
What can be adopted, though, are colors, spacing, fonts and icons. In fact, there are quite a few resources you can use to get just what you need quickly, when you need it.
Obviously, Roboto has been extensively crafted to fit the purpose of material design, so it’s highly versatile and fully supports Cyrillic, Latin, Greek and Vietnamese (extended character sets, of course). The typeface is available in a variety of weights, from thin and light, to normal and medium, to bold and black (100–300–400–500–700–900); the same goes for italic variants, too. The family can be used alongside the Roboto Condensed family and the Roboto Slab family.
While Google has suggested a few valid constraints when designing Android apps, you can still find enough room for creativity to play with the aesthetics of your interface.
Iconography is an important factor, but often it can’t exist alone: for a stronger visual impact, it needs supporting photography or illustrations. Furthermore, Google also encourages30 using (and mixing) illustrations and photos to enhance the user’s experience — with predictive, specific and relevant images, and potentially with an interactive overlay to hide them when they aren’t needed.
The Physical Side Of Material Design
Unlike flat design33, material design widely uses the so-called paper shadow. This shadow is supposed to act like a sheet of paper lying on a bright surface. It emulates a 3-D presence for a digital object. Material design is derived from the material world. Probably the most well-known example is the Gmail icon, which uses lighting effects to make you think of a conventional envelope.
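On the web, the paper effect is typically approximated with layered box-shadows — a soft ambient shadow plus a sharper directional one. The values below are a common illustrative approximation, not Google’s official specification:

```css
/* Resting elevation: two stacked shadows read as a sheet of paper. */
.card {
  box-shadow: 0 1px 3px rgba(0, 0, 0, 0.12),
              0 1px 2px rgba(0, 0, 0, 0.24);
  transition: box-shadow 0.2s ease;
}

/* Raised elevation on hover: larger offsets and blur suggest
   the card lifting toward the viewer. */
.card:hover {
  box-shadow: 0 10px 20px rgba(0, 0, 0, 0.19),
              0 6px 6px rgba(0, 0, 0, 0.23);
}
```

Animating between the two states with a transition on `box-shadow` is what gives the sense of a physical object rising and settling.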
Aside from Google’s official icon set, designers have taken the approach further by crafting their own icons and adapting the visual language to their needs. That might not exactly correspond to the original idea behind material design, but it doesn’t mean that it can’t work well. For example, Muhammad Yasir from Dubai has released a free PSD material icon set36. There are many37 more38 icons39 available40 for free, too.
Cards
With material design, content is always presented in cards43 which use hierarchy, background images and content to “provide context and an entry point to more robust information and views.” Indeed, cards work well as they are supposed to put just the right amount of information in a compact overview, enhanced and supported by visual elements. There are several variations of cards, depending on the content you want to fill in, but usually you will either have an action displayed or provide information in a content block.
In material design, Google also advocates launch screens45, which might sound like a good ol’ splash screen from the sweet and sour Flash times. However, the context might require them: for example, during the in-between time, when your application needs a few moments to fetch data or provide feedback. It might also be useful for onboarding46.
Animations In Material Design
Smooth experiences with material design are achieved with animations47. There are many interesting material design animations48, and often they are quite subtle, but when put together they establish a sense of continuity and delight.
You can find a few examples and freely available samples below:
If you’re looking for further tutorials and examples, search for “Polycasts”, a series of videos produced by Google.
HTML/CSS/JavaScript Components
Google has just launched Material Design Lite59, an extensive set of components, templates and styles which have been heavily optimized for performance, speed and accessibility. The components do not rely on any JavaScript frameworks and fall back to a basic experience in legacy browsers. Among the templates, the library also has a “blog” template — at just 159KB, it’s a lightweight template containing built-in patterns for subscription call-to-action, comments, comment ratings, and more.
Besides the blog template, Google has also released a lite version of the current Android.com website. Its basic weight without web fonts is just 27KB, and it contains a search field, navigation, and a carousel.
Material Design User Interface Kits For Free Download
The Designtory UI Kit (PSD)
The Ultralinx Material Design Kit (PSD)
Okilla Material Design Kits (PSD)
InVision’s Sketch UI Kit (.sketch)
Google’s Sticker Sheet Material Design UI Kit (.sketch, .ai, .psd)
Where To Go From Here
Trends aren’t important, but techniques are. Whatever flavor of material design you select for your project, keep in mind that it’s all about conforming to the culture of the device your users are using, yet also creating a distinctive, memorable, delightful experience for your users. We can, of course, achieve it without material design, but we can also benefit from some of the qualities and patterns of its rich visual language.
One way or another, at this point you should have a few tools in your toolbox to approach that project head-on, without losing time, and focusing on crafting those websites that your users will love and keep returning to.
Did you like this “Sideblog” piece? Would you love to see more posts like this one in the future? We’d love to hear your feedback in the comments to this post.
This article was written by Sven Lennartz95, co-founder of Smashing Magazine. It was first published in German96, and then extended and edited by Markus Seyfferth and Vitaly Friedman.
Every now and then we see discussions proclaiming a profound change in the way we design and build websites. Be it progressive enhancement1, the role of CSS2 or, most recently, web design itself being dead3. All these articles raise valid points, but I’d argue that they often lack objectivity and balance, preferring one side of the argument over another one.
These discussions are great for testing the boundaries of what we think is (or is not) possible, and they challenge how we approach our craft, but they don’t help us as a community to evolve together. They divide us into groups and sometimes even isolate us in small camps. Chris Coyier recently published a fantastic post4 covering the debate on the role of CSS in light of the growing popularity of React.js, extensively and objectively. That’s the kind of discussion we need, and that’s what keeps us evolving as a growing and maturing community.
Web technologies are fantastic — we all agree on this. Our tools, libraries, techniques and methodologies are quite fantastic, too. Sometimes they are very different and even contradictory, but they are created with the best intentions in mind, and often serve their purpose well in the specific situations they were designed for. Sometimes they contain mistakes, but we can fix them due to the nature of open source. We can submit a patch or point out solutions. It’s more difficult, but it’s much more effective.
There are a lot of unknowns to design and build for, but if we embrace unpredictability5 and if we pick a strategy to create more cohesive, consistent design systems6, we can tackle any level of complexity — in fact, we do it every single day. We solve complex problems by seeking solutions, and as we do, we make hundreds of decisions along the way. Yet sometimes we fall into the trap of choosing a solution based on our subjective preferences, not objective reasoning.
We tend to put things into buckets, and we tend to think in absolutes. Pro carousels or anti carousels; pro React.js or anti-React.js; for progressive enhancement or against it. But the web isn’t black and white — it’s diverse, versatile, tangled, and it requires pragmatism. We are forced to find reasonable compromises within given constraints, coming from both business and UX perspectives.
Tools aren’t good or evil; they just either fit a context or they don’t. Carousels can have their place when providing enough context to engage users (as Amazon does). React.js modules can be lazy-loaded for better performance, and progressive enhancement is foundational for making responsive websites really8, really9 fast. And even if you have extremely heavy, rich imagery, more weight doesn’t have to mean more wait10; it’s a matter of setting the right priorities, or loading priorities, to be precise.
No, web design isn’t dead. Generic solutions are dead.11 Soulless theming and quick skinning are dead. Our solutions have to be better and smarter. Fewer templates, frameworks and trends, and more storytelling, personality and character. Users crave good stories and good photography; they’re eager for good visuals and interesting layouts; they can’t wait for distinctive and remarkably delightful user experiences. This exactly should be our strategy to create websites that stand out.
There are far too many badly designed experiences out there, and there is so much work for us to do. No wonder that we are so busy with our ongoing and upcoming projects. Proclaiming our craft to be dead is counter-productive, because we’ve shown ourselves and everybody out there what we are capable of. The last fifteen years of web design were nothing if not outstanding in innovation and experimentation. And it’s not about to stop; that’s just not who we are.
If we can’t produce anything but generic work, other creatives will. The web will get better and it’s our job to make it better. It won’t be easy, but if we don’t adapt our practices and techniques, we’ll have to give way to people who can get it done better than we can — but web design itself isn’t going anywhere any time soon.
It’s up to us to decide whether we keep separating ourselves into small camps, or build the web together, seeking pragmatic solutions that work well within given contexts. We might not end up with a perfect solution every time, but we’ll have a great solution still; and more often than not it’ll be much, much better than the solution our client came to us for in the first place.
Welcome to part 7 of our Online Marketing series that will cover online video marketing. We will not discuss how to shoot a viral video but rather how online video marketing works and how best to approach it, complete with some general ideas and tips. We have seen advertising or marketing video spots on TV and movie screens for decades. But the scope of the internet, especially the rise of the social networks, has increased the possibilities for video marketing dramatically. Nowadays, instead of cool TV spots it’s viral online videos that become watercooler talk (a notable exception being the Super Bowl commercials). As an entrepreneur you could have many kinds of video produced: promotional videos, image films, corporate films, training videos, product features, tutorials and many more. Professional videos captivate the viewer with a high-quality mix of information and emotion – reflecting well on your brand and your offers. What is Online Video Marketing? Online video marketing distributes videos on the internet to spread or support PR, marketing or sales messages. You could, for example, upload a video to as many platforms as possible to reach as many people from your target group as possible. For instance, a product […]
Developer relations is an integral part of many software companies that hope to win the hearts and minds of developers. You may refer to it as developer evangelism or community outreach but ultimately, it’s a motion dedicated to ensuring that you’re proactively listening to what the community needs and looking to see how you can help, providing a conduit for developers to offer you feedback and have an opportunity to share your vision with the community and hopefully solve some of their problems. In my opinion, this is absolutely the right order to be driving on since it’s important to always think of the needs of the community. But the problem with developer relations is that it’s a subjective, somewhat nebulous field that in most cases doesn’t involve tangible “things”. This can make it hard to measure how successful you or your team are and if you’re hitting the mark with your community. What do Developer Advocates do? From my experience and through many discussions with my peers, the typical developer advocate tends to focus on several key outreach mechanisms to engage with developers. These are: Social media engagement, primarily Twitter Content generation via blogs or 3rd party sites such […]
Tom Fishburne shares a pretty funny comic on how to give and receive feedback. While this is from a Marketing standpoint, we can (as designers) also learn from this.
For us in the creative industry, getting buy-in for our ideas or concepts is paramount. I’ve personally experienced every one of these types of feedback, sometimes delivered in a rather unpleasant manner. As designers and design thinkers, we have to seek creative ways to deal with such feedback that go beyond just doing good work.
Often this includes being vigilant with meeting minutes or what agencies call “client contact reports”, identifying roles and responsibilities very early in the project, ensuring you understand the needs of all direct and indirect stakeholders and finally building a good rapport with your client to tease all of this information out.
This is a really nice and timely reminder to all, including myself.