In order for a message to be transmitted clearly, many factors play a role. Depending on the type of communication, factors such as stuttering, poor grammar, misaligned letters, incorrect punctuation, and mumbling separate unclear messages from clear ones. In spoken language, clear diction, proper intonation, a calm tempo, and a well-controlled voice are all skills that anyone who wants to be a good communicator has to develop. In written language, readable calligraphy and the correct alignment of letters, words, sentences, and paragraphs make any text accessible to the reader.
Why is all this important? Designers have a major responsibility to make written communication possible without any obstacle. There are a few notions any designer should be familiar with and able to work with: kerning, leading, and tracking.
What is Kerning?
Kerning is the stylistic process that made you read the first word of this sentence as “KERNING” and not “KEMING”. You’ve probably already guessed it: kerning is the act of adjusting the space between two letters to avoid an irregular flow of words and to improve legibility.
CSS is what gives every website its design. Websites sure aren’t very fun and friendly without it! I’ve read about somebody going a week without JavaScript and how the experience resulted in websites that were faster, though certain aspects of them would not function as expected.
But CSS? Surely turning off CSS while browsing the web wouldn’t make it far less usable… right? Or, like JavaScript, would some features stop working as expected? Out of curiosity, I decided to give it a whirl and rip the CSS flesh off the HTML skeleton while browsing a few sites.
Why, you might ask? Are there any non-masochistic reasons for turning off CSS? Heydon Pickering once tweeted that disabling CSS is a good way to check some accessibility standards:
Common elements like headings, lists, and form controls are semantic and still look good.
A visual hierarchy is still established with default styles.
The content can still be read in a logical order.
Images still exist as <img> tags rather than getting lost as CSS backgrounds.
A WebAIM survey from 2018 reported that 12.5% of users who rely on any sort of assistive technology browse the web with custom stylesheets, which can include doing away with every CSS declaration across a site. And, if we’re talking about slow internet connections, ditching CSS could be one way to consume content faster. There’s also the chance that CSS is disabled for reasons outside our immediate control, like when a server has hiccups or fails to load assets.
As an experiment, I used five websites and a web app without CSS, and this post will cover my experiences. It wound up being a rather eye-opening adventure for me personally, but has also informed me professionally as a developer in ways I hope you’ll see as well.
But first, here’s how to disable CSS
You’re absolutely welcome to live vicariously through me in the form of this post. But for those of you who are feeling up to the task and want to experience a style-less web, here’s how to disable CSS in various browsers:
Chrome: There’s actually no setting in Chrome to disable CSS, so we have to resort to an extension, like disable-HTML.
Firefox: View > Page Style > No Style
Safari: Safari > Preferences... > Show Develop menu in menu bar. Then go to the Develop dropdown and select the “Disable Styles” option.
Opera: Like Chrome, we need an extension, and Web Developer fits the bill.
Internet Explorer 11: View > Style > No style
I couldn’t find a documented way to disable CSS in Edge, but we can remove CSS from it and any other browser programmatically via the CSS Object Model API in the DevTools console:
// Disable every external and internal stylesheet
for (const sheet of document.styleSheets) {
  sheet.disabled = true;
}
// Strip the style attribute from every element that carries one
for (const el of document.querySelectorAll("[style]")) {
  el.removeAttribute("style");
}
The first loop disables all external and internal styles (those loaded via <link> and declared in <style> elements), and the second eliminates any inline styles. The caveat here, however, is that elements can still dynamically be given new inline styles. To instantly erase them, the best workaround is adding a timer. Something like this:
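The original timer snippet didn’t survive formatting here, so what follows is a reconstruction under assumptions: the keepInlineStylesCleared name and the 500ms interval are made up for illustration, and the helper takes the document as a parameter so it can be exercised outside a browser.

```javascript
// Hypothetical helper (the name and interval are illustrative, not from
// the original article): repeatedly strip inline styles so that anything
// a script re-applies gets wiped out again on the next tick.
function keepInlineStylesCleared(doc, intervalMs) {
  const sweep = () => {
    for (const el of doc.querySelectorAll("[style]")) {
      el.removeAttribute("style"); // drop the whole style attribute
    }
  };
  sweep(); // clear once right away
  return setInterval(sweep, intervalMs); // then keep clearing on a timer
}

// In a browser console you would run: keepInlineStylesCleared(document, 500);
```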
Alternatively, there are text-only browsers — such as the ancient Lynx — but expect to be living without video, images (including SVGs), and JavaScript.
Through the style-less looking glass…
For each site I surfed without CSS — Amazon, DuckDuckGo, GitHub, Stack Overflow, Wikipedia, and a contrast checker called Hex Naw — I’ll share my first impressions and put some suggestions out there that might help with the experience.
Get ready, because things might get a bit… appalling.
Website 1: Amazon.com
There’s no real need for an introduction here. Not only is Amazon a household staple for so many of us, it also powers a huge chunk of the web, thanks to their ubiquitous Amazon Web Services platform.
There’s a vast number of things going on here, so I’ll explore the style-less stuff that gets in my path while finding a product and pretending to purchase it.
On the homepage, I immediately see a sprite sheet used by the site. It sits right where the logo would be, making it tough to know whether those images are intended to be there. Each sprite contains multiple versions of the logo, and even if I can see the “Amazon” word mark in it, it’s surprising that it’s not the global home link. If you’re curious where the home link really is, it’s a structure of spans where the logo is served up as a background image… in CSS:
The next problem that arises is that the “Skip to main content” link doesn’t look like a typical skip link, yet it works like one. It turns out to be an <a> element without an href, and JavaScript (yes, I did leave that enabled) is used to mimic anchor functionality.
When I start a search, I have to look further below the “Get started” link to see the suggestions. Under the “Your Lists” and “Your Account” items, it becomes difficult to tell the links apart. They appear all strung together as if they were one super long mega link. I believe it would have been more effective to use a semantic unordered list in this scenario to maintain a sense of hierarchy.
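One way to sketch that suggestion: a plain unordered list keeps each link on its own line even with styles stripped (the URLs here are invented, not Amazon’s actual paths):

```html
<ul>
  <li><a href="/lists">Your Lists</a></li>
  <li><a href="/account">Your Account</a></li>
</ul>
```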
Under all those search suggestions, however, the account and navigation links are easier to read since they’re separated by some space.
Interestingly, the carousel lower down the page is still somewhat functional. If I click the “Previous page” or “Next page” options, the order of the images is changed. However, hopping between those options required me to scroll.
Skipping down a bit further, there’s an advertisement element. It contains an “Ad feedback” string that looks static just like what we saw with the “Skip to main content” link earlier. Well, I clicked it anyway and it revealed a form for sharing feedback on the advertisement relevance.
You may have missed it, but there’s a blank button above the two groups of form labels, and the radio buttons are out of place. The structure is confusing because I don’t know which labels belong to which radio buttons. I mean, I guess I could assume that the first label goes with the first radio input, but that’s exactly what it is: a guess.
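Explicitly associating each label with its radio button would remove that guesswork, with or without CSS. A minimal sketch (the id, name, and label text are invented, not Amazon’s actual markup):

```html
<input type="radio" id="reason-irrelevant" name="ad-feedback" value="irrelevant">
<label for="reason-irrelevant">This ad is not relevant to me</label>
```

Clicking the label now toggles its radio button, and screen readers announce the pairing.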
What’s also confusing is that there are Submit buttons between the “Close Window,” “Cancel,” and “Send Feedback” options at the bottom of the form. If I press any of these, I’m taken back to the ad. Now, suppose I were blind and using a screen reader to navigate this same part, even with the presence of CSS. I would be told “Submit, button” for two of the buttons and would therefore have zero clue what to do without guessing. It’s another good reminder about the importance of semantics when handling markup (button labels in this case) and being mindful of how much reliance is placed on JavaScript to override web defaults.
Doing a search — let’s say for “Mac Minis” — I can still access and understand the product ratings since they are displayed as text (instead of the tooltips they are otherwise) in place of stars. This is a good example of using a solid textual fallback when an image is used as visual content, but is served as a background image in CSS.
Having chosen the Mac Mini with Intel Core i3, I’m greeted by other Mac products above the product I’ve selected and have to navigate beyond them to select the quantity I want to purchase.
Scroll down, and an “Add to Cart” button is displayed next to a label bearing the same content. That’s redundant and probably unnecessary since a <button> element is capable of holding its own label:
<button>Add to Cart</button>
Next up, we have an offer for an Amazon Prime membership. That’s all fine and dandy, but notice that it’s inserted between the product I’m purchasing and the “Buy Now” button. I have a really hard time knowing whether clicking “Buy Now” is going to add the Mac Mini to checkout, or whether I’m purchasing Amazon Prime instead.
I also wanted to play around a bit, so I tried removing the Mac Mini from my cart once I figured out how to add it. It took me a good ten seconds to locate the cart so I could edit it. Turns out it was directly next to the “Proceed to checkout (1 item)” link, but it rams right up alongside it so the two look like a single link.
Overall, it wasn’t difficult to find a product. On the other hand, the path to checkout became more of a headache as I proceeded. There were some poor semantic- and accessibility-related practices that caused confusion, and important buttons and links became more difficult to find.
What the Site Does Well
Carousels are functional even without styling.
The content hierarchy is still generally helpful for knowing where we are on a page.
The order of elements remains roughly intact.
Great use of fallbacks for product ratings that rely on background images.

What the Site Can Improve
The logo relies on a background image, obscuring the path back home.
Many links and anchors rely on JavaScript and do not appear to be interactive.
Links often bump up against each other or are placed outside where they would be relevant.
Button labels are either misleading or repetitive.
Form elements fail to align themselves properly.
There’s a rough journey to check out.
Website 2: DuckDuckGo
Have you used DuckDuckGo before? I assume many folks reading CSS-Tricks have, but for those who may be hearing of it for the first time, it’s an alternative to Google search with an emphasis on user privacy.
So, getting started with this is a little misleading because the DuckDuckGo homepage is super simple. Not much can go wrong there, right? Well, it’s a little more involved than that since we’re dealing with search results, content hierarchy and relevance once we get into making search queries.
Right off the bat, what I’m greeted with is a lot more content than I would have expected for such a simple lander. At first, it’s not totally clear what website this is just by scanning the page. The first mention of the product name is the fourth item in the first unordered list, and it’s a call to action to “Spread DuckDuckGo.” The logo is totally missing, which obviously means it’s used as a background… in CSS.
Speaking of that unordered list, I assume what I’m seeing belongs in the header, and there’s no skip navigation. We have a triple arrow icon (is that a mobile menu or a menu to hide the least important items, or something else?), followed by privacy-related content, social media links, something that looks like one link but is actually two links for “About DuckDuckGo” and “Learn More.”
Finally, toward the very bottom is where the primary use case for the site actually comes up: the search bar. I assume the “S” label means “Search” and the “X” label is shorthand to clear the search field.
Alright, onto performing a search. It’s super cool that I can still see auto-suggestions and use the up and down arrow keys to highlight each one. Clearing the field though, the suggestions don’t disappear until after I refresh the page.
Everything in the Settings menu is an item in a list, including what should be headings — “Settings,” “Privacy Essentials,” “Why Privacy,” “Who We Are,” and “Keep in Touch.” These are very likely part of a mobile menu when CSS is enabled, perhaps triggered by that triple arrow link thing at the top. In that menu, I see four blank bullet points between “Settings” and “More Themes.”
Coming here as a new user, I have no idea what those empty list items are, but the bullets I highlighted in the screenshot above are actually the theme buttons. To clarify the intent, some fallback text would be helpful, and these should be radio or normal buttons instead of list items (considering their functionality).
Every block of content with an “X” — including the “Settings” — cannot be dismissed; however, clicking the “X” above the image of a hiker does cause a chunk of content to clear off the screen — thanks to JavaScript still being enabled. What I really find awkward is the redundant numeration in the ordered list under “Switch to DuckDuckGo…” We see this:
1. 1We don't store your personal info
2. 2We don't follow you around with ads
3. 3We don't track you. Ever.
Looks like some mixed use case of semantic markup with some other way to display list item numbers.
There’s a colossal amount of white space under the hiker image until the first element that follows it. Assuming they’re either links or buttons, clicking every instance of “Add DuckDuckGo to [browser]” does nothing. Each section’s illustration causes some unnecessary horizontal scrolling, which is a common issue we’ll see in the other sites we look at.
After those sections, there’s a blank box and I have no idea what it is.
I cracked open DevTools, and it turns out to be an element inside an iframe that holds only JavaScript for something related to POST requests. It might as well be one of those elements we should leave alone.
Following that, I see two repeated instances of “Set as Default Search Engine” wrapped around a “Set as Homepage” section.
These must have been the instructions that popped up when I clicked the “Add DuckDuckGo…” actions, but it shows the impact hiding and showing content can have when we’re dealing with straight markup. Instead of repeating content, the corresponding links or buttons should point to one instance. That would cut the redundancy here.
OK, time to finally get into search. The first thing I see in the search results is an empty box with an instruction to ignore the box. Okey-dokey then.
Moving on, did you see that DuckDuckGo link? That must be the logo, and I wonder why this was not on the homepage. Seems like low-hanging fruit for improvement.
The search bar still functions normally with the exception of the “S” and “X” buttons that have swapped places from where they were on the homepage.
Onto the search results. I could easily distinguish one result from another. What I found quite unnecessary, yet funny, is the “Your browser indicates if you’ve visited this link” messaging located at the end of each page title. That would be super annoying from a screen reading perspective; imagine hearing it repeated at the end of every page title. That messaging is meant to accompany checkmarks whose tooltips hold it. But, with CSS disabled, well, no checkmarks and no tooltips. As a result, all I get is an extra long heading.
The navigation bar that is normally displayed as tabs to filter by different types of results (e.g. Images) seems to do nothing at this point because it’s hard to tell that they are filters without styling. But if I click on the Images filter, the image results are actually loaded lower down onto the page, piled right on top of the Web results, and the page becomes mega long as a result. Oh, and you might think that scrolling all the way back up (and it’s a long way up) then clicking another filter, say Videos, would replace the images, but that simply inserts video thumbnail images below the images making an already mega long page a super mega long page. Imagine the page weight of all those assets!
Well, you don’t have to. According to DevTools, images alone account for 831 requests and a total weight of 23.7 MB. Hefty!
The last couple of items are worth noting. Clicking the “Send feedback” link apparently does nothing. Maybe that triggered a modal with CSS? And, although the “All Regions” link does not resemble a link and I could’ve easily ignored it, I was curious enough to click it and was taken to an anchor point of a list of countries. The last two links just made their corresponding contents appear under the list of country options.
There’s a lot going on here and there are clearly opportunities for improvement. For example, there are calls to action that display as normal text that should either be links or buttons instead. Also, we’d think the performance of a site would get better with CSS disabled, but all those loaded assets in the search results are prohibitive. That said, the search experience isn’t painful at all… that is, unless you’re digging into images or videos while doing it.
What the Site Does Well
Search is consistent and works with or without CSS.
The content hierarchy makes content easy to read and search results a clean experience.
Good use of a homepage link at the top of the search results page.

What the Site Can Improve
A “skip” link would help with keyboard browsing.
Non-link items in the “Settings” menu should be headings for separate unordered lists so there is a clear hierarchy for how the options are grouped.
Some content is either duplicated or repeated because the site relies on conditionally showing and hiding content.
Make sure that all calls to action render as links instead of plain text.
Use a fallback solution to filter the types of search results to prevent items stacking and help control hefty page weight.
Website 3: GitHub
Hey, here’s a site many of us are well familiar with! Well, many of us are used to being logged into it all the time, but I’m going to surf it while logged out.
Already, there’s a skip link (yay). There’s also a mobile navigation icon that I expect will do nothing, and am proven right when I try.
Between some of the navigation items, there are unnecessarily giant gaps. If you click on these, they still function as dropdown menus. They are <details> and <summary> elements… but something feels semantically wrong. It is nice that the menu items are actually unordered list items and that native browser functionality can still expand the content in a semantic way. But that SVG icon messes with me.
Before typing anything into the field, I see three instances of “Search All GitHub” and “Jump to” links. I have no idea which to click, but if I do a search, the keyword shows up in the third group.
Everything else on the homepage seems fine except for a number of overly large images horizontally overflowing the window.
Let’s go back to the search bar and navigate to any repo we can find. Right under the Search button, we have two nearly identical secondary navigation bars that return the repository counts, code, commits, and other meta. Without looking at the source, I have no clue what the purpose is for having two of these.
Repository pages still have an easy-to-follow structure and a logical hierarchy for the most part. Even while logged out and with my cache cleared beforehand, the “Dismiss” button for the “Join GitHub today” block still performs as I’d expect. Like we saw earlier on Amazon, the tag links are difficult to tell apart because they run together as a single line.
The next two buttons — “JavaScript” and “New Pull Request” — don’t seem to do anything when I click them. I’d imagine the pull request button should be disabled while viewing as a guest, unless it’s intended to take the user to a log in screen first… but even that doesn’t feel right. Turns out that the button is indeed disabled when CSS is active. The rest of the page is fairly easy to understand.
If you’re here mainly for managing, contributing to, or checking out repositories, you won’t face a whole lot of friction since the hierarchy plays out well. You’ll experience pretty much the same elsewhere, whether you’re looking at pull requests, issues, or individual files. Most of the hurdles live in less prominent pages on the site.
What the Site Does Well
The hierarchy and structure of many pages are really easy to follow and make logical sense.
Most of the SVG icons embedded on the page are appropriately sized.
Nice use of a skip link in the header.

What the Site Can Improve
Use the height and width attributes on <img> elements and SVGs to prevent them from blowing up.
Watch for empty list items.
Ensure that button labels use full words.
Make sure links have whitespace or line breaks between them to prevent run-ons.
Website 4: Hex Naw
This next site is an online tool I use often to check color contrasts for accessibility. And for a site that is so big on color, there’s probably a lot happening here with CSS, so it should get interesting.
There’s immediately a large amount of space above the navigation and no skip links. The hamburger and close buttons for the mobile layout and “X” buttons next to each color to test are oversized.
Oh, and check out this giant gap between the “Test Colors” button and the next section of content.
One of the many nice features of this site is a checkbox that allows you to see only the colors that passed the test, rather than viewing all of the tested colors. Unfortunately, that button does nothing with CSS disabled. However, I can still see which colors work and get the definitions for contrast ratio, large text, and small text directly in the result table.
Hiding and showing the terms is probably what the button does with CSS. The bummer is that I won’t know the purpose of those single letters (e.g. S and R) after the table headers. It’s also both ironic and confusing to see the message for all failing colors after the table because, well, there are passing colors in this list. What could be done is hide that message by default and conditionally inject it only when all the colors in a single test fail.
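That conditional messaging could be sketched like this (the function name and message text are assumptions, not Hex Naw’s actual code):

```javascript
// Hypothetical sketch: only return the "all failing" message when every
// color in a single test actually failed the contrast check.
function allFailingMessage(results) {
  // results: array of booleans, true when a color passed the test
  const allFailed = results.length > 0 && results.every((passed) => !passed);
  return allFailed ? "All of the colors in this test failed." : "";
}
```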
Pulling out DevTools, it turns out some of the white space at the top is the Hex Naw logo as an SVG file. The space above that is associated with other SVG symbols used for the page. Using a default fill color of black for the logo would help reduce some of the space. I made that quick change in DevTools and it makes a noticeable difference.
The second gap of space is caused by an SVG loader that appears while calculating color contrasts. This could be helped by specifying a much smaller, yet proportional, width and height exactly like the mobile menu and “X” icons.
Adding an initial width and height to each SVG would definitely reduce the need to scroll. This is also what we can do to fix the gaps we saw in GitHub’s navigation as well.
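In markup, that fallback is just the presentational attributes on the SVG itself. A sketch, assuming a 24×24 icon:

```html
<!-- The width/height attributes keep the icon at 24x24 even when CSS is gone -->
<svg width="24" height="24" viewBox="0 0 24 24" role="img" aria-label="Close">
  <path d="M4 4 L20 20 M20 4 L4 20" stroke="currentColor" stroke-width="2" fill="none" />
</svg>
```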
Ultimately, Hex Naw remains pretty useful without CSS. I can still test colors, get passing and failing color results, and navigate around the page. It’s just too bad I wasn’t able to work with actual colors and had to work around those extra large SVG icons.
What the Site Does Well
The site maintains good content hierarchy throughout.
All of the elements are written semantically.
The tests themselves function properly, with the exception of being able to show or hide information.

What the Site Can Improve
SVGs should use a fallback fill color and the height and width attributes.
Feedback for all failing colors could be dynamically added and removed to prevent awkward messaging.
Consider an alternative way to display color for the values being tested, like table cells with the background color attribute.
Website 5: Stack Overflow
Like GitHub, Stack Overflow is one of those resources that many (if not most) of us keep in our back pocket because it helps us find out whether someone has already asked a development question, along with the answers to it.
On the page to ask a question, I see a bunch of blank bullet points above the main element. I have no idea why those empty list items are there. I don’t see any of the formatting buttons either, but after messing around a bit, I found that they happen to be nothing more than blank list items. Perhaps fallback text or an SVG icon for each item would help identify what these are and do. They should be turned into real buttons as well.
It’s also still possible to get a list of similar questions while entering text into the title field. Everything works here as expected, which is nice. Although, it is strange that the vote count for each suggested question appears twice, once above the title as a link and again next to the title without being linked.
One of the key elements we all look for when landing on a Stack Overflow question page is that big green checkmark that indicates the correct answer out of all the submitted answers. But with CSS turned off, it’s hard to tell which answer was accepted because each answer in the list has a black checkmark. Even if the accepted answer is always at the top, there’s still no alternative or fallback indication without having to interact with the page. Additionally, there’s no indication if you have already up voted or down voted the question or any of the answers.
To sum up my experience on Stack Overflow, I was able to accomplish what I normally come to the site for: finding answers to a programming problem. That said, there were indeed a few opportunities for improvement and this site is a prime example of how design often relies on color to indicate hierarchy or value on a page, which was sorely missing from the question pages in this experiment.
What the Site Does Well
Almost every element is written semantically.
SVG icons use the width and height attributes.
Lists of answers are clear and easy to scan.

What the Site Can Improve
Use clear controls to identify editing tools while asking or answering questions.
Consider a visual icon to distinguish the accepted answer from any other answers to a question.
Consider a different method to indicate an up vote or a down vote besides color alone.
Website 6: Wikipedia
Wikipedia, the web’s primary point of reference! It’s an online staple and one of its appealing qualities is a sort of lack of design. This should make for an interesting test.
A few links down, we have a skip navigation option for the real navigation and search. The homepage header containing the globe image maintains its two-column layout, and you may have guessed why: this is a table layout. While it may not be a usability issue, we know it isn’t semantic to rely on tables to create a layout. That’s a relic of days past, before we had floats, flexbox, grid, or any other way to handle content placement. That said, there are no noticeable usability issues or confusing elements on the page.
Let’s move on to what many of us spend the most time on in Wikipedia: an article entry. This is often the entry point to Wikipedia, especially for those of us that start by typing something into a search engine, then click on the Wikipedia search result.
The bottom line is that this page is still extremely usable and hierarchical with CSS disabled. The layout goes down to a single column, but the content still flows in a logical order and even maintains bits of styling, thanks again to a reliance on tables and inline table properties.
One issue I bumped up against is the navigation. There is a “Jump to navigation” link in the header which indeed drops me down to the navigation when I click it. In case you’re wondering, the navigation is contained in the footer, which is the reason for needing to jump to it.
There are seemingly random checkboxes above a couple of the navigation headings (specifically for “Variants” and “More”) and they appear to serve no purpose, although the checkbox above “More” becomes visible at a certain viewport width when CSS is enabled.
There actually is one odd thing in the navigation, and it’s a label-less button between the “In other projects” and “Languages” headings.
Clicking that button, I’m still able to access the language settings, and they mostly work as expected. For example, the settings maintain a tabbed layout, which is super functional.
In the Display tab, however, the “Language” and “Fonts” buttons do nothing. They probably are tabs as well, but at least I can see what they offer. Beside those buttons are two empty select menus that do absolutely nothing (the first one does become populated with ComicNeue, OpenDyslexic, and System font options when you check the checkbox). Looking at the “Input” tab, the writing language buttons still happen to function as tabs. I’m still able to select options other than English, Spanish, and Chinese.
The articles aren’t difficult to read at all without CSS and that’s because nearly every element is semantically correct and follows a consistent document hierarchy. One thing I did wonder was where the “Show/Hide” button that’s normally in the table of contents went. It turns out to be a lone checkbox, and the label is fake — it uses the content property on a pseudo-element in CSS to display the label.
Another issue in articles is that you have to spend time hunting images down when previewing them. Normally, clicking an image in the article sidebar will trigger a full-screen modal that contains a carousel of images. Without CSS, that carousel is gone and, in its place, is the image with a row of unlabeled buttons above it. That’s a bummer, but it would be perfectly OK if the carousel weren’t all the way down the page, opposite of where the clicked image is at the top, with no way to jump down to it.
I’d be remiss if I didn’t mention that the Wikipedia logo was nowhere to be found on the article! It’s not even a white SVG on white. The link actually contains nothing:
<a class="mw-wiki-logo" href="/wiki/Main_Page" title="Visit the main page"></a>
Thankfully, the “Main page” link under “Navigation” is another way back home without pressing the browser Back button. Still, it feels odd to have no branding on the page when it does such a great job of it on the homepage.
Wikipedia’s HTML issues mostly live in features I expect are used less often than articles, and they never hampered my reading experience.
What the Site Does Well
The site maintains a clean structure and hierarchy.
Skip links are used effectively for search and navigation.
The article content is semantic and easy to read.
What the Site Can Improve
The logo placement could be moved (or added, in some cases) to the top of the page without a CSS background image.
Buttons should include labels.
The image carousel on pages could load where the trigger occurs and use proper button labels for the controls.
Ways to make CSS-less a better experience
CSS is a key component of the modern web. As we’ve seen up to this point, there are a number of sites that become next to unusable without it — and we’re counting some of the most recognizable and well-used sites in that mix. What we’ve seen is that, at best, the primary purpose of a site can still be accomplished, but there are hurdles along the way. Things like:
missing or semantically incorrect skip links
links that run together
oversized images that require additional scrolling
empty elements, like list items and button labels
Let’s see if we can compile these into a sort of list of best practices to consider for situations where CSS might be disabled or even unavailable.
Include a skip navigation link at the top of the document
Having a hidden link to skip the navigation is a must. Notice how most of the sites we looked at contained navigation links directly in the header. With CSS turned off, those navigations became long lists of links that are hard for any user to tab or scroll through. A link to skip past them makes that experience much better.
The most basic HTML example I’ve seen is an anchor link that targets an ID where the main content begins.
<a href="#main">Skip to main content</a>
<!-- etc. -->
<main id="main"></main>
And, of course, we can throw a class name on that link to hide it visually so it is not displayed in the UI but still available for both keyboard users and when CSS happens to be off.
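One common way to do that, sketched here with a hypothetical .skip-link class name, is the visually hidden pattern:

```css
/* Hypothetical class: hides the link off-screen but keeps it focusable */
.skip-link {
  position: absolute;
  width: 1px;
  height: 1px;
  overflow: hidden;
  clip: rect(0, 0, 0, 0);
  white-space: nowrap;
}

/* Reveal the link when a keyboard user tabs to it */
.skip-link:focus {
  position: static;
  width: auto;
  height: auto;
  clip: auto;
  white-space: normal;
}
```

When CSS is off, the class does nothing and the link simply renders at the top of the page, which is exactly what we want.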
Another pain point we saw in a few cases were text links running together. Whether it was in the navigation, tags, or other linked up meta, we often saw links that were “glued together” in such a way that several individual links appeared to be one giant link. That’s either the result of hand-coding the links like that or an automated build task that compresses HTML and removes whitespaces in the process. Either way, the HTML winds up like this:
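Something along these lines (the link names here are made up for illustration):

```html
<!-- No whitespace between the list items, so without CSS
     the links render as one glued-together run of text -->
<ul><li><a href="/">Home</a></li><li><a href="/about">About</a></li><li><a href="/contact">Contact</a></li></ul>
```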
We can keep the freedom to use spaces or line breaks though, even with CSS disabled. One idea is to lean on flexbox for positioning list elements when CSS is enabled. When CSS is disabled, the list items should stack vertically and display as list items by default.
If the items are tags and should still be separated, then traditional spacing methods like margins and padding are still great, and we can rely on natural line breaks in the HTML to help with the style-less formatting. For example, line breaks in the HTML separate the items, flexbox removes the spaces, and CSS then re-separates the items:
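A sketch of that idea, assuming a hypothetical .tags class name:

```html
<!-- Line breaks between items keep the tags readable without CSS -->
<ul class="tags">
  <li><a href="/tag/html">HTML</a></li>
  <li><a href="/tag/css">CSS</a></li>
  <li><a href="/tag/svg">SVG</a></li>
</ul>

<style>
  /* With CSS enabled, flexbox collapses the whitespace
     and the margin re-separates the items */
  .tags { display: flex; list-style: none; padding: 0; }
  .tags li { margin-right: 0.5em; }
</style>
```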
The biggest nuisance in this experiment may have been images exploding on the screen to the point that they dominate the content, take up an inordinate amount of space, and result in a hefty amount of scrolling for all users.
The fix here is rather straightforward because we have HTML attributes waiting for us to define them. Both images and SVG have attributes for explicitly defining their width and height.
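For example (the file name and dimensions here are arbitrary):

```html
<!-- Explicit width and height keep the image a sane size without CSS -->
<img src="logo.png" alt="Site logo" width="200" height="50">

<!-- The same attributes work on inline SVG -->
<svg width="16" height="16" viewBox="0 0 16 16" aria-hidden="true">
  <!-- icon paths here -->
</svg>
```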
Many of the large gaps on the sites we looked at looked like empty space, but they were really white SVGs that blew up to full size and blended into the white background.
So, yes, using the proper width and height attributes is a good idea to prevent monstrous icons, but we can also do something about that white-on-white situation. Using properties like fill and fill-rule as attributes will work here.
<!-- Icon will be red by default -->
<svg viewBox="-241 243 16 16" width="100px" fill="#ff0000">
<path d="M-229.2,244c-1.7,0-3.1,1.4-3.8,2.8c-0.7-1.4-2.1-2.8-3.8-2.8c-2.3,0-4.2,1.9-4.2,4.2c0,4.7,4.8,6,8,10.6 c3.1-4.6,8-6.1,8-10.6C-225,245.9-226.9,244-229.2,244L-229.2,244z"/>
</svg>
/* ...and it's still red when CSS is enabled */
svg {
fill: #ff0000;
}
Lastly, if buttons are initially empty, they need visible fallback content. If a button uses a background image and a title for what it does, add a span containing the title text with aria-hidden="true" so the screen reader doesn’t read the button label twice (e.g. VoiceOver saying, “Add button Add” instead of “Add button”).
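A sketch of that pattern for a hypothetical “Add” button:

```html
<!-- The span is the visible fallback when the background image is gone;
     aria-hidden="true" keeps screen readers from announcing "Add" twice -->
<button class="icon-button" title="Add">
  <span aria-hidden="true">Add</span>
</button>
```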
It can be easy to either forget or be afraid to check how a site appears when CSS isn’t available to make the UI look as good as intended. After a brief tour of the Non-CSS Web™, we saw just how important CSS is to the overall design and experience of sites, both small and large.
And, like any tool in our set, leaning too heavily on CSS to handle the functionality and behavior of elements can lead to poor experiences when it’s not around to do its magic. We’ve seen the same be true of sites that lean too heavily on JavaScript. This isn’t to say that we shouldn’t use or rely on these tools, but to remember that they are not bulletproof on their own and need proper fallbacks to ensure an optimal experience is still available with or without our tooling.
Seen in that light, CSS is really a layer of progressive enhancement. The hierarchy, form controls, and other elements should also remain intact under their user agent styles. The look and feel, while important, is second when it comes to making sure elements are functional at their core.
Ivan Akulov has collected a whole bunch of information and know-how on making things load a bit more quickly with preload and prefetch. That’s great in and of itself, but he also points to something new to me – the as attribute:
Supposedly, this helps browsers prioritize when to download assets and which assets to load.
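For instance, preloading a stylesheet and a script (the file paths here are made up):

```html
<!-- as="..." tells the browser what kind of resource is coming,
     so it can assign the right priority and request headers -->
<link rel="preload" href="/styles/main.css" as="style">
<link rel="preload" href="/scripts/app.js" as="script">
```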
My favorite part of this post is Ivan’s quick summary at the end which clearly defines what all these different tags can be used for:
<link rel="preload"> – when you’ll need a resource in a few seconds
<link rel="prefetch"> – when you’ll need a resource on a next page
<link rel="preconnect"> – when you know you’ll need a resource soon, but you don’t know its full URL yet
Make sure to check out our own post on the matter of prefetching, preloading, and prebrowsing, too. Adding these hints to our links can make for significant performance improvements, so check it out to add more resources to your performance toolbox.
In contrast to the world of print design, our creative process has often been constrained by what is possible with our limited tools. It also has been made more difficult by the unique challenges of designing for the web, such as ensuring that our sites cater well to a diverse range of devices and browsers.
Now, the web isn’t print of course, and we can’t take concepts from sturdy print and apply them blindly to the fluid web. However, we can study the once uncharted territory of layout, type treatment and composition that print designers have skillfully and meticulously conquered, and explore which lessons from print we could bring to our web experiences today.
We can do that by looking at our work through the lens of art direction, a strategy for achieving more compelling, enchanting and engaging experiences. With the advent of front-end technologies such as Flexbox, CSS Grid and Shapes, our creative shackles can come off. It’s time to explore what it actually means.
Art Direction For The Web exists because we wanted to explore how we could break out of soulless, generic experiences on the web. It isn’t a book about trends, nor is it a book about design patterns or “ready-to-use”-solutions for your work. No, it’s about original compositions, unexpected layouts and critical design thinking. It’s about how to use technical possibilities we have today to their fullest extent to create something that stands out.
It’s a book for designers and front-end developers; a book that’s supposed to make you think, explore and bypass boundaries and conventions, to try out something new — while keeping accessibility and usability a priority.
To achieve this, the book applies the concept of art direction — a staple of print design for over a hundred years — to examine a new approach to designing for the web starting from the story you want to tell with your design and building to a finished product that perfectly suits your brand.
Of course, the eBook is free of charge for Smashing Members, and Members save on the regular price, too.
Written by Andy Clarke. Reviewed by Rachel Andrew. Foreword by Trent Walton. Published in April 2019.
The possibilities of art direction on the web go far beyond responsive images. The book explores how to create art-directed experiences with modern front-end techniques.
1. What Art Direction Means
Ask what art direction means to developers, and they might answer: using the picture element or sizes attribute in HTML for responsive images; presenting alternative crops, orientations, or sizes at various screen sizes. But there’s more to it.
2. One Hundred Years Of Art Direction
Bradley, Brodovitch, Brody, and Feitler — together, their names sound like a Mad Men-era advertising agency. In this chapter, we’ll take a look at their iconic works, from the 1930s to the 1980s.
3. Art-Directing Experiences
Whether we write fact or fiction, sell or make products, the way to engage people, create desire, make them happy, and encourage them to stay that way, is by creating narratives. So what do we need to consider when doing so?
4. Art Direction And Creative Teams
Let’s take a look at how we can embrace collaboration and form teams who follow strategies built around common goals.
5. Principles Of Design
Are the principles which have guided design in other media for generations relevant to the world of digital products and websites? Of course! In this chapter, we’ll explore the principles of symmetry, asymmetry, ratios, and scale.
6. Directing Grids
Grids have a long and varied history in design, from the earliest books, through movements like constructivism right up to the present-day popularity of grids in frameworks like Bootstrap and material design. This chapter explains grid anatomy and terminology and how to use modular and compound grids.
7. Directing Type
White space, typographic scale, and creative uses of type are the focus in this chapter.
8. Directing Pictures
Images and how we display them have an enormous impact on how people perceive our designs, whether that be on a commercial or editorial website, or inside a product. In this chapter, you’ll learn how to position and arrange images to direct the eye.
9. Developing Layouts With CSS Grids
CSS Grid plus thoughtful, art-directed content offers us the best chance yet of making websites which are better at communicating with our audiences. In this chapter, Andy explains properties and techniques which are most appropriate for art direction.
10. Developing Components With Flexbox
While Grid is ideal for implementing art-directed layouts, Flexbox is often better suited to developing molecules and organisms such as navigation links, images, captions, search inputs, and buttons. This chapter explores how to make use of it.
11. Developing Typography
From multi-column layout and arranging type with writing modes to text orientation and decorative typography, this chapter dives deep into the code side of type.
12. Developing With Images
How do you fit your art-directed images to a user’s viewport? And what do CSS shapes and paths have in store for your design? Let’s find out in this final chapter.
In his book, Andy shows the importance and effectiveness of designs which reinforce the message of their content, how to use design elements to effectively convey a message and evoke emotion, and how to use the very latest web technologies to make beautifully art directed websites a reality. It goes beyond the theory to teach you techniques which you can use every day and will change the way you approach design for the web.
The book is illustrated with examples of classic art direction from adverts and magazines from innovative art directors like Alexey Brodovitch, Bea Feitler, and Neville Brody. It also features modern examples of art direction on the web from sites like ProPublica, as well as an evocative fictitious brand which demonstrates the principles being taught.
Art Direction for the Web begins by introducing the concept of art direction, its history, and how it is as relevant to modern web design as it has ever been in other media. In Part 1, “Explaining Art Direction”, Andy shows you how to start thinking about all aspects of your design through the lens of art direction.
You will learn how design can evoke emotion, influence our subconscious perception of what we are reading, and leave a lasting impression on us. You will also learn the history of art direction, beginning with the earliest examples as a central component of magazine design and showing how the core philosophies of art direction persist through an incredible range of visual styles and ensure that the design always feels appropriate to the content.
As art direction is often about ensuring the visual design fits the narrative of your content, this section will also give you the practical skills to identify the stories behind your projects, even when they appear hard to uncover.
Finally, this part will teach you that art direction is a process that we can all be involved with, no matter our role in our projects. Strong brand values communicated through codified principles ensure that everyone on your team speaks with the same voice to reinforce your brand’s messaging through art direction.
Part 2, “Designing For Art Direction”
Part 2, “Designing For Art Direction,” covers how to use design elements and layout to achieve visual effects which complement your content. You will learn principles of design such as balance, symmetry, contrast, and scale to help you understand the design fundamentals on which art direction is based. You will also learn how to create interesting and unique layouts using advanced grid systems with uneven columns, compound and stacked grids, and modular grids.
This book also covers how to use typography creatively to craft the voice with which your brand will speak. In addition to a study on how to create readable and attractive body text, this section also explores how to be truly expressive with type to make beautiful headings, stand-firsts, drop-caps, quotes, and numerals.
You will also learn how to make full use of images in your designs — even while the dimensions of the page change — to create impactful designs that lead the eye into your content and keep your readers engaged.
Part 3, “Developing For Art Direction”
The final part of Art Direction for the Web, “Developing For Art Direction,” teaches you the latest web design tools to unshackle your creativity and help you start applying what you have learned to your own projects.
You will learn how to use CSS Grid to create interesting responsive layouts and how Flexbox can be used to design elements which wrap, scale and deform to fit their containers.
This third part will also explore how to use CSS columns, transforms, and CSS Grid to create beautiful typography. You will also learn how viewport units, background-size, object-position, and CSS shapes can create engaging images that are tailored for every device or window width.
Throughout the book, Andy has showcased how art direction can be applied to any design project, whether you are designing for a magazine, a store front, or a digital product.
Testimonials
“On the web, art direction has been a dream deferred. ‘The medium wasn’t meant for that,’ we said. We told ourselves screens and browsers are too unreliable, pages too shape-shifty, production schedules too merciless to let us give our readers and users the kind of thoughtful art directional experiences they crave. But no longer. Andy Clarke’s “Art Direction for the Web” should usher in a new age of creative web design.”
“Andy shows how art direction can elevate your website to a new level through a positive experience, and how to execute these design principles and techniques into your designs. This book is filled with tons of well-explained practical examples using the most up-to-date CSS technologies. It’ll spin your brain towards more creative thinking and give your pages a soul.”
Andy Clarke is a well-known designer, design consultant, and mentor. With his wonderful wife, Sue, Andy founded Stuff & Nonsense in 1998. They’ve helped companies around the world to improve their designs by providing consulting and design expertise.
Andy has written several popular books on website design and development, including Hardboiled Web Design: Fifth Anniversary Edition, Hardboiled Web Design, and Transcending CSS: The Fine Art Of Web Design. He’s a popular speaker and gives talks about art direction and design-related topics all over the world.
Why This Book Is For You
The book goes beyond teaching how to use the new technologies on the web. It delves deeply into how the craft of art direction could be applied to every project we work on.
Perfect for designers and front-end developers who want to challenge themselves and break out of the box,
Shows how to use art direction for digital products without being slowed down by its intricacies,
Features examples of classic art direction from adverts and magazines by innovative art directors like Alexey Brodovitch, Bea Feitler, and Neville Brody,
Shows how to use type, composition, images and grids to create compelling responsive designs,
Illustrates how to create impact, stand out, be memorable and improve conversions,
Explains how to maintain brand values and design principles by connecting touch points across marketing, product design, and websites,
Packed with practical examples using CSS Grid, CSS Shapes and good ol’ Flexbox,
Explains how to integrate art direction into your workflow without massive cost and time overhead.
Art direction matters to the stories we tell and the products we create, and with Art Direction for the Web, Andy shows that the only remaining limit to our creativity on the web is our own imagination.
We hope you love the book as much as we do. Of course it’s art-directed, and it took us months to arrange the composition for every single page. We kindly thank Natalie Smith for wonderful illustrations, Alex Clarke and Markus Seyfferth for typesetting, Rachel Andrew for technical editing, Andy Clarke for his art direction and patience, and Owen Gregory for impeccable editing.
We can’t wait to hear your stories of how the book helps you design experiences that stand out. If, after reading this book, you create something that stands the test of a few years, then the book was worth writing. Happy reading, everyone!
As you draw closer to the finish line with a website, does your client see it just as clearly as you do? Or are they still wavering on design and copy choices even while you’re in the final stages of QA, or talking about additional features they’ll want to add to the site “some day”?
Unless you are getting paid — and paid well — for every single hour you put into a website, you have to be willing to enforce a final stopping point. If you don’t, your client will undoubtedly play the “What about this? Or this?” game for as long as you allow them to.
And you can’t afford to do that. You have other clients whose websites deserve your attention.
Just as you have created an onboarding process to smoothly kick off a new website project, you must do the same with an offboarding process.
Step 1: Collect Your Final Payment
Once the client has given you the approval on the finished website, you push it live. After some light testing to confirm that all is well on the live domain, it’s time to initiate the offboarding process.
You’ll do this by sending along the last invoice. Better yet, your invoicing software should automatically be configured to do this upon reaching the final project milestone.
Because each of these elements exists within the same place, setting up and scheduling invoices based on your project’s milestones (including the launch date) is really easy to do.
Don’t move on to the next steps until you collect the payment due, though. Letting a client go more than seven days past the project’s end without final payment simply invites them to ask you to do more work.
Step 2: Send the Wrap-up Email
Upon confirming receipt of payment, send your client a wrap-up email.
This doesn’t have to be lengthy. The goal is to get them to schedule the closing call as soon as possible. Something like this should work:
Greetings, [client name]! I wanted to thank you for the opportunity to build this website for [company name]. I hope you’re just as pleased with it as I am! I know you’re excited to put this website to work for you now that you have it, but I have just a few things I want to show you as we wrap up. When you have a moment, please go to my Calendly and schedule a 15-minute Wrap-Up Session for some time this week. During this call, I’ll give you a behind-the-scenes tour of your website and show you how to edit your content. Afterwards, I will send along the login credentials you need to manage your website along with all of your design assets. Talk soon.
As I mentioned in the message above, Calendly is the tool I use to simplify my scheduling with clients.
All you have to do is create an event (like “Client Offboarding” or “Client Onboarding”), set up your availability, and then send the link to your clients to pick a time when you’re free. It makes life so much easier.
Step 3: Do the Wrap-Up Video Call
This final call with your client needs to be done over video or, at the very least, a screen-share. For this, I’d suggest using Zoom.
The above example is how I used to do my offboarding calls with WordPress clients.
I’d log into their website and then give them an orientation of all of the key areas they needed to know. I’d show them how to create a post, how to create a page, and explain the difference between the two. I’d also show them important areas like the Media folder, the area to manage Users, and maybe a few other things.
This “training” call is yours to do with as you like. Just make sure the client walks away feeling confident in taking the reins over from you.
Step 4: Deliver the Remaining Pieces
The website is done, you’ve collected the payment, and you’ve had the final call with your client. Now, it’s time to deliver the remaining pieces you owe them.
Logins – If you created any accounts from scratch (e.g. WordPress, web hosting, social media, etc.), send along the login credentials.
Style guide – Did you create a style guide for the client? Package it up in a professional-looking PDF and send it over in case they decide to work with another designer in the future.
Design assets – Again, on the off chance they work with someone else, you’ll want to send along the design assets you created in their native formats.
Licenses – You may have licensed certain assets during this project, like stock photos or design templates. If that’s the case, you’ll need to bill them for the licenses (if you haven’t already) and transfer ownership to them now.
While you could send these along before the wrap-up call, you run the risk of the client taking the materials and running away, only to show up months later wanting to know what all this stuff is, what they’re supposed to do with it, and whether you’ll have time to walk them through the website now.
Or they don’t open any of it and then message you months down the line, urgently demanding access to their site, files, etc. To keep this from happening, clearly label everything and send it along in a shared Dropbox folder.
Even if they lose the link to the Dropbox folder at any point, you don’t have to repackage all their stuff again. You can simply grab the link from your end and resend it.
Step 5: Follow Up in 60 Days
Set a reminder in your project management template to follow up with website clients 60 days after the wrap-up. This will give them enough time to sit with the website and either:
Become really comfortable using it;
Realize it’s too much work.
Either way, it’s a good idea to check in.
If they’re taking good care of the website and using it to promote their business, that’s great. This email will simply serve as a reminder that you remain their trusted ally and you’re here if they ever need anything.
And if they’re not taking care of it, this is an excellent opportunity to offer your assistance in providing (paid) support and maintenance.
Bringing Projects to a Close with an Offboarding Process
Then again, you know how clients can get. They’re so excited to actually have a website now that they can’t stop imagining the possibilities. So long as you’ve delivered what they paid for, though, you are under no obligation to keep this project open to entertain those ideas unless they start a new contract with you.
Use this offboarding checklist to ensure you give each of your web design projects as strong and final a close as possible.
As technology improves, more and more industries are jumping on the bandwagon to leverage it.
One of the later entrants is the finance sector, where so-called fintech companies use technology to offer their services.
Of late, fintech has been catering to a wide array of customers by rolling out solutions that affect nearly everyone. With the right technology in place, these companies have broadened their reach while significantly increasing the flexibility and innovation of financial services.
Alongside the technology already at work in the finance industry, artificial intelligence was introduced as a way to simplify human lives by integrating into day-to-day activities. In the same manner, AI has been integrated with fintech to offer exceptional services in the simplest possible way. But with the increase in digital services and transactions, vulnerabilities have increased significantly too. Not to mention, the cyber threat to online financial transactions remains at an all-time high, and fintech companies are well aware of it. Here’s where AI app development comes into play, introducing smart applications and programs that make it difficult, even treacherous, for cybercriminals to hack into systems.
Like any other technology, AI carries a certain amount of risk, and this is why utmost care must be taken when implementing it for fintech. Conditioning AI against future vulnerabilities is one way to get things started.
Enabling AI to distinguish between types of access
Since AI is designed to remain value-agnostic, teaching it the difference between good and wrongful access is a must as a safety measure against cyber threats. Being value-agnostic, AI is not naturally capable of differentiating an ethical transaction from a wrongly intended one, because the login conditions tend to be the same for every transaction or attempt.
For example, imagine a user moving his or her funds digitally from New York to London, then to Paris, and back again. You can expect a sizeable dip along the way due to differing taxation policies and rates. Usually, a transaction of this nature will go down in the books of accounts as “usual business” under the AI engine’s scanner. In the fintech world, such transactions involve plenty of intricate calculations that reference jurisdictions and statutes. This is where human intervention is required, and that is when the vulnerability is exposed: the AI cannot make discretionary judgments about ethics, so security becomes the primary concern.
Regulating Access Control
One of the major cybersecurity vulnerabilities in the fintech industry involves access control over certain sets of data. When AI is tasked with controlling access to data for security purposes, managing data security becomes difficult. The biggest vulnerability is the ability of cybercriminals and hackers to bypass access controls with the aim of stealing information or even posing as a legitimate individual.
They try to leverage identity theft to engage in phishing schemes. For fintech companies, the first line of defense against unauthorized access is introducing e-signatures as a mandatory protocol for authorizing business transactions. One can also add more complex access controls for advanced security, such as bio-signatures or biometric authorization involving retina or iris scans, fingerprint logins, and so on.
Making the Best Use of AI and RPA for Dynamic Security
AI has limited power to control security because of its scope. Perhaps this is the reason there is a difference between Robotic Process Automation (RPA) and AI. For any security engineer responsible for a fintech company’s security, the line between RPA and AI shouldn’t be blurred at any cost.
One has to move beyond the conventional idea of a single security layer when it comes to fighting the vulnerabilities fintech corporations face. RPA can be used for sets of repetitive tasks that involve protocol verification, ownership verification, and even monitoring to ensure the right balance mechanisms are in place.
And when it comes to security for banks, merely employing an intelligent system as a cybersecurity gatekeeper is not enough. This is because a banking transaction has far more depth than other online transactions, as it involves routing digital currency through multiple devices and storage spaces.
This is where breaching opportunities open up and vulnerabilities are exposed at each point. A static AI solution cannot counter such all-pervasive threats, especially when the fintech industry is experiencing a higher frequency of “zero-day” attacks. In 2018, 76% of zero-day cybersecurity attacks stemmed from sources that were entirely unfamiliar.
Doing away with a defensive, negative attitude
Challenges are everywhere, and when businesses face challenges on the technology front, mindsets differ. Many key business personnel are left feeling that it’s only a matter of time before their systems are hacked. The question boils down to “when will it get hacked” rather than “will it ever”, or better yet, “can it get hacked at all”?
Here’s where half the battle is lost. One has to understand that AI is nothing short of a magic bullet for modern cybersecurity complexities. Fintech sites and transactions will inevitably need AI integrations to ensure safety. The benefits it brings will only grow in the coming times as AI matures and presents a wider scope with deeper security, reducing manual effort and salary costs while ensuring timeliness.
The best way to tackle threats is not to surrender to the idea of a “one-stop solution” when it comes to securing transactions through AI. The key is to look at incremental transformation, taking an example from the likes of PayPal, which stays resilient at scale because it has adopted a continuous testing approach that keeps vulnerabilities in check.
Conclusion
Artificial intelligence has become integral to the fintech industry, and a dedicated mobile app development company plays a major role in bridging the gap.
This is because banking and investment have moved beyond the traditional brick-and-mortar model, with institutions actively encouraging people to move their finances online.
And this is why we see many traditional banks pair up with fintech firms to blend the best of both worlds. Hence, ensuring the safety of all the transactions that take place on their portals becomes the responsibility of the banking and financial institutions.
Any failure in that regard will have customers losing faith in them, and even risks future investment as investors grow wary of operating online.
Customers, too, have evolved when it comes to picking fintech companies: they tend to go for the ones that follow regulations and best practices.
They also consider whether such firms have any history of cyberattacks and check how well the company applies cybersecurity precautions.
Specific design systems, I mean. Design systems, as a concept, are something just about any site can benefit from.
A lot of hype goes into design systems these days. Just the other day, an organization published its design system publicly and I got a slew of DMs, emails, and Slack messages encouraging me to check it out. “Looks good to me,” I said. But I’m merely knocking on the hood of a new car, so to speak. I haven’t sat in it. I haven’t driven it around the block, let alone driven it cross-country or tried to dig Cheerios out from between the seats. I’m sure I’d have more opinions after building a site or 10 with it (excuse the mixed metaphors).
So that leads me to a few questions. Can I build a site with this design system? Should I build a site with it? Is it for me? Or wait… who is this for?
They all have accordions.
Well not all of them, but bear with me, because there is a point to be made.
Bootstrap has an accordion too! Developers totally understand Bootstrap.
Whatever you think of it, I don’t see much confusion around Bootstrap. You link up the CSS, you use the HTML they give you, and boom: you have components that are ready to rock.
It’s possible that Bootstrap is more of a “pattern library” than a “design system.” I dunno. There is probably something to that distinction, but the naming semantics (if there are any) seem to be used interchangeably, so distinguishing Bootstrap as one or the other doesn’t alleviate any confusion.
Developers reach for Bootstrap because…
It helps them build faster.
They get good quality “out of the box” if they aren’t particularly great at HTML and CSS themselves.
They want to be accessible and Bootstrap has been through the accessibility wringer.
[Insert your reason.]
Appealing, yet these seem to be somewhat table stakes for any design system and not exclusive to Bootstrap alone.
Hmmmm… Maybe I’ll have a gander around and choose a non-Bootstrap solution for my next project.
A lot of people are in this boat.
Maybe the next project is React so we want a design system that makes React a first-class citizen. Maybe we had trouble customizing Bootstrap to our liking. Maybe we just saw the default look of another design system and thought that would be a better fit. Maybe we are just bored of Bootstrap. Lots of reasons to look outside of Bootstrap, just as there are lots of reasons to look to it.
Since other design systems have accordions, too, can’t I just… pick one?
Sorta?
One immediate consideration is the license. Salesforce’s Lightning Design System is often pointed to as a leader in the world of design systems and has influenced a lot of the current thinking around them. Yet, it is not open source licensed.
That’s not a problem — it’s probably a good thing for Lightning. It’s not a general-purpose grab-it-and-go with all web developers on Earth as the target audience. It’s for Salesforce and the slew of teams on different development stacks building things for Salesforce. If you’re not building a Salesforce thing, it’s not for you.
Then why is it public and not some internal document for the Salesforce team? I can’t answer for them, but as I understand it, Salesforce is so enormous that they have both internal and external teams using it. So, perhaps making Lightning a public document is the most useful way to make it available to everyone who needs it.
This is something we’re actively discussing on the SLDS team. What exactly does “open source” mean to design systems when the usage is so specific? Is it more “source available”?
There’s also the nice side effect that they get good press for it, and that can’t hurt hiring efforts. I’ve also heard having a public design system can spark interesting and useful conversations.
Carbon is the official implementation of the IBM Design Language for product and web designers, and represents an ever-growing ecosystem of design assets and guidance. With a comprehensive set of human interface guidelines, design kits, and documentation, Carbon helps designers work faster and smarter.
That doesn’t quite tell me what I want to know. It looks like IBM stuff out of the box, so it’s definitely for IBM.
It’s open source so I can use it if I want to. But is it really for me and my random projects? Do they want me to use it for that? Am I, random developer, who they are thinking about with this project? Or is it IBM-first, random developer second?
Company first, the world second.
If a design system is by a company, then it’s for the company. It might also be open source, but any ol’ random developer who wants to use it isn’t the target audience.
It might not even technically be a company who makes it. It could be a government!
One really great design system is the U.S. Web Design System, which just went 2.0. It’s gorgeous! It looks very complete and has some great features. It’s got a classy custom font, it was designed with incremental adoption in mind, it has both useful components as well as utilities, and was built atomically from design tokens. Perhaps the best feature is that it’s extremely accessible because it has to be by law.
The U.S. Web Design System is mostly public domain, so you totally can use it. But it’s not designed with you in mind; it’s designed to help people who make websites for the government.
(By the way, The U.S. Web Design System is open to contribution, which is pretty cool because it’s a way you could make a significant impact on websites that are very important to people’s lives.)
My mindset is that open source design systems are not meant to be reused and spun up for yourself, but to learn from and apply to your own.
Here’s another kicker: There is a spectrum of customizability to design systems, on purpose.
Even if you technically can use a public design system you’ve found and like, you might consider the customizability angle. There is a whole spectrum to this, but let’s consider the extreme edges and middle:
Zero Customizability: We built this to strongly enforce consistency for ourselves.
Pre-Selected Variations: We’ve got accordions in three different colors.
BYO Theme: We’ll give you a skeleton that loosely achieves the pattern and you apply the styles to your liking.
There are design systems at all points on this spectrum. Bootstrap might be in between the last two, where you get a fully styled theme, but customizability largely comes via setting Sass variables and that creates infinite variations.
Polaris, Shopify’s design system, is open source, but definitely for Shopify stuff. They are intentionally not trying to do what Bootstrap does. It’s far more about enforced consistency and adhering to a cohesive brand than it is slapping together and customizing a page.
Material Design is definitely Google’s thing. In its early days, I feel like the messaging was that this is Google’s cohesive design work. But these days, they definitely encourage other developers to use it too. If you do, your thing will look a lot like a Google thing. Maybe that’s what you want, maybe it’s not. Either way, you should know what you’re getting into.
Google’s take on customizability so far is a Sketch plugin. They have no incentive to allow for a level of customization to make things not look like Google, because that would be antithetical to the whole thing.
In case this isn’t obvious (and I very much fear that it isn’t), design systems aren’t a commodity. We don’t get to simply pick the one that has the nicest accordion and use it on the next project. We might not even be allowed to use it. It might be intentionally branded for a specific company. There are all kinds of factors to consider here.
My parting advice is actually to the makers of public design systems: clearly identify who this design system is for and what they are able to do with it.
I’d also like to note that everyone who I’ve brought this up to in the last few weeks has had different opinions about all this target audience messaging stuff in design systems. Of course, I’d love to read your comments about how you feel about it.
A React component goes through different phases as it lives in an application, though it might not be evident that anything is happening behind the scenes.
Those phases are:
mounting
updating
unmounting
error handling
There are methods in each of these phases that make it possible to perform specific actions on the component during that phase. For example, when fetching data from a network, you’d want to call the function that handles the API call in the componentDidMount() method, which is available during the mounting phase.
Knowing the different lifecycle methods is important in the development of React applications, because it allows us to trigger actions exactly when they’re needed without getting tangled up with others. We’re going to look at each lifecycle in this post, including the methods that are available to them and the types of scenarios we’d use them in.
The Mounting Phase
Think of mounting as the initial phase of a component’s lifecycle. Before mounting occurs, a component has yet to exist — it’s merely a twinkle in the eyes of the DOM until mounting takes place and hooks the component up as part of the document.
There are plenty of methods we can leverage once a component is mounted: constructor(), render(), componentDidMount() and static getDerivedStateFromProps(). Each one is handy in its own right, so let’s look at them in that order.
constructor()
The constructor() method is needed when state is set directly on a component, or when methods need to be bound to the component instance. Here is how it looks:
// Once the input component is mounting...
constructor(props) {
// ...set some props on it...
super(props);
// ...which, in this case is a blank username...
this.state = {
username: ''
};
// ...and then bind with a method that handles a change to the input
this.handleInputChange = this.handleInputChange.bind(this);
}
It is important to know that the constructor is the first method that gets called as the component is created. The component hasn’t rendered yet (that’s coming), so this is where we set up initial state. It isn’t the place where we’d call setState() or introduce any side effects because, well, the component is still in the phase of being constructed!
I wrote up a tutorial on refs a little while back, and one thing I noted is that it’s possible to set up a ref in the constructor when making use of React.createRef(). That’s legit because refs are used to change values without props or having to re-render the component with updated values:
The render() method is where the markup for the component comes into view on the front end. Users can see it and access it at this point. If you’ve ever created a React component, then you’re already familiar with it — even if you didn’t realize it — because it’s required to spit out the markup.
class App extends React.Component {
// When mounting is in progress, please render the following!
render() {
return (
<div>
<p>Hello World!</p>
</div>
)
}
}
We can also use it to render components outside of the DOM hierarchy (a la React Portal):
// We're creating a portal that allows the component to travel around the DOM
// (portalRoot is assumed to be an existing node, e.g. document.getElementById('portal-root'))
class Portal extends React.Component {
// First, we're creating a div element
constructor() {
super();
this.el = document.createElement("div");
}
// Once it mounts, let's append our element to the portal root
componentDidMount = () => {
portalRoot.appendChild(this.el);
};
// If the component is removed from the DOM, then we'll remove our element, too
componentWillUnmount = () => {
portalRoot.removeChild(this.el);
};
// Ah, now we can render the component and its children where we want
render() {
const { children } = this.props;
return ReactDOM.createPortal(children, this.el);
}
}
And, of course, render() can — ahem — render numbers and strings…
Does the componentDidMount() name give away what it means? This method gets called after the component is mounted (i.e. hooked to the DOM). In another tutorial I wrote up on fetching data in React, this is where you want to make a request to obtain data from an API.
It’s kind of a long-winded name, but static getDerivedStateFromProps() isn’t as complicated as it sounds. It’s called right before the render() method, both during the initial mounting phase and before every update. It returns either an object to update the state of a component, or null when there’s nothing to update.
To understand how it works, let’s implement a counter component which will have a certain value for its counter state. This state will only update when the value of maxCount is higher. maxCount will be passed from the parent component.
In the Counter component, we check to see if counter is less than maxCount. If it is, we set counter to the value of maxCount. Otherwise, we do nothing.
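Since the Pen demo isn’t included in this excerpt, here’s the heart of that logic pulled out as a plain function so it can run on its own (on the real Counter class, this would be the body of static getDerivedStateFromProps()):

```javascript
// A sketch of the static method's logic: return a state update when
// maxCount is higher than the current counter, or null to leave state
// untouched.
function getDerivedStateFromProps(props, state) {
  if (state.counter < props.maxCount) {
    return { counter: props.maxCount };
  }
  return null; // nothing to update
}

console.log(getDerivedStateFromProps({ maxCount: 10 }, { counter: 5 })); // { counter: 10 }
console.log(getDerivedStateFromProps({ maxCount: 3 }, { counter: 5 }));  // null
```

Returning null is the important detail: it tells React to keep the existing state as-is rather than merging in an update.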
You can play around with the Pen below to see how that works on the front end:
The updating phase occurs when a component’s props or state changes. Like mounting, updating has its own set of available methods, which we’ll look at next. That said, it’s worth noting that both render() and getDerivedStateFromProps() also get triggered in this phase.
shouldComponentUpdate()
When the state or props of a component change, we can make use of the shouldComponentUpdate() method to control whether the component should update or not. This method is called before re-rendering occurs, as new state or props are being received. The default behavior is to return true. To re-render every time the state or props change, we’d do something like this:
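The snippet itself isn’t reproduced in this excerpt, so here’s a rough sketch of the idea (not React’s internal implementation): a shallow-comparison helper that a shouldComponentUpdate() body could lean on to skip renders when nothing changed.

```javascript
// A shallow-comparison helper (a sketch, not React's own).
function shallowEqual(a, b) {
  const keysA = Object.keys(a);
  if (keysA.length !== Object.keys(b).length) return false;
  return keysA.every((key) => a[key] === b[key]);
}

// Inside a component, shouldComponentUpdate(nextProps, nextState) could
// return true only when something actually changed:
//   return !shallowEqual(this.props, nextProps) || !shallowEqual(this.state, nextState);

console.log(shallowEqual({ count: 1 }, { count: 1 })); // true
console.log(shallowEqual({ count: 1 }, { count: 2 })); // false
```

This is essentially what React.PureComponent does for you automatically.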
When false is returned, the component does not update and the render() method is skipped, so nothing new is committed to the DOM.
getSnapshotBeforeUpdate()
One thing we can do is capture the state of a component at a moment in time, and that’s what getSnapshotBeforeUpdate() is designed to do. It’s called after render() but before any new changes are committed to the DOM. The returned value gets passed as a third parameter to componentDidUpdate().
It takes the previous state and props as parameters:
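The code for it isn’t shown in this excerpt, so here’s a plain-JS simulation of the handoff (ChatList and the values are made up for illustration): whatever getSnapshotBeforeUpdate() returns arrives as the third argument to componentDidUpdate().

```javascript
// Simulating the snapshot handoff outside of React. In a real component,
// React calls these methods for you in this same order around a DOM commit.
class ChatList {
  constructor() {
    this.messages = ["hi"];
    this.lastSnapshot = null;
  }
  getSnapshotBeforeUpdate(prevProps, prevState) {
    // e.g. capture a scroll offset or list length before the DOM changes
    return this.messages.length;
  }
  componentDidUpdate(prevProps, prevState, snapshot) {
    // the returned value shows up here as the third parameter
    this.lastSnapshot = snapshot;
  }
}

const list = new ChatList();
const snapshot = list.getSnapshotBeforeUpdate({}, {});
list.messages.push("hello");
list.componentDidUpdate({}, {}, snapshot);
console.log(list.lastSnapshot); // 1
```

The classic real-world use is preserving scroll position in a chat-style list while new items are inserted.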
Use cases for this method are kinda few and far between, at least in my experience. It is one of those lifecycle methods you may not find yourself reaching for very often.
componentDidUpdate()
Add componentDidUpdate() to the list of methods where the name sort of says it all. If the component updates, then we can hook into it at that time using this method and pass it previous props and state of the component.
We’re pretty much looking at the inverse of the mounting phase here. As you might expect, unmounting occurs when a component is wiped out of the DOM and no longer available.
We only have one method in here: componentWillUnmount()
This gets called before a component is unmounted and destroyed. This is where we would want to carry out any necessary clean up after the component takes a hike, like removing event listeners that may have been added in componentDidMount(), or clearing subscriptions.
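As a tiny illustration of why that cleanup matters (plain JS, with made-up names): a component that starts an interval on mount must clear it on unmount, or the timer keeps firing after the component is gone.

```javascript
// Sketch: pair the setup done in componentDidMount with teardown in
// componentWillUnmount.
function mount() {
  const timerId = setInterval(() => console.log("tick"), 1000); // componentDidMount
  return function componentWillUnmount() {
    clearInterval(timerId); // without this, the interval leaks
  };
}

const unmount = mount();
unmount(); // interval cleared before any "tick" fires
```

The same pattern applies to removeEventListener() and unsubscribing from stores or sockets.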
Things can go wrong in a component, and that can leave us with errors. We’ve had error boundaries around for a while to help with this. An error boundary component makes use of a couple of methods to help us handle the errors we could encounter.
getDerivedStateFromError()
We use getDerivedStateFromError() to catch any errors thrown from a descendant component, which we then use to update the state of the component.
An ErrorBoundary component set up this way will display “Oops, something went wrong” when an error is thrown from a child component. We have a lot more info on this method in a wrap-up of the goodies that were released in React 16.6.0.
componentDidCatch()
While getDerivedStateFromError() is suited for updating the state of the component, side effects like error logging belong in componentDidCatch(), because it is called during the commit phase, when the DOM has been updated.
componentDidCatch(error, info) {
// Log error to service
}
Both getDerivedStateFromError() and componentDidCatch() can be used in the ErrorBoundary component:
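That component isn’t reproduced in this excerpt, so here’s a sketch of how the two methods fit together. In a real app the class extends React.Component and render() returns JSX; both are trimmed here so the error-handling logic can run on its own.

```javascript
// Sketch of an ErrorBoundary; `extends React.Component` and JSX are
// omitted so the error-handling logic is the focus.
class ErrorBoundary {
  constructor() {
    this.state = { hasError: false };
  }
  // Map a thrown error to a state update...
  static getDerivedStateFromError(error) {
    return { hasError: true };
  }
  // ...and perform side effects, like logging, here.
  componentDidCatch(error, info) {
    console.error("Caught:", error.message);
  }
  render() {
    return this.state.hasError ? "Oops, something went wrong" : this.props.children;
  }
}

console.log(ErrorBoundary.getDerivedStateFromError(new Error("boom"))); // { hasError: true }
```

You’d wrap it around any subtree you want to protect, e.g. `<ErrorBoundary><Widget /></ErrorBoundary>`.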
There’s something neat about knowing how a React component interacts with the DOM. It’s easy to think some “magic” happens and then something appears on a page. But the lifecycle of a React component shows that there’s order to the madness and it’s designed to give us a great deal of control to make things happen from the time the component hits the DOM to the time it goes away.
We covered a lot of ground in a relatively short amount of space, but hopefully this gives you a good idea of not only how React handles components, but what sort of capabilities we have at various stages of that handling. Feel free to leave any questions at all if anything we covered here is unclear and I’d be happy to help as best I can!
Let’s say you’re rocking a JAMstack-style site (no server-side languages in use), but you want to do something rather dynamic like send an email. Not a problem! That’s the whole point of JAMstack. It’s not just static hosting. It’s that plus doing anything else you wanna do through JavaScript and APIs.
Here’s the setup: You need a service to help you send the email. Let’s just pick Sparkpost out of a hat. There are a number of them, and I’ll leave comparing their features and pricing to you, as we’re doing something extremely basic and low-volume here. To send an email with Sparkpost, you hit their API with your API key, provide information about the email you want to send, and Sparkpost sends it.
So, you’ll need to run a little server-side code to protect your API key during the API request. Where can you run that code? A Lambda is perfect for that (aka a serverless function or cloud function). There are lots of services to help you run these, but none are easier than Netlify, where you might be hosting your site anyway.
Get Sparkpost ready
I signed up for Sparkpost and made sure my account was all set up and verified. The dashboard there will give you an API key:
Toss that API Key into Netlify
Part of protecting our API key is making sure it’s only used in server-side code, but also that we keep it out of our Git repository. Netlify has environment variables that expose it to functions as needed, so we’ll plop it there:
Let’s spin up Netlify Dev, as that’ll make this easy to work with
Netlify Dev is a magical little tool that does stuff like run our static site generator for us. For the site I’m working on, I use Eleventy and Netlify Dev auto-detects and auto-runs it, which is super neat. But more importantly, for us, it gives us a local URL that runs our functions for testing.
Once it’s all installed, running it should look like this:
In the terminal screenshot above, it shows the website itself being spun up at localhost:8080, but it also says:
◈ Lambda server is listening on 59629
That’ll be very useful in a moment when we’re writing and testing our new function — which, by the way, we can scaffold out if we’d like. For example:
netlify functions:create --name hello-world
From there, it will ask some questions and then make a function. Pretty useful to get started quickly. We’ll cover writing that function in a moment, but first, let’s use this…
Sparkpost has their own Node lib
Sparkpost has an API, of course, for sending these emails. We could look at those docs and learn how to hit their URL endpoints with the correct data.
But things get even easier with their Node.js bindings. Let’s get this set up by creating all the folders and files we’ll need:
/project
... your entire website or whatever ...
/functions/
/send-email/
package.json
send-email.js
All we need the package.json file for is to yank in the Sparkpost library, so npm install sparkpost --save-dev will do the trick there.
Then the send-email.js imports that lib and uses it:
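Since the file’s contents aren’t reproduced here, this is a sketch of its shape. The SparkPost client is injected as a parameter so the handler logic stands on its own; in the real file you’d construct it from the sparkpost package, and the addresses and copy below are made up.

```javascript
// Hedged sketch of send-email.js: a Netlify function handler. The `client`
// argument stands in for `new SparkPost(process.env.SPARKPOST_API_KEY)`.
function makeHandler(client) {
  return async function handler(event) {
    try {
      await client.transmissions.send({
        content: {
          from: "test@example.com",                    // assumed sender
          subject: "Hello from a Netlify function",    // assumed subject
          html: "<p>Hi there!</p>",
        },
        recipients: [{ address: "you@example.com" }],  // assumed recipient
      });
      return { statusCode: 200, body: "Email sent!" };
    } catch (error) {
      return { statusCode: 500, body: String(error) };
    }
  };
}

// In the real function file, roughly:
// const SparkPost = require("sparkpost");
// exports.handler = makeHandler(new SparkPost(process.env.SPARKPOST_API_KEY));
```

Returning a statusCode/body object is the shape Netlify expects from a function handler.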
You'll want to look at their docs for error handling and whatnot. Again, we've just chosen Sparkpost out of a hat here. Any email sending service will have an API and helper code for popular languages.
Notice that the API key isn't hard-coded anywhere! Netlify Dev is so darn fancy that it will connect to Netlify and let us use the environment variable from there.
Test the function
When Netlify Dev is running, our Lambda functions are served on that special port, so we can hit a URL like this to run the function: http://localhost:59629/.netlify/functions/send-email
This function is set up to run when it's hit, so we could simply visit that in a browser to run it.
Testing
Maybe you'll POST to this URL. Maybe you'll send the body of the email. Maybe you'll send the recipient's email address. It would be nice to have a testing environment for all of this.
Well, we can console.log() stuff and see it in the terminal, so that's always handy. Plus we can write our functions to return whatever, and we could look at those responses in some kind of API testing tool, like Postman or Insomnia.
Markdown is a lightweight text markup language that allows the marked text to be converted to various formats. The original goal of creating Markdown was to enable people “to write using an easy-to-read and easy-to-write plain text format and to optionally convert it to structurally valid XHTML (or HTML)”. Currently, with WordPress supporting Markdown, the format has become even more widely used.
The purpose of this article is to show you how to use Node.js and the Express framework to create an API endpoint. We will learn this by building an application that converts Markdown syntax to HTML. We will also add an authentication mechanism to the API to prevent misuse of our application.
A Markdown Node.js Application
Our teeny-tiny application, which we will call ‘Markdown Convertor’, will enable us to post Markdown-styled text and retrieve an HTML version. The application will be created using the Node.js Express framework, and support authentication for conversion requests.
We will build the application in small stages — initially creating a scaffold using Express and then adding various features like authentication as we go along. So let us start with the initial stage of building the application by creating a scaffold.
Stage 1: Installing Express
Assuming you’ve already installed Node.js on your system, create a directory to hold your application (let’s call it “markdown-api”), and switch to that directory:
$ mkdir markdown-api
$ cd markdown-api
Use the npm init command to create a package.json file for your application. This command prompts you for a number of things like the name and version of your application.
For now, simply hit Enter to accept the defaults for most of them. I’ve used the default entry point file as index.js, but you could try app.js or some other name, depending on your preferences.
Now install Express in the markdown-api directory and save it in the dependencies list:
$ npm install express --save
Create an index.js file in the current directory (markdown-api) and add a minimal Express “Hello World!” app (a single app.get('/') route that sends the greeting, plus an app.listen(3000) call) to test if the Express framework is properly installed.
Now browse to the URL http://localhost:3000 to check whether the test file is working properly. If everything is in order, we will see a ‘Hello World!’ greeting in the browser and we can proceed to build a base API to convert Markdown to HTML.
Stage 2: Building A Base API
The primary purpose of our API will be to convert text in a Markdown syntax to HTML. The API will have two endpoints:
/login
/convert
The login endpoint will allow the application to authenticate valid requests while the convert endpoint will convert (obviously) Markdown to HTML.
Below is the base API code to call the two endpoints. The login call just returns an “Authenticated” string, while the convert call returns whatever Markdown content you submitted to the application. The home method just returns a ‘Hello World!’ string.
We use the body-parser middleware to make it easy to parse incoming requests to the application. The middleware makes the parsed incoming request available to you under the req.body property. You can do without the additional middleware, but adding it makes it far easier to handle various incoming request parameters.
You can install body-parser by simply using npm:
$ npm install body-parser
Now that we have our dummy stub functions in place, we will use Postman to test them. Let’s first begin with a brief overview of Postman.
Postman Overview
Postman is an API development tool that makes it easy to build, modify and test API endpoints from within a browser or by downloading a desktop application (the browser version is now deprecated). It can make various types of HTTP requests, e.g. GET, POST, PUT, and PATCH. It is available for Windows, macOS, and Linux.
Here’s a taste of Postman’s interface:
To query an API endpoint, you’ll need to do the following steps:
Enter the URL that you want to query in the URL bar in the top section;
Select the HTTP method on the left of the URL bar to send the request;
Click on the ‘Send’ button.
Postman will then send the request to the application, retrieve any response, and display it in the lower window. This is the basic mechanism of using the Postman tool. In our application, we will also have to add other parameters to the request, which will be described in the following sections.
Using Postman
Now that we have seen an overview of Postman, let’s move forward on using it for our application.
Start your markdown-api application from the command-line:
$ node index.js
To test the base API code, we make API calls to the application from Postman. Note that we use the POST method to pass the text to convert to the application.
The application at present accepts the Markdown content to convert via the content POST parameter, which we pass in URL-encoded format. The application currently returns the string verbatim in JSON format — with the first field always returning the string markdown and the second field returning the submitted text. Later, when we add the Markdown processing code, the second field will return the converted text.
Stage 3: Adding Markdown Convertor
With the application scaffold now built, we can look into the Showdown JavaScript library, which we will use to convert Markdown to HTML. Showdown is a bidirectional Markdown-to-HTML converter written in JavaScript that allows you to convert Markdown to HTML and back.
Install the package using npm:
$ npm install showdown
After adding the required showdown code to the scaffold, we get the following result:
const express = require("express");
const bodyParser = require('body-parser');
const showdown = require('showdown');
var app = express();
app.use(bodyParser.urlencoded({ extended: true }));
app.use(bodyParser.json());
const converter = new showdown.Converter();
app.get('/', function(req, res){
res.send('Hello World!');
});
app.post('/login', function(req, res) {
res.send("Authenticated");
},
);
app.post("/convert", function(req, res, next) {
if(typeof req.body.content == 'undefined' || req.body.content == null) {
res.json(["error", "No data found"]);
} else {
const text = req.body.content;
const html = converter.makeHtml(text);
res.json(["markdown", html]);
}
});
app.listen(3000, function() {
console.log("Server running on port 3000");
});
The main converter code is in the /convert endpoint as extracted and shown below. This will convert whatever Markdown text you post to an HTML version and return it as a JSON document.
...
} else {
const text = req.body.content;
const html = converter.makeHtml(text);
res.json(["markdown", html]);
}
The method that does the conversion is converter.makeHtml(text). We can set various options for the Markdown conversion using the setOption method with the following format:
converter.setOption('optionKey', 'value');
So, for example, we can set an option to automatically insert and link a specified URL without any markup.
As in the Postman example, if we pass a simple string (such as Google home http://www.google.com/) to the application, it will return the following string if simplifiedAutoLink is enabled:
<p>Google home <a href="http://www.google.com/">http://www.google.com/</a></p>
Without the option, we will have to add markup information to achieve the same results:
Google home <http://www.google.com/>
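To make the option concrete, this is roughly the transformation simplifiedAutoLink performs (a toy regex sketch, not Showdown’s actual implementation):

```javascript
// A toy version of the autolinking step: wrap bare URLs in anchor tags.
function autoLink(text) {
  return text.replace(/(https?:\/\/[^\s<]+)/g, '<a href="$1">$1</a>');
}

console.log(autoLink("Google home http://www.google.com/"));
// Google home <a href="http://www.google.com/">http://www.google.com/</a>
```

Showdown additionally wraps the result in paragraph tags as part of its normal block processing.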
There are many options to modify how the Markdown is processed. A complete list can be found in the Showdown documentation.
So now we have a working Markdown-to-HTML converter with a single endpoint. Let us move further and add authentication to the application.
Stage 4: Adding API Authentication Using Passport
Exposing your application API to the outside world without proper authentication will encourage users to query your API endpoint with no restrictions. This will invite unscrupulous elements to misuse your API and also will burden your server with unmoderated requests. To mitigate this, we have to add a proper authentication mechanism.
We will be using the Passport package to add authentication to our application. Just like the body-parser middleware we encountered earlier, Passport is an authentication middleware for Node.js. The reason we will be using Passport is that it offers a variety of authentication mechanisms to work with (username and password, Facebook, Twitter, and so on), which gives the user the flexibility of choosing a particular mechanism. Passport middleware can be easily dropped into any Express application without changing much code.
Install the package using npm.
$ npm install passport
We will also be using the local strategy, which will be explained later, for authentication. So install it, too.
$ npm install passport-local
You will also need to add the JWT (JSON Web Token) encode and decode module for Node.js, which is used by Passport:
$ npm install jwt-simple
Strategies In Passport
Passport uses the concept of strategies to authenticate requests. Strategies are the various methods that let you authenticate requests, ranging from the simple case of verifying username and password credentials, to authentication using OAuth (Facebook or Twitter), or OpenID. Before authenticating requests, the strategy used by an application must be configured.
In our application, we will use a simple username and password authentication scheme, as it is simple to understand and code. Currently, Passport supports more than 300 strategies which can be found here.
Although the design of Passport may seem complicated, the implementation in code is very simple. Here is an example that shows how our /convert endpoint is decorated for authentication. As you will see, adding authentication to a method is simple enough.
app.post("/convert",
  passport.authenticate('local', { session: false, failWithError: true }),
  function(req, res, next) {
    // If this function gets called, authentication was successful.
    // Also check that content was actually sent.
    if (typeof req.body.content === 'undefined' || req.body.content === null) {
      res.json(["error", "No data found"]);
    } else {
      const text = req.body.content;
      const html = converter.makeHtml(text);
      res.json(["markdown", html]);
    }
  },
  // Return an 'Unauthorized' message if authentication failed.
  function(err, req, res, next) {
    return res.status(401).send({ success: false, message: err });
  });
Now, along with the Markdown string to be converted, we also have to send a username and password, which are checked against the application's stored credentials. As we are using the local strategy for authentication, the credentials are stored in the code itself.
Although this may sound like a security nightmare, it is good enough for a demo application and keeps the authentication process easy to follow. A common alternative is to store credentials in environment variables; not everyone agrees with that approach either, but I find it relatively secure.
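As a sketch of the environment-variable approach: the variable names `ADMIN_USER` and `ADMIN_PASSWORD` below are our own choice, with hardcoded fallbacks so the demo still runs when the variables are unset.

```javascript
// Read credentials from the environment, falling back to the demo values.
const ADMIN = process.env.ADMIN_USER || 'admin';
const ADMIN_PASSWORD = process.env.ADMIN_PASSWORD || 'smagazine';

// The server would then be started with, for example:
//   ADMIN_USER=alice ADMIN_PASSWORD=s3cret node app.js
```

This keeps secrets out of source control while leaving the rest of the strategy code unchanged.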
The complete example with authentication is shown below.
const express = require("express");
const showdown = require('showdown');
const bodyParser = require('body-parser');
const passport = require('passport');
const jwt = require('jwt-simple');
const LocalStrategy = require('passport-local').Strategy;

const app = express();
app.use(bodyParser.urlencoded({ extended: true }));
app.use(bodyParser.json());

const converter = new showdown.Converter();

const ADMIN = 'admin';
const ADMIN_PASSWORD = 'smagazine';
const SECRET = 'secret#4456';

passport.use(new LocalStrategy(function(username, password, done) {
  if (username === ADMIN && password === ADMIN_PASSWORD) {
    done(null, jwt.encode({ username }, SECRET));
    return;
  }
  done(null, false);
}));

app.get('/', function(req, res) {
  res.send('Hello World!');
});

app.post('/login',
  passport.authenticate('local', { session: false }),
  function(req, res) {
    // If this function gets called, authentication was successful.
    // Send an 'Authenticated' string back.
    res.send("Authenticated");
  });

app.post("/convert",
  passport.authenticate('local', { session: false, failWithError: true }),
  function(req, res, next) {
    // If this function gets called, authentication was successful.
    // Also check that content was actually sent.
    if (typeof req.body.content === 'undefined' || req.body.content === null) {
      res.json(["error", "No data found"]);
    } else {
      const text = req.body.content;
      const html = converter.makeHtml(text);
      res.json(["markdown", html]);
    }
  },
  // Return an 'Unauthorized' message if authentication failed.
  function(err, req, res, next) {
    return res.status(401).send({ success: false, message: err });
  });

app.listen(3000, function() {
  console.log("Server running on port 3000");
});
A Postman session that shows conversion with authentication added is shown below.
Here we can see that we have received a proper HTML string converted from Markdown syntax. Although we have only requested the conversion of a single line of Markdown, the API can convert larger amounts of text.
This concludes our brief foray into building an API endpoint using Node.js and Express. API building is a complex topic and there are finer nuances that you should be aware of while building one, which sadly we have no time for here but will perhaps cover in future articles.
Accessing Our API From Another Application
Now that we have built an API, we can write a small Node.js script that shows how the API can be accessed. For our example, we will need to install the request npm package, which provides a simple way to make HTTP requests. (You will most probably have it installed already.)
$ npm install request --save
The example code to send a request to our API and get the response is given below. As you can see, the request package simplifies matters considerably. The Markdown to be converted is in the textToConvert variable.
Before running the following script, make sure that the API application we created earlier is already running. Run the following script in another command window.
Note: We are using the back-tick (`) character to span multiple JavaScript lines for the textToConvert variable. This is not a single quote.
var Request = require("request");

// Start of Markdown
var textToConvert = `Heading
=======

## Sub-heading

Paragraphs are separated
by a blank line.

Two spaces at the end of a line  
produces a line break.

Text attributes _italic_,
**bold**, 'monospace'.
A [link](http://example.com).
Horizontal rule:`;
// End of Markdown
Request.post({
  "headers": { "content-type": "application/json" },
  "url": "http://localhost:3000/convert",
  "body": JSON.stringify({
    "content": textToConvert,
    "username": "admin",
    "password": "smagazine"
  })
}, function(error, response, body) {
  // If we got any connection error, bail out.
  if (error) {
    return console.log(error);
  }
  // Otherwise, display the converted text.
  console.dir(JSON.parse(body));
});
When we make a POST request to our API, we provide the Markdown text to be converted along with the credentials. If we provide the wrong credentials, we will be greeted with an error message.
For a correctly authorized request, the above sample Markdown will be converted to the following:
[ 'markdown',
`<h1 id="heading">Heading</h1>
<h2 id="subheading">Sub-heading</h2>
<p>Paragraphs are separated by a blank line.</p>
<p>Two spaces at the end of a line<br />
produces a line break.</p>
<p>Text attributes <em>italic</em>,
<strong>bold</strong>, 'monospace'.
A <a href="http://example.com">link</a>.
Horizontal rule:</p>` ]
Although we have hardcoded the Markdown here, the text can come from various other sources — file, web forms, and so on. The request process remains the same.
Note that as we are sending the request with an application/json content type, we need to encode the body as JSON, hence the JSON.stringify function call. As you can see, it takes only a very small example to test our API application.
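As a quick sketch of that round trip (the payload values below are our own example): the request library sends the body verbatim, so the object must be serialized to a string on the way out, and the server's body-parser json() middleware reverses the step on the way in.

```javascript
// An object cannot be sent as-is when we set the content type ourselves,
// so it is serialized into a JSON string first...
const payload = { content: '# Hi', username: 'admin', password: 'smagazine' };
const body = JSON.stringify(payload);

// ...and on the server, body-parser's json() middleware parses it back
// into an object, which is how req.body.content becomes available.
const parsed = JSON.parse(body);
```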
Conclusion
In this article, we embarked on a tutorial with the goal of learning how to use Node.js and the Express framework to build an API endpoint. Rather than building a dummy application with no purpose, we created an API that converts Markdown syntax to HTML, which anchors our learning in a useful context. Along the way, we added authentication to our API endpoint, and we also saw ways to test it using Postman.