Every week users submit a lot of interesting stuff on our sister site Webdesigner News, highlighting great content from around the web that can be of interest to web designers.
The best way to keep track of all the great stories and news being posted is simply to check out the Webdesigner News site. However, in case you missed some, here’s a quick and useful compilation of the most popular designer news that we curated from the past week.
Note that this is only a very small selection of the links that were posted, so don’t miss out and subscribe to our newsletter and follow the site daily for all the news.
Apple is Designing for a Post-Facebook World
21 Tips to Become Better in JavaScript, Much Much Better!
I’ve Redesigned my Website and it Looks Exactly the Same
Firefox Monitor
Has Firefox Accidentally Revealed its New Logo?
CSS Grid Generator
Uibot: Infinite UI Designs
Apple Introduces New York Font, an All-new Serif
Mercury OS Concept
Samsung Announces its Answer to the MacBook Pro
24 Years of Amazon Website Design History
All Free Patterns
Free Static HTML Website Templates, 2019 Updated
The 4px Baseline Grid? – The Present
Webframe – 800+ Screenshots of Beautiful Designs and UX Patterns from the Internet’s Top Web Apps
Spero – Another Trendy Looking Free Bootstrap Template
All in One Bookmark Links for Designer
How to Become a UX Designer by Learning UX Design on your own
2019 Logo Trend Report
Does the Perfect Portfolio Exist? Top Creatives and Studios Offer their Advice
9 Timeless Ways to Increase Conversions
Apple Now a Privacy-as-a-service Company
Dungeons & Dragons & Design Thinking
Giphy Launches New Range of Animated Emojis Because that’s Life in 2019
Double Down on your Strengths
Want more? No problem! Keep track of top design news from around the web with Webdesigner News.
I’d like to tell you how I see code and design intersect and support one another. Specifically, I want to cover how designers can use code in their everyday work. I suggest this not because it’s a required skill, but because even a baseline understanding of coding can make designs better and the hand-off from design to development smoother.
As a UX Designer, I am always looking for good ways to both explore my UX design problems and communicate the final designs to others. Over the past 30 years, my work has always involved working alongside developers, but generally there has been a great divide between what I do and what developers do.
I can code at a basic level. For example, I’ve helped teach C to undergraduates back when I was a post-graduate student. I’ve worked on the usability of JDeveloper, Oracle’s Integrated Development Environment (IDE) for Java. I also worked for a very short while on simplifying the UX of a WordPress content management environment to make it accessible to less technical users. This required a good understanding of HTML and CSS. I also volunteered on the design of the PHP website and had to develop some understanding of the server side of web development.
But even given these experiences, I am not a developer in any true sense of the word. If I happen to be looking at code, it’s in a “just in time” learning model — I look up what I need and then hack it until it works. Learning this way has often been frowned upon, a bit like learning to drive without lessons. You pick up bad development habits but maybe that’s OK for the work I do.
So, no, I don’t develop or write code. My day-to-day work is mostly spent drawing, talking and gathering requirements. As far as design goes, I’ll start by sketching concepts in a notebook or using Balsamiq. Then I draw up UX wireframes and prototypes using tools like Axure, Adobe XD, InVision Studio, Figma and Sketch. By the time I’m ready to hand off my deliverables to development, all the visual assets and documentation have been defined and communicated. But I don’t step over the line into code development. That is just not my area of expertise.
So, why should designers know code?
We’ve already established that I’m no developer, but I have recently become an advocate for designers getting a good feel for how design and code interact.
In fact, I’d call it “playing with code.” I am definitely not suggesting that UX designers become developers, but at the very least, I think designers would benefit by becoming comfortable with a basic understanding of what is currently possible with CSS and best practices in HTML.
Being experimental is a huge part of doing design. Code is just another medium with which we can experiment and build beautiful solutions. So, we’re going to look at a couple of ways designers can experiment with code, even with a light understanding of it. What we’re covering here may be obvious to developers, but there are plenty of designers out there who have never experimented with code and will be seeing these for the first time.
So, it’s for them (and maybe a refresher for you) that we look at the following browser tools.
DevTools: The ultimate code playground
One of the concerns a UX designer might have is knowing how a design holds up once it’s in the browser. Are the colors accurate? Are fonts legible throughout? How do the elements respond on various devices? Will my grey hover state work with the white/grey zebra striping on my application grids in practice? These are some of the styling and interaction issues designers are thinking about when we hand our work off for development.
This is where DevTools can be a designer’s best friend. Every browser has its own version of it. You may have already played with such tools before. It’s that little “Inspect” option when right-clicking on the screen.
What makes DevTools such a wonderful resource is that it provides a way to manipulate the code of a live website or web application without having to set up a development environment. And it’s disposable. Any edits you make are for your eyes only and are washed away the very moment the browser refreshes.
Further, DevTools can mimic other devices.
And, if you haven’t seen it yet, Firefox released a wonderful new shape path editor that’s very valuable for exploring interesting designs.
Over the past few months, I have been working on a complex web client for an enterprise-level application. Last sprint, my UX design story was to explore the look of the entry page of the web application and how to introduce a new color scheme. It was hard to envision how the changes I was making were going to impact the tool as a whole as some of the components I was changing are used throughout the product.
One day, when discussing a design decision, one of the developers tested out my suggested change to a component using the latest DevTools in his browser. I was amazed by how much DevTools has grown and expanded since I last remember it. I could immediately see the impact of that change across our whole web application and how different design decisions interacted with the existing design. I started to experiment with DevTools myself and was able to preview and experiment with how my own simple CSS changes to components would impact the whole web application. Super helpful!
However, it was also a little frustrating to not be able store my experiments and changes for future reference. So, I went exploring to see what else was out there to support my design process.
Chrome browser extensions
DevTools is amazing right out of the box, but experimenting with code gets even more awesome when browser extensions are added to the mix. Chrome, in particular, has a couple that I really like.
Chrome Extension 1: User CSS
User CSS is a Chrome browser extension that allows you to save the changes you make in DevTools in an editable CSS code tab. These CSS changes are automatically executed on that page if User CSS is enabled. This means that you can set up CSS overrides for any page on the web, view them later, and share them with others. This can be an incredible tool when, say, doing a design review of a staging site prior to release, or really any design exploration for a web application or website that is viewable in a browser.
User CSS has a nice built-in code editor, so my code is always well formatted and includes syntax highlighting so I don’t have to worry about that sort of thing. I particularly like the fact that overrides are executed immediately so you can see changes on the fly. It also has a useful switch that allows you to turn your overrides on and off. This makes it very simple to demonstrate a set of changes to a team. This extension has allowed me to easily present a comparison between an existing page design and proposed changes. I’ve even used it to make a simple video demonstrating the proposed design changes.
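To give a sense of scale, the overrides I save in User CSS are often just a few declarations. Here’s a hypothetical example (the selectors and colors are made up for illustration):

```css
/* Hypothetical override: preview a proposed accent color on the page header */
.site-header {
  background-color: #1a535c;
}

/* Check the proposed grey hover state against the grid's zebra striping */
.data-grid tr:hover {
  background-color: #ececec;
}
```

With the extension’s on/off switch, a handful of lines like these is enough to flip between the current design and the proposal during a review.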
In this video I make some simple edits to my web page and then show how I can turn on and off the edits by simply clicking the on/off button on User CSS:
This extension is perfect if all you need to do is edit CSS, particularly if you have some very simple design changes to make and want those changes to persist. However, as the complexity of a design increases, I have found myself wanting to save more than one snippet of code at a time. Copying and pasting code in and out of the User CSS editor was becoming a pain. I wanted something with more flexibility and freedom to experiment. I also wanted to be able to look at simple changes to the HTML of my web application and even play with a bit of JavaScript.
That’s where the next extension comes into play.
Chrome Extension 2: Web Overrides
The second Chrome extension I found is called Web Override and it provides a way to override HTML, CSS and JavaScript. All of them! This is presented as three tabs, much the same way CodePen does, which makes it a very powerful tool for creating rough working design prototypes.
Web Overrides also allows you to save multiple files so that you can switch different parts of a design on or off in different combinations. It also quickly switches between the different options to show off different design concepts.
This video shows how I added an HTML element into a page and edited the new element with some basic CSS:
Using the HTML tab makes it possible to edit any element on the page, like swap out a logo, remove unnecessary elements, or add new ones. The JavaScript tab is similar in that I can do simple changes, like inject additional text into the website title so that I can test how dynamic changes might affect the layout — this can be useful for testing different scenarios, such as differences with internationalization.
These edits may be trivial from a coding perspective, but they allow me to explore hundreds of alternative designs in a much shorter time and with a lot less risk than scooting pixels around in a design application. I literally could not explore as many ideas as quickly using my traditional UX prototyping tools as I can with this one extension.
And, what is more, both my team and I have confidence in the design deliverables because we tested them early on in the browser. Even the most pixel-perfect Photoshop file can get lost in translation when the design moves to the browser, because it’s really just a snapshot of a design in a static state. Testing designs first in the browser using these extensions proves that what we have designed is possible.
On the flip side of this, you might want to check out how Jon Kantner used similar browser extensions to disable CSS as a means to audit the semantic markup of various sites. It’s not exactly design-related, but interesting to see how these tools can have different use cases.
What I’ve learned so far
I am excited about what I have learned since leaning into DevTools and browser extensions. I believe my designs are so much better as a result. I also find myself able to have more productive conversations with developers because we now have a way to communicate well. The common ground between design and code in rapid prototypes really helps facilitate discussion. And, because I am playing around with actual code, I have a much better sense about how the underlying code will eventually be written and can empathize a lot more with the work developers do — and perhaps how I can make their jobs easier in the process.
It has also created a culture of collaborative rapid prototyping on my team which is a whole other story.
Playing with code has opened up new ideas and encouraged me to adapt my work to the context of the web. It’s been said that getting into the browser earlier in the design process is ideal and these are the types of tools that allow me (and you) to do just that!
Do you have other tools or processes that you use to facilitate the collaboration between design and code? Please share them in the comments!
Šime posts regular content for web developers on webplatform.news.
New Feature Policy API in Chrome
Pete LePage: You can use the document.featurePolicy.allowedFeatures method in Chrome to get a list of all Feature Policy-controlled features that are allowed on the current page.
This API can be useful when implementing a feature policy (and updating an existing feature policy) on your website.
Open your site in Chrome and run the API in the JavaScript console to check which Feature Policy-controlled features are allowed on your site.
Read about individual features on featurepolicy.info and decide which features should be disabled ('none' value), and which features should be disabled only in cross-origin elements ('self' value).
Add the Feature-Policy header to your site’s HTTP responses (policies are separated by semicolons).
Repeat Step 1 to confirm that your new feature policy is in effect. You can also scan your site on securityheaders.com.
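To make those steps concrete: running `document.featurePolicy.allowedFeatures()` in the console returns an array of feature names, and the header added in the third step is a single line of semicolon-separated policies. A hypothetical header might look like this (your own feature list and values will differ):

```
Feature-Policy: geolocation 'none'; camera 'none'; fullscreen 'self'
```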
In other news…
Dave Camp: Firefox now blocks cookies from known trackers by default (when the cookie is used in a third-party context). This change is currently in effect only for new Firefox users; existing users will be automatically updated to the new policy “in the coming months.”
Pete LePage: Chrome for Android now allows websites to share images (and other file types) via the navigator.share method. See Web Platform News Issue 1014 for more information about the Web Share API. Ayooluwa Isaiah’s post from yesterday is also a good reference on how to use it.
Valerie Young: The ECMAScript Internationalization APIs for date and time formatting (Intl.DateTimeFormat constructor), and number formatting (Intl.NumberFormat constructor) are widely supported in browsers.
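Because support is wide, these constructors can be used directly, with no library or polyfill. A quick sketch of both (the locales and options here are chosen arbitrarily):

```javascript
// Locale-aware date formatting with Intl.DateTimeFormat
const date = new Date(Date.UTC(2019, 5, 7)); // months are zero-based: June
const enDate = new Intl.DateTimeFormat('en-US', {
  year: 'numeric', month: 'long', day: 'numeric', timeZone: 'UTC'
}).format(date);
console.log(enDate); // "June 7, 2019"

// Locale-aware number formatting with Intl.NumberFormat
const deNumber = new Intl.NumberFormat('de-DE', {
  style: 'currency', currency: 'EUR'
}).format(1234.5);
console.log(deNumber); // e.g. "1.234,50 €"
```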
Alan Jeffrey: Patrick Walton from Mozilla is working on a vector graphics renderer that can render text smoothly at all angles when viewed with an Augmented Reality (AR) headset. We plan to use it in our browsers for AR headsets (Firefox Reality).
Pinterest Engineering: Our progressive web app is now available as a standalone desktop application on Windows 10. It can be installed via the Microsoft Store, which “treats the packaged PWA as a first class citizen with access to Windows 10 feature APIs.”
Jonathan Davis: The flow-root value for the CSS display property has landed in Safari Technology Preview. This value is already supported in Chrome and Firefox. See Web Platform News Issue 871 for a use case.
Ire Aderinokun writes about a new way to set a performance budget (and stick to it) with Lighthouse, Google’s suite of tools that help developers see how performant and accessible their websites are:
Until recently, I also hadn’t set up an official performance budget and enforced it. This isn’t to say that I never did performance audits. I frequently use tools like PageSpeed Insights and take the feedback to make improvements. But what I had never done was set a list of metrics that I needed the site to meet, and enforce them using some automated tool.
The reasons for this were a combination of not knowing what exact numbers I should use for budgets as well as there being a disconnect between setting a budget and testing/enforcing it. This is why I was really excited when, at Google I/O this year, the Lighthouse team announced support for performance budgets that can be integrated with Lighthouse. We can now define a simple performance budget in a JSON file, which will be tested as part of the lighthouse audit!
I completely agree with Ire, and much in the same way I’ve tended to neglect sticking to a performance budget simply because the process of testing was so manual and tedious. But no more! As Ire shows in this post, you can even set Lighthouse up to test your budget with every PR in GitHub. That tool is called lighthousebot and it’s just what I’ve been looking for – an automated and predictable way to integrate a performance budget into every change that I make to a codebase.
Today lighthousebot will comment on your PR after a test is complete and it will show you the before and after score:
How neat is that? This reminds me of Gareth Clubb’s recent post about improving web performance and building a culture around budgets in an organization. What better way to remind everyone about performance than right in GitHub after each and every change that they make?
Drawing is one of the most common and fun pastimes. Nowadays, with the release of new and powerful Android tablets and iPads, you don’t even have to break out your drawing pad and favorite drawing utensils.
There are dozens of drawing apps that allow you to draw digitally and hone your artistic skills. The only problem is, which drawing app to pick? To help you answer that question, we’ve rounded up the best drawing apps for Android and iPad.
Procreate
Procreate is an exclusive iPad app that was built with professionals in mind and works flawlessly with the Apple Pencil. It has an unobtrusive UI and an easy-to-use color picker, as well as over 136 brushes.
A few exclusive tools include dual-texture brushes and incredibly responsive smudging tools. You can export your finished masterpiece as PSD, native .procreate, TIFF, transparent PNG, multi-page PDF, and JPEG file formats.
Affinity Designer
Affinity Designer is another well-known app for all of you who want to use iPad and Apple Pencil to create works of art and digital drawings. Affinity Designer supports both CMYK and RGB color formats and comes with a full-blown Pantone library in the swatch panel.
The finished drawing can be exported to a wide variety of formats such as JPG, PNG, PDF and SVG. You will also find more than 100 brushes in a variety of styles, including paints, pencils, inks, pastels, and gouaches.
Adobe Illustrator Draw
Be sure to check out Adobe Illustrator Draw if you use an Android tablet. Adobe Illustrator Draw has full layer support, much like its desktop counterpart. It also supports zoom up to 64x so you can easily see all the fine details.
The app comes with 5 different pen tips, all of which have various customization features. You can export your work to other devices and open it later in the desktop version of Illustrator or Photoshop. Adobe Illustrator Draw is free and it’s also available as an iPad app.
ArtRage
ArtRage is a full-fledged painting and drawing software that can be used not only on your Windows or Mac computer but also on your Android tablets and iOS devices. The app features a range of tools that will help you create a digital drawing or painting. You will find various canvas presets, brushes, paper options, pencils, crayons, rollers, and pastels.
ArtRage also comes with the ability to create custom brushes and powerful perspective layout tools.
ArtFlow
Voted as one of the Editor’s Choice apps on the Google Play Store, ArtFlow is a powerful drawing app for Android devices. ArtFlow comes with more than 80 brushes, layer features, and layer blending.
Once you’re done with your drawing, you can easily export it to a variety of formats, including JPEG, PNG, and PSD. What’s more, if you have an Nvidia device, you’ll have access to Nvidia’s DirectStylus support.
Ibis Paint
If you’re looking for an app that’s easy and fun to use for both kids and adults, look no further than Ibis Paint. With more than 140 brushes, numerous filters, various blending modes, clipping mask features, and the ability to record your drawing process, this app is definitely worth checking out.
On top of the above features, Ibis Paint offers a stroke stabilization feature and various ruler features.
LayerPaint HD
LayerPaint HD is another Android app with plenty of features for creating beautiful digital drawings. The most notable ones are pen pressure support, layer features, and the ability to export your work as a PSD so you can finish it off in Photoshop if you so desire.
In addition to that, LayerPaint HD comes with full support for keyboard shortcuts which makes your digital drawing process even easier.
Linea Sketch
Linea Sketch is an easy-to-use iPad drawing app that allows you to draw as well as take handwritten notes. The app is optimized for the Apple Pencil. What makes it stand out from the others is its ability to help you create perfect circles and other shapes, and to automatically recommend matching colors for the shades you’re already using.
Other features include layer support, split screen, and exporting your projects as PSD, JPG, or PNG files.
MediBang Paint
MediBang Paint is a free drawing app available for Android, iOS, and PC/Mac computers. The app makes it easy to create drawings and comic books and is packed with features such as brushes, fonts, premade backgrounds, and other resources.
What’s more, the app has a vibrant community so there are plenty of resources and tutorials online that will help you learn the ropes quickly.
Autodesk SketchBook
Autodesk SketchBook used to be a paid app but is now completely free to download and use on both Android and iOS devices. The app has a simple user interface and comes with dozens of various brushes, intuitive gestures, and includes the exclusive Copic® Color Library.
The app also lets you export your work as JPG, PNG, BMP, TIFF, and PSD. Lastly, Autodesk SketchBook has full support for layer features.
Haben Girma, disability rights advocate and Harvard Law’s first deafblind graduate, made the following statement in her keynote address at the AccessU digital accessibility conference last month:
“I define disability as an opportunity for innovation.”
She charmed and impressed the audience, telling us about learning sign language by touch, learning to surf, and about the keyboard-to-braille communication system that she used to take questions after her talk.
Contrast this with the perspective many of us take building apps: web accessibility is treated as an afterthought, a confusing collection of rules that the team might look into for version two. If that sounds familiar (and you’re a developer, designer or product manager), this article is for you.
I hope to shift your perspective closer to Haben Girma’s by showing how web accessibility fits into the broader areas of technology, disability, and design. We’ll see how designing for different sets of abilities leads to insight and innovation. I’ll also shed some light on how the history of browsers and HTML is intertwined with the history of assistive technology.
Assistive Technology
An accessible product is one that is usable by all, and assistive technology is a general term for devices or techniques that can aid access, typically when a disability would otherwise preclude it. For example, captions give deaf and hard of hearing people access to video, but things get more interesting when we ask what counts as a disability.
On the ‘social model’ definition of disability adopted by the World Health Organization, a disability is not an intrinsic property of an individual, but a mismatch between the individual’s abilities and environment. Whether something counts as a ‘disability’ or an ‘assistive technology’ doesn’t have a clear boundary; the distinction is contextual.
Addressing mismatches between ability and environment has led not only to technological innovations but also to new understandings of how humans perceive and interact with the world.
Access + Ability, a recent exhibit at the Cooper Hewitt Smithsonian design museum in New York, showcased some recent assistive technology prototypes and products. I’d come to the museum to see a large exhibit on designing for the senses, and ended up finding that this smaller exhibit offered even more insight into the senses by its focus on cross-sensory interfaces.
Seeing is done with the brain, and not with the eyes. This is the idea behind one of the items in the exhibit, Brainport, a device for those who are blind or have low vision. Your representation of your physical environment from sight is based on interpretations your brain makes from the inputs that your eyes receive.
What if your brain received the information your eyes typically receive through another sense? A camera attached to Brainport’s headset receives visual inputs which are translated into a pixel-like grid pattern of gentle shocks perceived as “bubbles” on the wearer’s tongue. Users report being able to “see” their surroundings in their mind’s eye.
Soundshirt also translates inputs typically perceived by one sense to inputs that can be perceived by another. This wearable tech is a shirt with varied sound sensors and subtle vibrations corresponding to different instruments in an orchestra, enabling a tactile enjoyment of a symphony. Also on display for interpreting sound was an empathetically designed hearing aid that looks like a piece of jewelry instead of a clunky medical device.
Designing for different sets of abilities often leads to innovations that turn out to be useful for people and settings beyond their intended usage. Curb cuts, the now familiar mini ramps on the corners of sidewalks useful to anyone wheeling anything down the sidewalk, originated from disability rights activism in the ’70s to make sidewalks wheelchair accessible. Pellegrino Turri invented an early typewriter in the early 1800s to help his blind friend write legibly, and the first commercially available typewriter, the Hansen Writing Ball, was created by the principal of Copenhagen’s Royal Institute for the Deaf-Mutes.
Vint Cerf cites his hearing loss as shaping his interest in networked electronic mail and the TCP/IP protocol he co-invented. Smartphone color contrast settings for color blind people are useful for anyone trying to read a screen in bright sunlight, and have even found an unexpected use in helping people to be less addicted to their phones.
So, designing for different sets of abilities gives us new insights into how we perceive and interact with the environment, and leads to innovations that make for a blurry boundary between assistive technology and technology generally.
With that in mind, let’s turn to the web.
Assistive Tech And The Web
The web was intended as accessible to all from the start. A quote you’ll run into a lot if you start reading about web accessibility is:
“The power of the Web is in its universality. Access by everyone regardless of disability is an essential aspect.”
— Tim Berners-Lee, W3C Director and inventor of the World Wide Web
What sort of assistive technologies are available to perceive and interact with the web? You may have heard of or used a screen reader that reads out what’s on the screen. There are also braille displays for web pages, and alternative input devices like an eye tracker I got to try out at the Access + Ability exhibit.
It’s fascinating to learn that web pages are displayed in braille; the web pages we create may be represented in 3D! Braille displays are usually made of pins that are raised and lowered as they “translate” each small part of the page, much like the device I saw Haben Girma use to read audience questions after her AccessU keynote. A newer company, Blitab (named for “blind” + “tablet”), is creating a braille Android tablet that uses a liquid to alter the texture of its screen.
People proficient with using audio screen readers get used to faster speech and can adjust playback to an impressive rate (as well as saving battery life by turning off the screen). This makes the screen reader seem like an equally useful alternative mode of interacting with web sites, and indeed many people take advantage of audio web capabilities to dictate or hear content. An interface intended for some becomes more broadly used.
Web accessibility is about more than screen readers. However, we’ll focus on them here because — as we’ll see — screen readers are central to the technical challenges of an accessible web.
Imagine you had to design a screen reader. If you’re like me before I learned more about assistive tech, you might start by imagining an audiobook version of a web page, thinking your task is to automate reading the words on the page. But look at this page. Notice how much you use visual cues from layout and design to tell you what its parts are for and how to interact with them.
How would your screen reader know when the text on this page belongs to clickable links or buttons?
How would the screen reader determine what order to read out the text on the page?
How could it let the user “skim” this page to determine the titles of the main sections of this article?
The earliest screen readers were as simple as the audiobook I first imagined, as they dealt with only text-based interfaces. These “talking terminals,” developed in the mid-’80s, translated ASCII characters in the terminal’s display buffer to an audio output. But graphical user interfaces (or GUI’s) soon became common. “Making the GUI Talk,” a 1991 BYTE magazine article, gives a glimpse into the state of screen readers at a moment when the new prevalence of screens with essentially visual content made screen readers a technical challenge, while the freshly passed Americans with Disabilities Act highlighted their necessity.
OutSpoken, discussed in the BYTE article, was one of the first commercially available screen readers for GUI’s. OutSpoken worked by intercepting operating system level graphics commands to build up an offscreen model, a database representation of what is in each part of the screen. It used heuristics to interpret graphics commands, for instance, to guess that a button is drawn or that an icon is associated with nearby text. As a user moves a mouse pointer around on the screen, the screen reader reads out information from the offscreen model about the part of the screen corresponding to the cursor’s location.
This early approach was difficult: intercepting low-level graphics commands is complex and operating system dependent, and relying on heuristics to interpret these commands is error-prone.
The Semantic Web And Accessibility APIs
A new approach to screen readers arose in the late ’90s, based on the idea of the semantic web. Berners-Lee wrote of his dream for a semantic web in his 1999 book Weaving the Web: The Original Design and Ultimate Destiny of the World Wide Web:
I have a dream for the Web [in which computers] become capable of analyzing all the data on the Web — the content, links, and transactions between people and computers. A “Semantic Web”, which makes this possible, has yet to emerge, but when it does, the day-to-day mechanisms of trade, bureaucracy, and our daily lives will be handled by machines talking to machines. The “intelligent agents” people have touted for ages will finally materialize.
Berners-Lee defined the semantic web as “a web of data that can be processed directly and indirectly by machines.” It’s debatable how much this dream has been realized, and many now think of it as unrealistic. However, we can see the way assistive technologies for the web work today as a part of this dream that did pan out.
The emergence of the World Wide Web has made it possible for individuals with appropriate computer and telecommunications equipment to interact as never before. It presents new challenges and new hopes to people with disabilities.
HTML4, developed in the late ’90s and released in 1998, emphasized separating document structure and meaning from presentational or stylistic concerns. This was based on semantic web principles, and partly motivated by improving support for accessibility. The HTML5 that we currently use builds on these ideas, and so supporting assistive technology is central to its design.
So, how exactly do browsers and HTML support screen readers today?
Many front-end developers are unaware that the browser parses the DOM into a separate data structure specifically for assistive technologies. This is a tree structure known as the accessibility tree that forms the API for screen readers, meaning that we no longer rely on intercepting the rendering process as the offscreen model approach did. HTML yields one representation that the browser can use both to render on a screen and to give to audio or braille devices.
Let’s look at the accessibility API in a little more detail to see how it handles the challenges we considered above. Nodes of the accessibility tree, called “accessible objects,” correspond to a subset of DOM nodes and have attributes including role (such as button), name (such as the text on the button), and state (such as focused) inferred from the HTML markup. Screen readers then use this representation of the page.
This is how a screen reader user can know an element is a button without making use of the visual style cues that a sighted user depends on. How could a screen reader user find relevant information on a page without having to read through all of it? In a recent survey, screen reader users reported that the most common way they locate the information they are looking for on a page is via the page’s headings. If an element is marked up with an h1–h6 tag, a node in the accessibility tree is created with the role heading. Screen readers have a “skip to next heading” functionality, thereby allowing a page to be skimmed.
Some HTML attributes are specifically for the accessibility tree. ARIA (Accessible Rich Internet Applications) attributes can be added to HTML tags to specify the corresponding node’s name or role. For instance, imagine our button above had an icon rather than text. Adding aria-label="sign up" to the button element would ensure that the button had a label for screen readers to represent to their users. Similarly, we can add alt attributes to image tags, thereby supplying a name to the corresponding accessible node and providing alternative text that lets screen reader users know what’s on the page.
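For instance, the two cases above might be marked up like this (a sketch; the file names and label text are illustrative):

```html
<!-- Icon-only button: aria-label gives the accessible object its name -->
<button aria-label="sign up">
  <img src="signup-icon.svg" alt="">
</button>

<!-- Informative image: the alt text becomes the accessible node's name -->
<img src="signups-chart.png" alt="Bar chart of monthly sign-ups">
```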
The downside of the semantic approach is that it requires developers to use HTML tags and aria attributes in a way that matches their code’s intent. This, in turn, requires awareness among developers, and prioritization of accessibility by their teams. Lack of awareness and prioritization, rather than any technical limitation, is currently the main barrier to an accessible web.
So the current approach to assistive tech for the web is based on semantic web principles and baked into the design of browsers and HTML. Developers and their teams have to be aware of the accessibility features built into HTML to be able to take advantage of them.
Machine Learning (ML) and Artificial Intelligence (AI) come to mind when we read Berners-Lee’s remarks about the dream of the semantic web today. When we think of computers being intelligent agents analyzing data, we might think of this as being done via machine learning approaches. The early offscreen model approach we looked at used heuristics to classify visual information. This also feels reminiscent of machine learning approaches, except that in machine learning, heuristics to classify inputs are based on an automated analysis of previously seen inputs rather than hand-coded.
What if in the early days of figuring out how to make the web accessible we had been thinking of using machine learning? Could such technologies be useful now?
Machine learning has been used in some recent assistive technologies. Microsoft’s SeeingAI and Google’s Lookout use machine learning to classify and narrate objects seen through a smartphone camera. CTRL Labs is working on a technology that detects micro-muscle movements interpreted with machine learning techniques. In this way, it seemingly reads your mind about movement intentions and could have applications for helping with some motor impairments. AI can also be used for character recognition to read out text, and even translate sign language to text. Recent Android advances using machine learning let users augment and amplify sounds around them, and to automatically live transcribe speech.
AI can also be used to help improve the data that makes its way to the accessibility tree. Facebook introduced automatically generated alternative text to provide user images with screen reader descriptions. The results are imperfect, but point in an interesting direction. Taking this one step further, Google recently announced that Chrome will soon be able to supply automatically generated alternative text for images that the browser serves up.
What’s Next
Until (or unless) machine learning approaches become more mature, an accessible web depends on the API based on the accessibility tree. This is a robust solution, but taking advantage of the assistive tech built into browsers requires people building sites to be aware of them. Lack of awareness, rather than any technical difficulty, is currently the main challenge for web accessibility.
Key Takeaways
Designing for different sets of abilities can give us new insights and lead to innovations that are broadly useful.
The web was intended to be accessible from the start, and the history of the web is intertwined with the history of assistive tech for the web.
Assistive tech for the web is baked into the current design of browsers and HTML.
Designing assistive tech, particularly involving AI, is continuing to provide new insights and lead to innovations.
The main current challenge for an accessible web is awareness among developers, designers, and product managers.
Has Mozilla Firefox accidentally leaked its new logo before the grand reveal?
[Image: CNET]
There has been a leak, people. A big ol’ leak.
Firefox announced last year that there would be a redesign of its sleek fox logo. They even asked their designer fans to help them come up with a new logo design.
After a year of anticipation and waiting to see the new design, well… it’s been leaked a week early.
The redesign was scheduled to be released on June 10th, 2019. Firefox started pumping up their users by releasing a teaser video on Twitter explaining why people should use their browser, as opposed to others, and the benefits that come along with using Firefox.
In the comment section of the video, you’ll see that Sean Martell, Firefox’s communication design leader, posted a picture. And to the standard passer-by, it’s just a cool picture of Mozilla’s new merch. Right? Wrong.
Look closely and amongst the merch, you’ll see the new, sleek design all over the place.
We really dig this new, modern design that Firefox has come out with. The color gradient is similar to the old one, and it looks like the fox got a little grooming on his coat as well.
Also, did you notice that blue-green logo? It’s a big change, but it looks amazing, and we’re living for it.
The logo is still very recognizable and is not as shocking as some other logo redesigns we have seen in the past. When redesigning your logo, it’s almost always a great idea to keep some elements of the original so that your users and clients will still recognize it in a crowd.
This time, the fox is looking toward us, as opposed to the past fox that was looking away from us. It almost gives you the sensation that the fox is looking out for you.
Although the logo redesign is amazing, it doesn’t change the fact that Firefox’s number of monthly users has continued to decline over time. But Mozilla is not about to quit. They’ve come up with impressive safety and user privacy features that completely distinguish Firefox from Google Chrome, its biggest competitor.
Getting back to Mozilla’s new logo design: if you remember, Tim Murray, Mozilla’s creative director, came up with two different styling systems for the redesign last year, system 1 and system 2. To us, it seems like they went forward with the ideas of system 1, and we’re thrilled about it.
With its clean, sleek, and stylized graphics, the new logo still embraces the current flat design trend instead of a skeuomorphic one, staying true to its original design.
Mozilla still hasn’t commented on the early leak, but all in all, we are happy it was released early because we were impatiently waiting to see what Mozilla’s new logo would look like.
If you’re planning a redesign, we have 10 features of a good logo that you need to consider and implement during your redesign process.
Let’s Talk
We want to know what you think about Mozilla’s potential new logo. Was the leak intentional? Was it actually just a PR move?
Let us know down in the comments below what you think and let’s discuss the new design, particularly the green one.
Say you have an image you’re using in an <img> that is 800×600 pixels. Will it actually display as 800px wide on your site? It’s very likely that it will not. We tend to put images into flexible container elements, and the image inside is set to width: 100%;. So perhaps that image ends up as 721px wide, or 381px wide, or whatever. What doesn’t change is that image’s aspect ratio, lest you squish it awkwardly (ignoring the special case of object-fit).
So—we don’t know how much vertical space an image is going to occupy until that image loads. This is the cause of jank! Terrible jank! It’s everywhere and it’s awful.
There are ways to create aspect-ratio sized boxes in HTML/CSS today. None of the options are particularly elegant, relying on the “hack” of setting a zero height and pushing the box’s height with padding. Wouldn’t it be nicer to have a platform feature to help us here? The first crack at fixing this problem that I know about is an intrinsicsize attribute. Eric Portis wrote about how this works wonderfully in Jank-Free Image Loads.
We’d get this:
<img src="image.jpg" intrinsicsize="800x600" />
This is currently behind a flag in Chrome 71+, and it really does help solve this problem.
But…
The intrinsicsize attribute is brand new. It will only help on sites where the developers know about it and take the care to implement it. And it’s tricky! Images tend to be of arbitrary size, and the intrinsicsize attribute will need to be customized on every image to be accurate and do its job. That is, if it makes it out of standards at all.
There is another possibility! Eric also talked about the aspect-ratio property in CSS as a potential solution. It’s also still just a draft spec. You might say, but how is this helpful? It needs to be just as bespoke as intrinsicsize does, meaning you’d have to do it as inline styles to be helpful. Maybe that’s not so bad if it solves a big problem, but inline styles are such a pain to override, and it seems like the HTML attribute approach is more in line with the spirit of images. Think of how srcset is a hint to browsers on what images are available to download, allowing the browser to pick the best one. Telling the browser about the aspect-ratio upfront is similarly useful.
I heard from Jen Simmons about an absolutely fantastic way to handle this: put a default aspect ratio into the UA stylesheet based on the element’s existing width and height attributes. Like this:
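Something along these lines (a sketch of the idea; the exact syntax would be for the spec and browsers to settle):

```css
img, video {
  aspect-ratio: attr(width) / attr(height);
}
```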
It automatically has the correct aspect ratio as the page loads. That’s awesome.
It’s easy to understand.
A ton of the internet already has these attributes sitting on its images.
New developers will have no trouble understanding this, and old developers will be grateful there is little if any work to do here.
I like the idea of the CSS feature. But I like 100 times more the idea of putting it into the UA stylesheet so that the entire web benefits. Changing a UA stylesheet, I’m sure, is no small thing to consider, and I’m not qualified to understand all the implications of that, but it feels like a very awesome thing at first consideration.
The Web Share API is one that has seemingly gone under the radar since it was first introduced in Chrome 61 for Android. In essence, it provides a way to trigger the native share dialog of a device (or desktop, if using Safari) when sharing content — say a link or a contact card — directly from a website or web application.
While it’s already possible for a user to share content from a webpage via native means, they have to locate the option in the browser menu, and even then, there’s no control over what gets shared. The introduction of this API allows developers to add sharing functionality into apps or websites by taking advantage of the native content sharing capabilities on a user’s device.
This approach provides a number of advantages over conventional methods:
The user is presented with a wide range of options for sharing content compared to the limited number you might have in your DIY implementation.
You can improve your page load times by doing away with third-party scripts from individual social platforms.
You don’t need to add a series of buttons for different social media sites and email. A single button is sufficient to trigger the device’s native sharing options.
Users can customize their preferred share targets on their own device instead of being limited to just the predefined options.
A note on browser support
Before we get into the details of how the API works, let’s get the issue of browser support out of the way. To be honest, browser support isn’t great at this time. It’s only available for Chrome for Android, and Safari (desktop and iOS).
This browser support data is from Caniuse, which has more detail. A number indicates that browser supports the feature at that version and up.
Desktop: Chrome: No · Opera: No · Firefox: No · IE: No · Edge: No · Safari: 12.1
Mobile / Tablet: iOS Safari: 12.2 · Opera Mobile: No · Opera Mini: No · Android: No · Android Chrome: 74 · Android Firefox: No
But don’t let that discourage you from adopting this API on your website. It’s pretty easy to implement a fallback for supporting browsers that don’t offer support for it, as you’ll see.
A few requirements for using it
Before you can adopt this API on your own web project, there are two major things to note:
Your website has to be served over HTTPS. To facilitate local development, the API also works when your site is running over localhost.
To prevent abuse, the API can only be triggered in response to some user action (such as a click event).
Here’s an example
To demonstrate how to use this API, I’ve prepared a demo that works essentially the same as it does on my site. Here’s how it looks:
At this moment, once you click the share button, a dialog pops out and shows a few options for sharing the content. Here’s the part of the code that helps us achieve that:
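The handler behind that button is minimal; a sketch, assuming shareButton and shareDialog have already been looked up and the dialog is revealed by an is-open class:

```js
// Clicking the button opens our custom share dialog
shareButton.addEventListener('click', () => {
  shareDialog.classList.add('is-open');
});
```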
Let’s go ahead and convert this example to use the Web Share API instead. The first thing to do is check if the API is indeed supported on the user’s browser as shown below:
if (navigator.share) {
  // Web Share API is supported
} else {
  // Fallback
}
Using the Web Share API is as simple as calling the navigator.share() method and passing an object that contains at least one of the following fields:
url: A string representing the URL to be shared. This will usually be the document URL, but it doesn’t have to be. You can share any URL via the Web Share API.
title: A string representing the title to be shared, usually document.title.
text: Any text you want to include.
Here’s how that looks in practice:
shareButton.addEventListener('click', event => {
  if (navigator.share) {
    navigator.share({
      title: 'WebShare API Demo',
      url: 'https://codepen.io/ayoisaiah/pen/YbNazJ'
    }).then(() => {
      console.log('Thanks for sharing!');
    }).catch(console.error);
  } else {
    // fallback
  }
});
At this point, once the share button is clicked in a supported browser, the native picker will pop out with all the possible targets that the user can share the data with. Targets can be social media apps, email, instant messaging, SMS or other registered share targets.
The API is promise-based, so you can attach a .then() method to perhaps display a success message if the share was successful, and handle errors with .catch(). In a real-world scenario, you might want to grab the page’s title and URL using this snippet:
For the URL, we first check if the page has a canonical URL and, if so, use that. Otherwise, we grab the href off document.location.
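Factored out, that URL-picking logic might look like this (a sketch; the helper name pickShareUrl is made up):

```javascript
// Hypothetical helper: prefer the page's canonical URL when one exists,
// otherwise fall back to the current location.
function pickShareUrl(canonicalHref, locationHref) {
  return canonicalHref ? canonicalHref : locationHref;
}
```

In the browser, you would call it with the canonical link’s href (if any) and document.location.href, e.g. pickShareUrl(canonical && canonical.href, document.location.href).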
Providing a fallback is a good idea
In browsers where the Web Share API isn’t supported, we need to provide a fallback mechanism so that users on those browsers still get some sharing options.
In our case, we have a dialog that pops out with a few options for sharing the content and the buttons in our demo do not actually link to anywhere since, well, it’s a demo. But if you want to learn about how you can create your own links to share web pages without third-party scripts, Adam Coti’s article is a good place to start.
What we want to do is display the fallback dialog for users on browsers without support for the Web Share API. This is as simple as moving the code that opens the share dialog into the else block:
shareButton.addEventListener('click', event => {
  if (navigator.share) {
    navigator.share({
      title: 'WebShare API Demo',
      url: 'https://codepen.io/ayoisaiah/pen/YbNazJ'
    }).then(() => {
      console.log('Thanks for sharing!');
    }).catch(console.error);
  } else {
    shareDialog.classList.add('is-open');
  }
});
Now, all users are covered regardless of what browser they’re on. Here’s a comparison of how the share button behaves on two mobile browsers, one with Web Share API support, and the other without:
Try it out! Use a browser that supports Web Share, and one that doesn’t. It should work similarly to the above demonstration.
This covers pretty much the baseline for what you need to know about the Web Share API. By implementing it on your website, visitors can share your content more easily across a wider variety of social networks, with contacts and other native apps.
Although browser support is spotty, a fallback is easily implemented, so I see no reason why more websites shouldn’t adopt this. If you want to learn more about this API, you can read the specification here.
Have you used the Web Share API? Please share your experience in the comments.
Hey, so we talked a little bit about An Event Apart Boston 2019 leading up to the event and now there are a ton of resources available from it. I stopped counting the number of links after 50 because there’s way more than that. Seriously, there’s stuff in there on subgrid, working with CSS Regions, design systems, using prefers-reduced-motion… and much, much more, so check it out.
And, while you’re at it, you should consider attending the next installment of An Event Apart. It takes place July 29-31 in Washington D.C. and seating — as you might expect — is limited. Like Boston, you can expect to get a treasure trove of useful information, educational content, and valuable training on topics that will help you sharpen your front-end chops and grow your career. Plus, the best part is getting to meet the rest of the great folks at the event — that’s where your network grows and real conversations take place.
Can’t make it to Washington D.C.? No worries, because An Event Apart is also slated to take place in Chicago (August 26-28), Denver (October 28-30) and San Francisco (December 9-11). Now’s the time to start planning your trip and begging your boss for a well-deserved self-investment in leveling up.
And if you’re wondering whether we have a discount code for you… of course we do! Enter the code AEACP at checkout to knock $100 off the price.