Archive

Archive for November, 2018

5 Trends of Voice UI Design

November 14th, 2018 No comments

At its core, the concept of interaction was always about communication. Human-Computer Interaction has never been about graphical user interfaces, which is why Voice User Interfaces (VUIs) are the future of user interface design.

An interface is just a medium people use to interact with a system—whether it's a GUI, VUI or something else. So why is VUI so important? Two reasons:

1. Conversational interfaces are so fascinating because conversation is a form of communication everyone understands.

  1. It’s a natural means of interaction. People associate voice communication with other people rather than with technology.
  2. Users don’t need to learn to interpret any symbology or new terminology (the language of GUI), they can use English (or any other native language) to operate with a system. It doesn’t mean that users don’t have to learn how to use a system but the learning curve be reduced significantly.

2. User expectations are changing. According to Statista, 39% of millennials use voice search. This audience is ready to be the early adopters of VUI systems.

Top 5 VUI Trends

When it comes to designing VUI, voice interaction represents the biggest UX challenge for designers since the birth of the original iPhone. But the great news is that the most fundamental principles of UI design that we use when creating products with GUI are still applicable to VUI design. Below you can find a few trends that will shape VUI design in the next decades.

1. VUI That Builds Trust

Trust helps to build a bridge between a person and a machine. If trust is absent, users will be unlikely to interact with a particular voice user interface.

The importance of a valid outcome (the VUI should give the person confidence that s/he will receive exactly what s/he requested). It's possible to achieve this goal by focusing on the following things:

  • Improving the accuracy of speech recognition (more sophisticated NLP algorithms).
  • Focusing on understanding the user’s intent (a reason for interacting in the first place). When users interact with a system, they have a particular problem they want to solve, and the goal of the designer is to understand what this problem is.
  • Providing meaningful error messages.
  • Crafting contextually driven flows. While it's impossible to predict all commands that users might ask the system, designers need to at least design a user flow that is contextually driven. The system should anticipate users' intent at each point of a conversation and provide users with information on what they can do next. For example, when a user searches for a restaurant nearby, the system should match exactly what the user is looking for.

The importance of user control (user control and freedom, one of the 10 Usability Heuristics for User Interface Design by Jakob Nielsen, is still applicable to VUI design).

  • The system should consider the natural limitations of the human brain (short-term memory limitations). The information provided by the system should not be overwhelming. When people hear the system response, most users remember only the last phrase. Thus, it's better to stay away from long phrases or from providing a dozen different options when the user can remember just a couple of them at a time.
  • The system should react to a user request with appropriate feedback. This feedback should give users a full understanding of what the system is doing right now. For example, visual feedback lets the user know that the system is ready and listening, or shows a POD (Process of Doing). When a user sends a request to the system, the system shows a POD. A POD isn't a loading animation; it doesn't just state the fact that users have to wait while the system is doing something, it provides valuable information about what the system is doing. For example, a POD for a command to pull a file out of Dropbox might look like someone searching for the right file in storage.

2. Adaptive User Interface

An adaptive user interface (also known as AUI) is a user interface (UI) which adapts to the needs of the user or context. The VUI of the future will adapt to users — the system will analyze all the information it has about them (including information about their current mental state and health condition) and their current context to provide more relevant responses to their requests.

For example, if a user has high blood pressure at the moment and decides to set a meeting in two hours, a digital assistant might suggest avoiding that, or suggest lowering their blood pressure with exercise before the meeting starts.

3. VUI That Conveys Personality

Visual designers have a lot of options for introducing personality into graphical user interfaces – fonts, color, illustration, motion, just to name a few. But what about VUI? Designers can convey personality using language itself — by playing with words, voice, and tone. Speaking of voice, a voice is part of the persona and it shapes its identity. Once we've associated a voice with something, it becomes part of its identity. And we experience emotions when we interact with such an interface, just like we do when we interact with real people. People want human-understandable voices — not a voice that sounds human, but a voice that speaks coherently human!

Bad example: the Siri voice by Susan Bennett – a voice that sounds almost human, but people still know that it's a machine. You can't really have a dialogue with Siri. While you can ask Siri something like, "What is the weather like today?", you can't ask more sophisticated questions such as, "What should I wear today?" As a result, you don't have deep feelings for Siri; you know it's just a robot.

Good example: the Samantha voice from the film Her — a voice that sounds coherently human and that people can fall in love with.

4. From Narrow AI Towards General Intelligence

Human-computer interactions are shifting to conversation, but users expect more. Most AI systems available today are still limited to Narrow AI — such systems use Machine Learning to solve a clearly defined (and, in most cases, way too narrow) problem. Narrow AIs have zero knowledge outside of their training data. It means that when a user wants to solve a slightly different problem, or the problem itself evolves, the system won't be able to solve it and will respond with something like, "I don't understand." As a user, you hit a wall.

In comparison to Narrow AI, General Intelligence is not limited to narrow domains. The concept of learning is at the foundation of GI systems — the fundamental difference between Narrow AI and General AI is that General Intelligence systems learn without being expressly programmed (machines learn by themselves). A GI system uses two types of learning — reinforcement learning (when a system uses all available information to solve a particular user problem) and supervised learning (when a system needs user assistance to solve a problem for the first time). Another difference is that a General AI system can learn to utilize other AIs for general and specific purposes. As a result, different Machine Learning models can be trained dependently and work cooperatively. An advanced NLP GI system is able to learn from the first attempt by combining and processing information from multiple different data sources.

5. Impact on Society

Widespread acceptance of VUI systems. Improving the quality of VUI AI-based systems will lead to better user engagement. The relationship between humans and computers will be interactive and collaborative — people and computers will work together. This will impact society — just imagine that in ten years, you'll walk into your house and simply talk to control all kinds of machines.

This future will come with omnipresent AI: as users, we'll trust AI even with the most important decisions, such as "What school should I choose for my children?" VUI will also improve the quality of life of older people and people with disabilities.

Conclusion

“The best interface is no interface” is a famous quote from Golden Krishna, the author of the book The Best Interface Is No Interface. He and many other designers believe that people don't want more time with screens; in fact, they want less. Thus, technology should stop celebrating screen-based solutions. And it'll happen relatively soon — the interactions of the future won't be made of buttons.

With the rise of computer processing power, we’ll have more systems that will be able to calculate up to 1000 steps in 1 second. A user and a machine will work together, enabling General Intelligence.

Featured image via DepositPhotos.


Source

Categories: Designing, Others Tags:

Use Case For Augmented Reality In Design

November 14th, 2018 No comments

Use Case For Augmented Reality In Design


Suzanne Scacca

2018-11-14T14:00:49+01:00 · 2018-11-14T13:45:20+00:00

Augmented reality has been on marketers’ minds for years now — and there’s a good reason for it. Augmented reality (or AR) is a technology that layers computer-generated images on top of the real world. With the pervasiveness of the mobile device around the globe, the majority of consumers have instant access to AR-friendly devices. All they need is a smartphone connected to the Internet, a high-resolution screen, and a camera viewfinder. It’s then up to you as a marketer or developer to create digital animations to superimpose on top of their world.

This reality-bending technology is consistently named as one of the hot development and design trends of the year. But how many businesses and marketers are actually making use of it?

As with other cutting-edge technologies, many have been reluctant to adopt AR into their digital marketing strategy.

Part of it is due to the upfront cost of using and implementing AR. There’s also the learning curve to think about when it comes to designing new kinds of interactions for users. Hesitation may also come from marketers and designers because they’re unsure of how to use this technology.

Augmented reality has some really interesting use cases that you should start exploring for your mobile app. The following post will provide you with examples of what’s being done in the AR space now and hopefully inspire your own efforts to bring this game-changing tech to your mobile app in the near future.


Augmented Reality: A Game-Changer You Can’t Ignore

Unlike virtual reality, which requires users to purchase pricey headsets in order to be immersed in an altered experience, augmented reality is a more feasible option for developers and marketers. All your users need is a device with a camera that allows them to engage with the external world, instead of blocking it out entirely.

And that’s essentially the crux of why AR will be so important for mobile app companies.

This is a technology that enables mobile app users to view the world through your “filter.” You’re not asking them to get lost in another reality altogether. Instead, you want to merge their world with your own. And this is something websites have been unable to accomplish as most interactions are lacking in this level of interactivity.

Let’s take e-commerce websites, for example. Although e-commerce sales increase year after year, people still flock to brick-and-mortar stores in droves (especially for the holiday season). Why? Well, part of it has to do with the fact that they can get their hands on products, test things out and talk to people in real time as they ponder a purchase. Online, it’s a gamble.

As you can imagine, AR in a mobile app can change all that. Augmented reality allows for more meaningful engagements between your mobile app (and brand) and your user. That’s not all though. Augmented reality that connects to geolocation features could make users’ lives significantly easier and safer too. And there’s always the entertainment application of it.

If you’re struggling with retention rates for your app, developing a useful and interactive AR experience could be the key to winning more loyal users in the coming year.

Inspiring Examples Of Augmented Reality

To determine what kind of augmented reality makes the most sense for your website or app, look to examples of companies that have already adopted and succeeded in using this technology.

As Google suggests:

“Augmented reality will be a valuable addition to a lot of existing web pages. For example, it can help people learn on education sites and allow potential buyers to visualize objects in their home while shopping.”

But those aren’t the only applications of AR in mobile apps, which is why I think many mobile app developers and marketers have shied away from it thus far. There are some really interesting examples of this out there though, and I’d like to introduce you to them in the hopes it’ll inspire your own efforts in 2019 and beyond.

Social Media AR

For many of us, augmented reality is already part of our everyday lives, whether we’re the ones using it or we’re viewing content created by others using it. What am I talking about? Social media, of course.

There are three platforms, in particular, that make use of this technology right now.

Snapchat was the first:


Snapchat filter
Trying out a silly filter on Snapchat (Source: Snapchat) (Large preview)

Snapchat could have included a basic camera integration so that users could take and send photos and videos of themselves to others. But it’s taken it a step further with face mapping software that allows users to apply different “filters” to themselves. Unlike traditional filters which alter the gradients or saturation of a photo, however, these filters are often animated and move as the user moves.

Instagram is another social media platform that has adopted this tech:


Instagram filter
Instagram filters go beyond making a face look cute. (Source: Instagram) (Large preview)

Instagram’s Stories allow users to apply augmented filters that “stick” to the face or screen. As with Snapchat, there are some filters that animate when users open their mouths, raise their eyebrows or make other movements with their faces.

One other social media channel that’s gotten into this — that isn’t really a social media platform at all — is Facebook’s Messenger service:


Messenger filters
Users can have fun while sending photos or video chatting on Messenger. (Source: Messenger) (Large preview)

Seeing as how users have flocked to AR filters on Snapchat and Instagram, it makes sense that Facebook would want to get in on the game with its mobile property.

Use Case

Your mobile app doesn’t have to be a major social network in order to reap the benefits of image and video filters.

If your app provides a networking or communication component — in-app chat with other users, photo uploads to profiles and so on — you could easily adopt similar AR filters to make the experience more modern and memorable for your users.

Video Objects AR

It’s not just your users’ faces that can be mapped and altered through the use of augmented reality. Spaces can be mapped as well.

While I will go on to talk about pragmatic applications of space mapping and AR shortly, I do want to address another way in which it can be used.

Take a look at 3DBrush:

Adding 3D objects to video with 3DBrush. (Source: 3DBrush)

At first glance, it might appear to be just another mobile app that enables users to draw on their photos or videos. But what’s interesting about this is the 3D and “sticky” aspects of it. Users can draw shapes of all sizes, colors and complexities within a 3D space. Those elements then stick to the environment. No matter where the users’ cameras move, the objects hold in place.

LeoApp AR is another app that plays with space in a fun way:


LeoApp surface mapping
LeoApp maps a flat surface for object placement. (Source: LeoApp AR) (Large preview)

As you can see here, I’m attempting to map this gorilla onto my desk, but any flat surface will do.

Dancing gorilla projection
A gorilla dances on my desk, thanks to LeoApp AR. (Source: LeoApp AR)

I now have a dancing gorilla making moves all over my workspace. This isn't the only kind of animation you can put into place, and it's not the only size either. There are other holographic animations that can be sized to fit your actual physical space, for example if you wanted to chill out side-by-side with them or have them accompany you as you give a presentation.

Use Case

The examples I’ve presented above aren’t the full representation of what can be done with these mobile apps. While users could use these for social networking purposes (alongside other AR filters), I think an even better use of this would be to liven up professional video.

Video plays such a big part in marketing and will continue to do so in the future. It’s also something we can all readily do now with our smartphones; no special equipment is needed.

As such, I think that adding 3D messages or objects into a branded video might be a really cool use case for this technology. Rather than tailor your mobile app to consumers who are already enjoying the benefits of AR on social media, this could be marketed to businesses that want to shake things up for their brand.

Gaming AR

Thanks to all the hubbub surrounding Pokémon Go a few years back, gaming is one of the better known examples of augmented reality in mobile apps today.


Pokemon Go animates environment
My dog hides in the bushes from Pokemon. (Source: Pokémon Go) (Large preview)

The app is still alive and well and that may be because we’re not hearing as many stories about people becoming seriously injured (or even dying) from playing it anymore.

This is something that should be taken into close consideration before developing an AR mobile app. When you ask users to take part in augmented reality outside the safety of a confined space, there's no way to control what they do afterwards. And that could do some serious damage to your brand if users get injured while playing or just generally wreak havoc out in the public forum (like all those Pokémon Go users who were banned from restaurants).

This is probably why we see AR more used in games like AR Sports Basketball these days.

Play basketball anywhere
Users can map a basketball hoop onto any flat surface with AR Sports Basketball. (Source: AR Sports Basketball)

The app maps a flat surface — be it a smaller version on a desk or a larger version placed on your floor — and allows users to shoot hoops. It’s a great way to distract and entertain oneself or even challenge friends, family or colleagues to a game of HORSE.

Use Case

You could, of course, build an entire mobile app around an AR game as these two examples have shown.

You could also think of ways to gamify other mobile app experiences with AR. I imagine this could be used for something like a restaurant app. For example, a pizza restaurant wants to get more users to install the app and to order food from them. With a big sporting event like the Super Bowl coming up, a “Play” tab is added to the app, letting users throw pizzas down the field. It would certainly be a fun distraction while waiting for their real pizzas to arrive.

Bottom line: get creative with this. AR games aren’t just for gaming apps.

Home Improvement AR

As you’ve already seen, augmented reality enables us to map physical spaces and stick interactive objects to them. In the case of home improvement, this technology is being used to help consumers make purchasing decisions from the comfort of their home (or at their job or on their commute to work, etc.)

IKEA is one such brand that’s capitalized on this opportunity.


 IKEA product placement
Place IKEA products around your home or office. (Source: IKEA) (Large preview)

To start, here is my attempt at shopping for a new desk for my workspace. I selected the product I was interested in and then I placed it into my office. Specifically, I put the accurately sized 3D desk projection in front of my current desk, so I could get a sense for how the two differ and how this new one would fit.

While product specifications online are all well and good, consumers still struggle with making purchases since they can’t truly envision how those products will (physically) fit into their lives. The IKEA Place app is aiming to change all of that.


IKEA product search
Take a photo with the IKEA app and search related products. (Source: IKEA) (Large preview)

The IKEA app is also improving the shopping experience with the feature above.

Users open their camera and point it at any object they find in the real world. Maybe they were impressed by a bookshelf they saw at a hotel they stayed in or they really liked some patio chairs their friends had. All they have to do is snap a picture and let IKEA pair them with products that match the visual description.


IKEA search results
IKEA pairs app users with relevant product results. (Source: IKEA) (Large preview)

As you can see, IKEA has given me a number of options not just for the chair I was interested in, but also a full table set.

Use Case

If you have or want to build a mobile app that sells products to B2C or B2B consumers and these products need to fit well into their physical environments, think about what a functionality like this would do for your mobile app sales. You could save time having to schedule on-site appointments or conduct lengthy phone calls whereby salespeople try to convince them that the products, equipment or furniture will fit. Instead, you let the consumers try it for themselves.

Self-Improvement AR

It’s not just the physical spaces of consumers that could use improvement. Your mobile app users want to better themselves as well. In the past, they’d either have to go somewhere in person to try on the new look or they’d have to gamble with an online purchase. Thanks to AR, that isn’t the case anymore.

L’Oreal has an app called Style My Hair:


L'Oreal hair color tryout
Try out a new realistic hair color with the L’Oreal app. (Source: Style My Hair) (Large preview)

In the past, these hair color tryouts used to look really bad. You’d upload a photo of your face and the website would slap very fake-looking hair onto your head. It would give users an idea of how the color or style worked with their skin tone, eye shape and so on, but it wasn’t always spot-on which would make the experience quite unhelpful.

As you can see here, not only does this app replace my usually mousy-brown hair color with a cool new blond shade, but it stays with me as I turn my head around:


L'Oreal hair mapping example
L’Oreal applies new hair color any which way users turn. (Source: Style My Hair) (Large preview)

Sephora is another beauty company that’s taking advantage of AR mapping technology.


Sephora makeup testing
Try on beauty products with the Sephora app. (Source: Sephora) (Large preview)

Here is an example of me feeling not so sure about the makeup palette I’ve chosen. But that’s the beauty of this app. Rather than force customers to buy a bunch of expensive makeup they think will look great or to try and figure out how to apply it on their own, this AR app does all the work.

Use Case

Anyone remember the movie The Craft? I totally felt like that using this app.

The Craft magic
The Craft hair-changing clip definitely inspired this example. (Source: The Craft)

If your app sells self-improvement or beauty products, or simply advises users on next steps they should take, think about how AR could transform that experience. You want your users to be confident when making big changes — whether it be how they wear their makeup for date night or the next tattoo they put on their body. This could be what convinces them to take the leap.

Geo AR

Finally, I want to talk about how AR has and is about to transform users’ experiences in the real world.

Now, I’ve already mentioned Pokémon Go and how it utilizes the GPS of a user’s mobile device. This is what enables them to chase those little critters anywhere they go: restaurants, stores, local parks, on vacation, etc.

But what if we look outside the box a bit? Geo-related AR doesn’t just help users discover things in their physical surroundings. It could simply be used as a way to improve the experience of walking about in the real world.

Think about the last time you traveled to a foreign destination. You may have used a translation guidebook to look up phrases you didn’t know. You might have also asked your voice assistant to translate something for you. But think about how great it would be if you didn’t have to do all that work to understand what’s right in front of you. A road sign. A menu. A magazine article.

The Google Translate app is attempting to bridge this divide for us:


Google Translate camera search
Google Translate uses the camera to find foreign text. (Source: Google Translate) (Large preview)

In this example, I’ve scanned an English phrase I wrote out: “Where is the bathroom?” Once I selected the language I wanted to translate from and to, as well as indicated which text I wanted to focus on, Google Translate attempted to provide a translation:


Google provides a translation
Google Translate provides a translation of photographed text. (Source: Google Translate) (Large preview)

It’s not 100% accurate — which may be due to my sloppy handwriting — but it would certainly get the job done for users who need a quick way to translate text on the go.

Use Case

There are other mobile apps that are beginning to make use of this geo-related AR.

For instance, there’s one called Find My Car that I took for a test spin. I don’t think the technology is fully ready yet as it couldn’t accurately “pin” my car’s location, but it’s heading in the right direction. In the future, I expect to see more directional apps — especially, Google and Apple Maps — use AR to improve directional awareness and guidance for users.

Wrapping Up

There are challenges in using AR, that’s for sure. The cost of developing AR is one. Finding the perfect application of AR that’s unique to your brand and truly improves the mobile app user experience is another. There’s also the fact it requires users to download a mobile app, so there’s a lot of work to be done to motivate them to do so.

Gimmicks just won’t work — especially if you expect users to download your app and make use of it (remember: retention rates aren’t just about downloads). You have to make the augmented reality feature something that’s worth engaging with. The first place to start is with your data. As Jordan Thomson wrote:

“AR is a lot more dependent on customer activity than VR, which is far older technology and is perhaps most synonymous with gaming. Designers should make use of big data and analytics to understand their customers’ wants and needs.”

I’d also advise you to spend some time in the apps above. Get a sense for how the technology works and discover what makes it so appealing on a personal level. Compare it to your own mobile app’s goals and see if there’s a way to take AR from just being an idea you’re tossing around to a reality.

Smashing Editorial (ra, yk, il)
Categories: Others Tags:

Foundation 6.5 Released

November 14th, 2018 No comments

After an extended wait, Zurb have finally released the much anticipated version 6.5 of its popular Foundation framework. (This is in place of the originally intended 6.4.4 release.)

So, what has changed? Well, this is not a major release so we’re not getting CSS Grid yet, but the changelog is pretty long. The folks behind this release have grouped the changes into 3 categories: improved stability, improved accessibility, and improved browser support.

What this means is that there are a lot of bug fixes, but not so much in the way of new features. There are also a lot of fixes to the documentation. But that’s not to say this is a bad thing; in fact, it’s a very good thing.

A few changes might result in changes to the appearance of your work so it is worth checking

First of all, there are no breaking changes and 6.5 is fully compatible with previous versions. A few changes might result in changes to the appearance of your work so it is worth checking. But the good news is that many of the fixes are to correct unexpected behaviour.

Work has been done on dynamically created components, to ensure they initialize correctly, and an issue with older browsers handling breakpoints badly has been improved. Support for smaller font sizes is also improved.

At the heart of Foundation is its XY Grid, and this release sees some very welcome improvements, including fixes for some slightly dodgy behaviour; a vertical frame will now take the full height, and grid frames now get the vertical scrollbar they should.

The Abide form validation plugin has been improved to allow for escaped characters in URLs, and a11y attributes will be set automatically on form fields, labels, and on errors. There have been accessibility improvements in all the menu plugins, too, with accessibility best practices being applied to other components as well.

Possibly the biggest improvement on the JavaScript front is that not only is Foundation’s JavaScript now provided as CommonJS, ES Modules and ES6 bundles, it is also distributed by default as a UMD bundle. This means you can import Foundation in any JavaScript environment, including Node.js, with no transpilation necessary.

the point of using a framework like Foundation is so that you don’t need to peek behind the curtain

Many of the changes in this release will probably not jump out at most users of Foundation, but a lot of work has gone on behind the scenes; not just into what it does, but into how it actually does it. And the point of using a framework like Foundation is so that you don’t need to peek behind the curtain. But, at the same time, it’s good to know that the wizard is in tip-top shape.

While Foundation 6.5 does not have lots of showy new features, it does have a sleeker, more finely tuned engine. Its developers have listened to its community and have done a lot to address issues and niggles that the community raised.

Ease of use should be increased, and anything that helps ease the frustrations of making designs look good on all browsers and all viewport sizes is definitely to be welcomed. Plus, the docs have been improved with some better examples and more comprehensive coverage in places.

For a little while there, it looked as though Foundation might have reached its terminus, but it seems they checked the vault and found some inspiration.


Source

Categories: Designing, Others Tags:

Thoughts on the iPhone Xs Max

November 14th, 2018 No comments

As a typical post-industrial type designer, I’ve given up the need to buy new stuff. This MacBook Retina I’m writing this post on is Mid 2012. So such new product “reviews” are pretty rare here. However, due to extensive casing damage on my iPhone 6+, I decided to take a painful (cost wise) jump to the iPhone Xs Max.

Here are some of my thoughts:

  • The full edge-to-edge screen is just amazing. Looking back at my iPhone 6+, the black borders around the screen do look outdated. Even the iPhone XR’s screen edges seem a bit chunky.
  • Apple’s investment in the A12 chip has paid dividends. Screen swipes and iOS 12’s interaction design are just amazingly smooth and fluid.
  Instagram post: “iPhone Xs radius offset. Just beautiful.” (shared by Brian Ling, @mrbrianling, on Oct 24, 2018 at 2:17am PDT)

  • As an Industrial Designer, I marvel at and appreciate the effort taken in the housing design and construction. You do pay for what you get.
  • I went with Silver as black looks oily after a while. I decided on going with 256 GB of storage for the phone rather than 512 GB. Unless you are doing a lot of video work, I don’t think the $500 up-cost for 512 GB is worth it. This is a break from my usual practice of maxing out the memory on every phone. I think we are hitting the point of memory sufficiency where, just like computing sufficiency (processors are already faster than most humans need), we don’t really need any more space. Well, most of us anyway.
  • Finally, the iPhone Xs Max is not cheap. You are basically paying for a laptop in your pocket + a good meal. How much more can Apple push the price? In a saturated mobile phone market, Apple seems to be the only one that keeps on raising prices. You know why? Apple does not compete on price. And you know what? Many will find a way to justify buying this product. Mine was “if this is going to last me another 4 to 5 years, the price amortizes well.” Yeah…right. Heh heh.

Love to hear your thoughts. Please leave your comments below.

The post Thoughts on the iPhone Xs Max appeared first on Design Sojourn.

Categories: Designing, Others Tags:

CSS and Network Performance

November 13th, 2018 No comments

JavaScript and images tend to get the bulk of the blame for slow websites, but Harry explains very clearly why CSS is equally to blame and harder to deal with:

  1. A browser can’t render a page until it has built the Render Tree;
  2. the Render Tree is the combined result of the DOM and the CSSOM;
  3. the DOM is HTML plus any blocking JavaScript that needs to act upon it;
  4. the CSSOM is all CSS rules applied against the DOM;
  5. it’s easy to make JavaScript non-blocking with async and defer attributes;
  6. making CSS asynchronous is much more difficult;
  7. so a good rule of thumb to remember is that your page will only render as quickly as your slowest stylesheet.

There are lots of options to do better with this, including some interesting things that HTTP/2 unlocks.

Check out Šime Vidas’s takeaways as well. It’s all fascinating, but the progressive rendering stuff is particularly cool. I suspect many CSS-in-JS libraries could/should help with doing things this way.

Direct Link to ArticlePermalink

The post CSS and Network Performance appeared first on CSS-Tricks.

Categories: Designing, Others Tags:

The “C” in CSS: The Cascade

November 13th, 2018 No comments

Following up from Geoff’s intro article on The Second “S” in CSS, let’s now move the spotlight to the “C” in CSS — what we call the Cascade. It’s where things start to get messy, and even confusing at times.

Have you ever written a CSS property and the value doesn’t seem to work? Maybe you had to turn to using !important to get it going. Or perhaps you resorted to writing the CSS inline on the element in the HTML file.

<div style="background:orange; height:100px; width:100px;">
  Ack, inline!
</div>

Speaking of inline styles, have you wondered why SVG editors use them instead of a separate CSS file? That seems kinda weird, right?

<svg id="icon-logo-star" viewBox="0 0 362.62 388.52" width="100%" height="100%">
  <style>
    .logo {
      fill: #ff9800;
    }
  </style>
  <title>CSS Tricks Logo</title>
  <path class="logo" d="M156.58 239l-88.3 64.75c-10.59 7.06-18.84 11.77-29.43 11.77-21.19 0-38.85-18.84-38.85-40 0-17.69 14.13-30.64 27.08-36.52l103.6-44.74-103.6-45.92C13 142.46 0 129.51 0 111.85 0 90.66 18.84 73 40 73c10.6 0 17.66 3.53 28.25 11.77l88.3 64.75-11.74-104.78C141.28 20 157.76 0 181.31 0s40 18.84 36.5 43.56L206 149.52l88.3-64.75C304.93 76.53 313.17 73 323.77 73a39.2 39.2 0 0 1 38.85 38.85c0 18.84-12.95 30.61-27.08 36.5l-103.61 45.91L335.54 239c14.13 5.88 27.08 18.83 27.08 37.67 0 21.19-18.84 38.85-40 38.85-9.42 0-17.66-4.71-28.26-11.77L206 239l11.77 104.78c3.53 24.72-12.95 44.74-36.5 44.74s-40-18.84-36.5-43.56z"></path>
</svg>

Well, the cascade has a lot to do with this. Read on to find out how styling methods affect what’s being applied to your elements and how to use the cascade to your advantage because, believe me, it’s a wonderful thing when you get the hang of it.

TL;DR: Jump right to the CSS order diagram for a visual of how everything works.

The cascade cares about how and where styles are written

There are a myriad of ways you can apply CSS rules to an element. Below is an example of how stroke: red; can be applied to the same element. The examples are ordered in ascending priority, where the highest priority is at the bottom:

<!-- Inheritance -->
<g style="stroke: red">
  <rect x="1" y="1" width="10" height="10" /> <!-- inherits stroke: red -->
</g>

<!-- Inline attributes -->
<rect x="1" y="1" width="10" height="10" stroke="red" />

<!-- External style sheet -->
<link rel="stylesheet" href="/path/to/stylesheet.css">

<!-- Embedded styles -->
<style>
  rect { stroke: red; }
</style>

<!-- Different specificity or selectors -->
rect { stroke: red; }
.myClass { stroke: red; }
#myID { stroke: red; }

<!-- Inline style -->
<g style="stroke: red"></g>

<!-- Important keyword -->
<g style="stroke: red !important"></g>

Inheritance? Embedded? External? Inline? Specificity? Important? Yeah, lots of terms being thrown around. Let’s break those down a bit because each one determines what the browser ends up using when a web page loads.

Elements can inherit styles from other elements

Both HTML and SVG elements can inherit CSS rules that are applied to other elements. We call this a parent-child relationship, where the element the CSS is applied to is the parent and the element contained inside the parent is the child.

<div class="parent">
  <div class="child">I'm the child because the parent is wrapped around me.</div>
</div>

If we set the text color of the parent and do not declare a text color on the child, then the child will look up to the parent to know what color its text should be. We call that inheritance and it’s a prime example of how a style cascades down to an element it matches… or “bubbles up” the chain to the next matched style.

However, inheritance has the lowest priority among styling methods. In other words, if a child has a rule that is specific to it, then the inherited value will be ignored, even though the inherited value may have an important keyword. The following is an example:

<div class="parent" style="color: red !important;">
  <div class="child">I'm the child because the parent is wrapped around me.</div>
</div>

See the Pen Child ignores inline inheritance with !important by Geoff Graham (@geoffgraham) on CodePen.

SVG inline attributes

For SVG elements, we can also apply styles using inline attributes, where those have the second lowest priority in the cascade. This means the CSS rules in a stylesheet will be able to override them.

<rect x="1" y="1" width="10" height="10" stroke="red" />
rect {
  stroke: blue;
}

See the Pen Stylesheet overrides SVG inline attributes by Geoff Graham (@geoffgraham) on CodePen.

Most SVG editors use inline attributes for portability; that is, the ability to copy some elements and paste them elsewhere without losing the attributes. Users can then use the resultant SVG and style its elements using an external stylesheet.

Stylesheets

Stylesheets are divided into two flavors: external and embedded:

<!-- External style sheet -->
<link rel="stylesheet" href="/path/to/stylesheet.css">

<!-- Embedded styles -->
<style>
  div { border: 1px solid red }
</style>

Embedded styles have a higher priority than external stylesheets. Therefore, if you have the same CSS rules, those in the embedded style will be applied.

See the Pen Embedded styles override stylesheet rules by Geoff Graham (@geoffgraham) on CodePen.

All stylesheets also follow ordering rules, where files that are defined later will have higher priority than those defined earlier. In this example, stylesheet-2.css will take precedence over the stylesheet-1.css file because it is defined last.

<link rel="stylesheet" href="/path/to/stylesheet-1.css">
<link rel="stylesheet" href="/path/to/stylesheet-2.css">

Specificity or selectors

How you select your elements will also determine which rules are applied, whereby tags (e.g. div), classes (e.g. .my-class) and IDs (e.g. #my-id) have ascending priorities.

See the Pen Specificity by selectors by Geoff Graham (@geoffgraham) on CodePen.

In the example above, if you have a div element with both .my-class and #my-id, the border will be red because IDs have higher priority than classes and tags.

*Specificity has higher priority than ordering rules, so irrespective of whether your rule is at the top or the bottom, the more specific selector still wins and will be applied.

Ordering

CSS rules always prioritize from left-to-right, then from top-to-bottom.

<!-- Blue will be applied because it is on the right -->
<div style="border: 1px solid red; border: 1px solid blue;"></div> 

<style>
  div {
    border: 1px solid red;
    border: 1px solid blue; /* This will be applied because it is at the bottom */
  }
</style>

Inline styles

Inline styles have the second highest priority, just below the !important keyword. This means that inline styles are only overridden by the important keyword and nothing else. Within inline styles, the normal ordering rules apply, from left-to-right and top-to-bottom.

<div style="border: 1px solid red;"></div>

The important keyword

Speaking of the !important keyword, it is used to override ordering, specificity and inline rules. In other words, it wields incredible powers.

Overriding inline rules

<style>
  div {
    /* This beats inline styling */
    border: 1px solid orange !important;
    /* These do not */
    height: 200px;
    width: 200px;
  }
</style>

<div style="border: 1px solid red; height: 100px; width: 100px;"></div>

In the example above, without the important keyword, the div would have a red border because inline styling has higher priority than embedded styles. But, with the important keyword, the div border becomes orange, because the important keyword has higher priority than inline styling.

Using !important can be super useful, but should be used with caution. Chris has some thoughts on situations where it makes sense to use it.

Overriding specificity rules

Without the important keyword, this div border will be blue, because classes have higher priority than tags in specificity.

<style>
  /* Classes have higher priority than tags */
  .my-class {
    border: 1px solid blue;
    height: 100px;
    width: 100px;
  }
  
  div { 
    border: 1px solid red;
    height: 200px;
    width: 200px;
  }
</style>

<div class="my-class"></div>

See the Pen Classes beat tags by Geoff Graham (@geoffgraham) on CodePen.

But! Adding the important keyword to the tag rules tells the element to ignore the cascade and take precedence over the class rules.

<style>
  .my-class { border: 1px solid red; }
  
  /* The important keyword overrides specificity priority */
  .my-class { border: 1px solid blue !important; }
</style>

<div class="my-class"></div>

See the Pen !important ignores the cascade by Geoff Graham (@geoffgraham) on CodePen.

Overriding ordering rules

OK, so we’ve already talked about how the order of rules affects specificity: bottom beats top and right beats left. The surefire way to override that is to put !important into use once again.

In this example, the div will take the red border, even though the blue border is the bottom rule. You can thank !important for that handiwork.

<style>
  div { border: 1px solid red !important; } /* This wins, despite the ordering */
  div { border: 1px solid blue; }
</style>

<div></div>

See the Pen Important wins over ordering by Geoff Graham (@geoffgraham) on CodePen.

Visualizing the cascade

Who knew there was so much meaning in the “C” of CSS? We covered a ton of ground here and hopefully it helps clarify the way styles are affected and applied by how we write them. The cascade is a powerful feature. There are opinions galore about how to use it properly, but you can see the various ways properties are passed and inherited by elements.

More of a visual learner? Here’s a chart that pulls it all together.

Download chart

The post The “C” in CSS: The Cascade appeared first on CSS-Tricks.

Categories: Designing, Others Tags:

Sending Emails Asynchronously Through AWS SES

November 13th, 2018 No comments

Sending Emails Asynchronously Through AWS SES


Leonardo Losoviz

2018-11-13T14:30:53+01:00 · 2018-11-13T14:40:42+00:00

Most applications send emails to communicate with their users. Transactional emails are those triggered by the user’s interaction with the application, such as when welcoming a new user after registering on the site, giving the user a link to reset the password, or attaching an invoice after the user makes a purchase. All these previous cases will typically require sending only one email to the user. In some other cases though, the application needs to send many more emails, such as when a user posts new content on the site, and all her followers (which, in a platform like Twitter, may amount to millions of users) will receive a notification. In this latter situation, if not architected properly, sending emails may become a bottleneck in the application.

That is what happened in my case. I have a site that may need to send 20 emails after some user-triggered actions (such as user notifications to all her followers). Initially, it relied on sending the emails through a popular cloud-based SMTP provider (such as SendGrid, Mandrill, Mailjet and Mailgun), however the response back to the user would take seconds. Evidently, connecting to the SMTP server to send those 20 emails was slowing the process down significantly.

After inspection, I found out the sources of the problem:

  1. Synchronous connection
    The application connects to the SMTP server and waits for an acknowledgment, synchronously, before continuing the execution of the process.
  2. High latency
    While my server is located in Singapore, the SMTP provider I was using has its servers located in the US, making the roundtrip connection take considerable time.
  3. No reusability of the SMTP connection
    When calling the function to send an email, the function sends the email immediately, creating a new SMTP connection at that moment (it doesn’t offer to collect all emails and send them all together at the end of the request, under a single SMTP connection).

Because of #1, the time the user must wait for the response is tied to the time it takes to send the emails. Because of #2, the time to send one email is relatively high. And because of #3, the time to send 20 emails is 20 times the time it takes to send one email. While sending only one email may not make the application terribly slower, sending 20 emails certainly does, affecting the user experience.

Let’s see how we can solve this issue.

Paying Attention To The Nature Of Transactional Emails

Before anything, we must notice that not all emails are equal in importance. We can broadly categorize emails into two groups: priority and non-priority emails. For instance, if the user forgot the password to access the account, she will expect the email with the password reset link immediately on her inbox; that is a priority email. In contrast, sending an email notifying that somebody we follow has posted new content does not need to arrive on the user’s inbox immediately; that is a non-priority email.

The solution must optimize how these two categories of emails are sent. Assuming that there will only be a few (maybe 1 or 2) priority emails to be sent during the process, and the bulk of the emails will be non-priority ones, then we design the solution as follows:

  • Priority emails can simply avoid the high latency issue by using an SMTP provider located in the same region where the application is deployed. In addition to good research, this involves integrating our application with the provider’s API.
  • Non-priority emails can be sent asynchronously, and in batches where many emails are sent together. Implemented at the application level, it requires an appropriate technology stack.

Let’s define the technology stack to send emails asynchronously next.

Defining The Technology Stack

Note: I have decided to base my stack on AWS services because my website is already hosted on AWS EC2. Otherwise, I would have an overhead from moving data among several companies’ networks. However, we can implement our solution using other cloud service providers too.

My first approach was to set-up a queue. Through a queue, I could have the application not send the emails anymore, but instead publish a message with the email content and metadata in a queue, and then have another process pick up the messages from the queue and send the emails.

However, when checking the queue service from AWS, called SQS, I decided that it was not an appropriate solution, because:

  • It is rather complex to set-up;
  • A standard queue message can store only up to 256 KB of information, which may not be enough if the email has attachments (an invoice for instance). And even though it is possible to split a large message into smaller messages, the complexity grows even more.

Then I realized that I could perfectly imitate the behavior of a queue through a combination of other AWS services, S3 and Lambda, which are much easier to set-up. S3, a cloud object storage solution to store and retrieve data, can act as the repository for uploading the messages, and Lambda, a computing service that runs code in response to events, can pick a message and execute an operation with it.

In other words, we can set-up our email sending process like this:

  1. The application uploads a file with the email content + metadata to an S3 bucket.
  2. Whenever a new file is uploaded into the S3 bucket, S3 triggers an event containing the path to the new file.
  3. A Lambda function picks the event, reads the file, and sends the email.

Finally, we have to decide how to send emails. We can either keep using the SMTP provider that we already have, having the Lambda function interact with their APIs, or use the AWS service for sending emails, called SES. Using SES has both benefits and drawbacks:

Benefits:
  • Very simple to use from within AWS Lambda (it just takes 2 lines of code).
  • It is cheaper: Lambda fees are computed based on the amount of time it takes to execute the function, so connecting to SES from within the AWS network will take a shorter time than connecting to an external server, making the function finish earlier and costing less. (Unless SES is not available in the same region where the application is hosted; in my case, because SES is not offered in the Asian Pacific (Singapore) region, where my EC2 server is located, then I might be better off connecting to some Asia-based external SMTP provider).
Drawbacks:
  • Not many stats for monitoring our sent emails are provided, and adding more powerful ones requires extra effort (eg: tracking what percentage of emails were opened, or what links were clicked, must be set-up through AWS CloudWatch).
  • If we keep using the SMTP provider for sending the priority emails, then we won’t have our stats all together in 1 place.

For simplicity, in the code below we will be using SES.
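To give a sense of how compact the SES call is, here is a minimal sketch using the SesClient from the AWS SDK for PHP. This is only an illustration of the API shape, not the article’s final code: the region, sender and recipient addresses are placeholder assumptions, and the Lambda function in our set-up would make the equivalent call from its own runtime.

use Aws\Ses\SesClient;

// Placeholder region and addresses; adjust to your own set-up
$ses = new SesClient([
    'version' => 'latest',
    'region'  => 'us-east-1',
]);

// The actual send boils down to this single call
$ses->sendEmail([
    'Source'      => 'My Site <noreply@example.com>',
    'Destination' => ['ToAddresses' => ['user@example.com']],
    'Message'     => [
        'Subject' => ['Data' => 'Hello from SES', 'Charset' => 'UTF-8'],
        'Body'    => ['Html' => ['Data' => '<p>Hello!</p>', 'Charset' => 'UTF-8']],
    ],
]);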

We have then defined the logic of the process and stack as follows: The application sends priority emails as usual, but for non-priority ones, it uploads a file with email content and metadata to S3; this file is asynchronously processed by a Lambda function, which connects to SES to send the email.

Let’s start implementing the solution.

Differentiating Between Priority And Non-Priority Emails

In short, this all depends on the application, so we need to decide on an email-by-email basis. I will describe a solution I implemented for WordPress, which requires some hacks around the constraints of function wp_mail. For other platforms, the strategy below will work too, but quite possibly there will be better strategies, which do not require hacks to work.

The way to send an email in WordPress is by calling function wp_mail, and we don’t want to change that (eg: by calling either function wp_mail_synchronous or wp_mail_asynchronous), so our implementation of wp_mail will need to handle both synchronous and asynchronous cases, and will need to know to which group the email belongs. Unfortunately, wp_mail doesn’t offer any extra parameter from which we could assess this information, as can be seen from its signature:

function wp_mail( $to, $subject, $message, $headers = '', $attachments = array() )

Then, in order to find out the category of the email we add a hacky solution: by default, we make an email belong to the priority group, and if $to contains a particular email (eg: nonpriority@asynchronous.mail), or if $subject starts with a special string (eg: “[Non-priority!]“), then it belongs to the non-priority group (and we remove the corresponding email or string from the subject). wp_mail is a pluggable function, so we can override it simply by implementing a new function with the same signature on our functions.php file. Initially, it contains the same code of the original wp_mail function, located in file wp-includes/pluggable.php, to extract all parameters:

if ( !function_exists( 'wp_mail' ) ) :

function wp_mail( $to, $subject, $message, $headers = '', $attachments = array() ) {

  $atts = apply_filters( 'wp_mail', compact( 'to', 'subject', 'message', 'headers', 'attachments' ) );

  if ( isset( $atts['to'] ) ) {
    $to = $atts['to'];
  }

  if ( !is_array( $to ) ) {
    $to = explode( ',', $to );
  }

  if ( isset( $atts['subject'] ) ) {
    $subject = $atts['subject'];
  }

  if ( isset( $atts['message'] ) ) {
    $message = $atts['message'];
  }

  if ( isset( $atts['headers'] ) ) {
    $headers = $atts['headers'];
  }

  if ( isset( $atts['attachments'] ) ) {
    $attachments = $atts['attachments'];
  }

  if ( ! is_array( $attachments ) ) {
    $attachments = explode( "\n", str_replace( "\r\n", "\n", $attachments ) );
  }
  
  // Continue below...
}
endif;

And then we check if it is non-priority, in which case we then fork to a separate logic under function send_asynchronous_mail or, if it is not, we keep executing the same code as in the original wp_mail function:

function wp_mail( $to, $subject, $message, $headers = '', $attachments = array() ) {

  // Continued from above...

  $hacky_email = "nonpriority@asynchronous.mail";
  if (in_array($hacky_email, $to)) {

    // Remove the hacky email from $to
    array_splice($to, array_search($hacky_email, $to), 1);

    // Fork to asynchronous logic
    return send_asynchronous_mail($to, $subject, $message, $headers, $attachments);
  }

  // Continue all code from original function in wp-includes/pluggable.php
  // ...
}

In our function send_asynchronous_mail, instead of uploading the email straight to S3, we simply add the email to a global variable $emailqueue, from which we can upload all emails together to S3 in a single connection at the end of the request:

function send_asynchronous_mail($to, $subject, $message, $headers, $attachments) {
  
  global $emailqueue;
  if (!$emailqueue) {
    $emailqueue = array();
  }
  
  // Add email to queue. Code continues below...
}

We can upload one file per email, or we can bundle them so that in 1 file we contain many emails. Since $headers contains email meta (from, content-type and charset, CC, BCC, and reply-to fields), we can group emails together whenever they have the same $headers. This way, these emails can all be uploaded in the same file to S3, and the $headers meta information will be included only once in the file, instead of once per email:

function send_asynchronous_mail($to, $subject, $message, $headers, $attachments) {
  
  // Continued from above...

  // Add email to the queue
  $emailqueue[$headers] = $emailqueue[$headers] ?? array();
  $emailqueue[$headers][] = array(
    'to' => $to,
    'subject' => $subject,
    'message' => $message,
    'attachments' => $attachments,
  );

  // Code continues below
}

Finally, function send_asynchronous_mail returns true. Please notice that this code is hacky: true would normally mean that the email was sent successfully, but in this case, it hasn’t even been sent yet, and it could perfectly fail. Because of this, the function calling wp_mail must not treat a true response as “the email was sent successfully,” but an acknowledgment that it has been enqueued. That’s why it is important to restrict this technique to non-priority emails so that if it fails, the process can keep retrying in the background, and the user will not expect the email to already be in her inbox:

function send_asynchronous_mail($to, $subject, $message, $headers, $attachments) {
  
  // Continued from above...

  // That's it!
  return true;
}
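
To put the pieces together, this is how enqueuing a non-priority email looks from the calling code. The recipient address, subject and message below are just hypothetical examples:

// A notification that is not urgent, so it gets enqueued instead of sent immediately
wp_mail(
  array('subscriber@example.com', 'nonpriority@asynchronous.mail'),
  'Your weekly digest',
  '<p>Here is what happened on the site this week...</p>'
);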

Uploading Emails To S3

In my previous article “Sharing Data Among Multiple Servers Through AWS S3”, I described how to create a bucket in S3, and how to upload files to the bucket through the SDK. All code below continues the implementation of a solution for WordPress, hence we connect to AWS using the SDK for PHP.

We can extend from the abstract class AWS_S3 (introduced in my previous article) to connect to S3 and upload the emails to a bucket “async-emails” at the end of the request (triggered through wp_footer hook). Please notice that we must keep the ACL as “private” since we don’t want the emails to be exposed to the internet:

class AsyncEmails_AWS_S3 extends AWS_S3 {

  function __construct() {

    // Send all emails at the end of the execution
    add_action("wp_footer", array($this, "upload_emails_to_s3"), PHP_INT_MAX);
  }

  protected function get_acl() {

    return "private";
  }

  protected function get_bucket() {

    return "async-emails";
  }

  function upload_emails_to_s3() {

    $s3Client = $this->get_s3_client();

    // Code continued below...
  }
}
new AsyncEmails_AWS_S3();

We start iterating through the pairs of headers => emails saved in the global variable $emailqueue, and get a default configuration from function get_default_email_meta in case the headers are empty. In the code below, I only retrieve the “from” field from the headers (the code to extract all the headers can be copied from the original function wp_mail):

class AsyncEmails_AWS_S3 extends AWS_S3 {

  public function get_default_email_meta() {

    return array(
      'from' => sprintf(
        '%s <%s>',
        get_bloginfo('name'),
        get_bloginfo('admin_email')
      ),
      'contentType' => 'text/html',
      'charset' => strtolower(get_option('blog_charset'))
    );
  }

  public function upload_emails_to_s3() {

    // Code continued from above...

    global $emailqueue;
    foreach ($emailqueue as $headers => $emails) {

      $meta = $this->get_default_email_meta();

      // Retrieve the "from" from the headers, eg: "From: Name <email@domain.com>"
      $regexp = '/From:\s*(([^<]*?)\s*<)?([^>\s]+)>?\s*/i';
      if (preg_match($regexp, $headers, $matches)) {

        $meta['from'] = sprintf(
          '%s <%s>',
          $matches[2],
          $matches[3]
        );
      }

      // Code continued below... 
    }
  }
}

Finally, we upload the emails to S3. We decide how many emails to upload per file with the intention of saving money. Lambda functions are charged based on the time they need to execute, calculated in spans of 100ms; the more time a function requires, the more expensive it becomes.

Uploading one file per email is, then, more expensive than uploading one file containing many emails, since the overhead from executing the function is incurred once per email instead of only once for many emails, and also because sending many emails together fills the 100ms spans more thoroughly.

So we upload many emails per file. How many emails? Lambda functions have a maximum execution time (3 seconds by default), and if the operation fails, it will keep retrying from the beginning, not from where it failed. So, if the file contains 100 emails, and Lambda manages to send 50 emails before the max time is up, then it fails and it retries executing the operation again, sending the first 50 emails once again. To avoid this, we must choose a number of emails per file that we are confident is enough to process before the max time is up. In our situation, we could choose to send 25 emails per file. The number of emails depends on the application (bigger emails will take longer to be sent, and the time to send an email will depend on the infrastructure), so we should do some testing to come up with the right number.
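
The constant EMAILS_PER_FILE used in the code below is not defined anywhere else, so we must define it ourselves, for instance in functions.php. This is just a sketch: the value of 25 comes from the discussion above and should be adjusted through testing:

// Maximum number of emails bundled into a single file uploaded to S3
// (25 is an assumption; tune it based on your own testing)
define('EMAILS_PER_FILE', 25);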

The content of the file is simply a JSON object, containing the email meta under property “meta”, and the chunk of emails under property “emails”:

class AsyncEmails_AWS_S3 extends AWS_S3 {

  public function upload_emails_to_s3() {

    // Code continued from above...
    foreach ($emailqueue as $headers => $emails) {

      // Code continued from above...

      // Split the emails into chunks of no more than the value of constant EMAILS_PER_FILE:
      $chunks = array_chunk($emails, EMAILS_PER_FILE);
      $filename = time().rand();
      for ($chunk_count = 0; $chunk_count < count($chunks); $chunk_count++) {

        $body = array(
          'meta' => $meta,
          'emails' => $chunks[$chunk_count],
        );

        // Upload to S3
        $s3Client->putObject([
          'ACL' => $this->get_acl(),
          'Bucket' => $this->get_bucket(),
          'Key' => $filename.$chunk_count.'.json',
          'Body' => json_encode($body),
        ]);  
      }   
    }
  }
}
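
For reference, a file uploaded by the code above would contain something along these lines (the recipient, subject and message are hypothetical):

{
  "meta": {
    "from": "My Site <admin@mysite.com>",
    "contentType": "text/html",
    "charset": "utf-8"
  },
  "emails": [
    {
      "to": ["subscriber@example.com"],
      "subject": "Your weekly digest",
      "message": "<p>Here is what happened on the site this week...</p>",
      "attachments": []
    }
  ]
}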

For simplicity, in the code above, I am not uploading the attachments to S3. If our emails need to include attachments, then we must use SES function SendRawEmail instead of SendEmail (which is used in the Lambda script below).

Having added the logic to upload the files with emails to S3, we can move next to coding the Lambda function.

Coding The Lambda Script

Lambda functions are also called serverless functions, not because they do not run on a server, but because the developer does not need to worry about the server: the developer simply provides the script, and the cloud takes care of provisioning the server, deploying and running the script. Hence, as mentioned earlier, Lambda functions are charged based on function execution time.

The following Node.js script does the required job. Invoked by the S3 “Put” event, which indicates that a new object has been created on the bucket, the function:

  1. Obtains the new object’s path (under variable srcKey) and bucket (under variable srcBucket).
  2. Downloads the object, through s3.getObject.
  3. Parses the content of the object, through JSON.parse(response.Body.toString()), and extracts the emails and the email meta.
  4. Iterates through all the emails, and sends them through ses.sendEmail.

var async = require('async');
var aws = require('aws-sdk');
var s3 = new aws.S3();
var ses = new aws.SES();

exports.handler = function(event, context, callback) {

  var srcBucket = event.Records[0].s3.bucket.name;
  var srcKey = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, " "));

  // Download the file from S3, parse it, and send the emails
  async.waterfall([
    function download(next) {

      // Download the file from S3 into a buffer.
      s3.getObject({
        Bucket: srcBucket,
        Key: srcKey
      }, next);
    },
    function process(response, next) {

      var file = JSON.parse(response.Body.toString());
      var emails = file.emails;
      var emailsMeta = file.meta;

      // Check required parameters
      if (emails === null || emailsMeta === null) {
        callback('Bad Request: Missing required data: ' + response.Body.toString());
        return;
      }
      if (emails.length === 0) {
        callback('Bad Request: No emails provided: ' + response.Body.toString());
        return;
      }

      var totalEmails = emails.length;
      var sentEmails = 0;
      for (var i = 0; i < totalEmails; i++) {
        ses.sendEmail({
          Source: emailsMeta.from,
          Destination: {
            ToAddresses: emails[i].to
          },
          Message: {
            Subject: {
              Data: emails[i].subject,
              Charset: emailsMeta.charset
            },
            Body: {
              Html: {
                Data: emails[i].message,
                Charset: emailsMeta.charset
              }
            }
          }
        }, function(err, data) {
          if (err) {
            console.error('Unable to send email due to an error: ' + err);
          }
          // Move on to the waterfall callback only once all emails have been processed
          sentEmails++;
          if (sentEmails === totalEmails) {
            next();
          }
        });
      }
    }
  ], function(err) {
    if (err) {
      console.error('Unable to send emails due to an error: ' + err);
      callback(err);
      return;
    }
    callback(null, 'Success');
  });
};

Next, we must upload and configure the Lambda function to AWS, which involves:

  1. Creating an execution role granting Lambda permissions to access S3.
  2. Creating a .zip package containing all the code, i.e. the Lambda function we are creating + all the required Node.js modules.
  3. Uploading this package to AWS using a CLI tool.

How to do these things is properly explained on the AWS site, on the Tutorial on Using AWS Lambda with Amazon S3.

Hooking Up S3 With The Lambda Function

Finally, having the bucket and the Lambda function created, we need to hook both of them together, so that whenever there is a new object created on the bucket, it will trigger an event to execute the Lambda function. To do this, we go to the S3 dashboard and click on the bucket row, which will show its properties:


Clicking on the bucket’s row displays the bucket’s properties.

Then clicking on Properties, we scroll down to the item “Events”, and there we click on Add a notification, and input the following fields:

  • Name: name of the notification, eg: “EmailSender”;
  • Events: “Put”, which is the event triggered when a new object is created on the bucket;
  • Send to: “Lambda Function”;
  • Lambda: name of our newly created Lambda, eg: “LambdaEmailSender”.

Adding a notification in S3 to trigger an event for Lambda.

Finally, we can also set the S3 bucket to automatically delete the files containing the email data after some time. For this, we go to the Management tab of the bucket, and create a new Lifecycle rule, defining after how many days the emails must expire:


Setting up a Lifecycle rule to automatically delete files from the bucket.

That’s it. From this moment, when adding a new object on the S3 bucket with the content and meta for the emails, it will trigger the Lambda function, which will read the file and connect to SES to send the emails.

I implemented this solution on my site, and it became fast once again: by offloading the sending of emails to an external process, it makes no difference whether the application sends 20 or 5,000 emails; the response to the user who triggered the action is immediate.

Conclusion

In this article we have analyzed why sending many transactional emails in a single request may become a bottleneck in the application, and created a solution to deal with the issue: instead of connecting to the SMTP server from within the application (synchronously), we can send the emails from an external function, asynchronously, based on a stack of AWS S3 + Lambda + SES.

By sending emails asynchronously, the application can manage to send thousands of emails, yet the response to the user who triggered the action will not be affected. However, to ensure that the user is not waiting for the email to arrive in the inbox, we also decided to split emails into two groups, priority and non-priority, and send only the non-priority emails asynchronously. We provided an implementation for WordPress, which is rather hacky due to the limitations of function wp_mail for sending emails.

A lesson from this article is that serverless functionalities on a server-based application work pretty well: sites running on a CMS like WordPress can improve their performance by implementing only specific features on the cloud, and avoid a great deal of complexity that comes from migrating highly dynamic sites to a fully serverless architecture.

Categories: Others Tags:

How Google is slowing innovation

November 13th, 2018 No comments
Cosmos

It was 1995 and the universe was deep in battle…

A fearsome empire was striving for world domination and crushing their competitors with an iron fist.

Their strategy?

Embrace, extend, extinguish.

Meanwhile, an impertinent upstart was lurking in the wings, one that would eventually bid to overcome the empire.

The upstart’s motto?

Don’t be evil.

Ok, this might sound like the intro to a superhero movie.

But it’s the tale of two of the mightiest corporations of all time: Microsoft and Google.

The killer twist?

The good guys turned to the dark side.

They took Microsoft’s rallying cry and made it their own.

Standards exist for a reason

I’m sitting in my favorite coffee shop as I write this. Their lattes are strong and the wifi is speedy.

Of course, once in a while, I have login problems or poor data speeds, but it’s rare.

It’s rare because hardware manufacturers and software developers have agreed on a standard. In fact, a few of them: as Andy Tanenbaum said,

the nice thing about standards is that you have so many to choose from.

Our modern world relies on people agreeing to work by common rules. And in the online sphere, this begins with open standards. As the principles state:

Open standards make it possible for the smallest supplier to compete with the largest. They make data open for any citizen to audit. They unlock the transformative power of open source software.

Think of good old plain text files.

For the English-speaking world, the underlying standard is ASCII, which sets down the rules for encoding the alphabet as 0s and 1s.

Now, imagine a universe in which you had to pay a $1 licensing fee every time you wanted to read or write a text file in ASCII. It would be a nightmare, right?

Luckily, that would never happen. Because we have rules.

But sometimes, people try to game the system

Take HTML, the standard language for writing web pages, invented in the ’90s by Tim Berners-Lee.

The HTML specification has evolved over the years, and the W3C acts as a forum for gaining specification consensus from large players such as Adobe, Apple, Google, Intel, and Microsoft.

But there’s been a history of skirmishes, with different companies proposing their own variants. During the first browser wars, Netscape proposed the blink tag, while Microsoft came up with the marquee tag, which was meant to cause text to scroll in various directions.

There’s no problem with that (apart from lousy aesthetics), right? Wrong. Because only Netscape Navigator knew what to do with blink, and only Internet Explorer knew what to do with marquee.

They were modifying a standard so that it would only run with their software.

They were trying to build a monopoly.

The quest for domination

Microsoft famously coined the phrase “embrace, extend, extinguish” to describe their strategy for dominating markets where competitors benefited from open standards.

Here’s an example of how it played out.

Back in the day, the most powerful PC software package was Lotus 1–2–3. It was the classic killer app for the IBM PC and Microsoft’s MS-DOS operating system.

To overcome Lotus, Microsoft knew it had to embrace what made the product unique. This meant it had to load Lotus files and the macros that came with them. Enter Excel, a spreadsheet program that initially ran on Macs.

The functionality of Excel was as similar to Lotus as it could be without being a blatant rip-off. So close, in fact, that people could switch from Lotus to Excel with minimal pain.

What’s more, Microsoft used the graphics capabilities of Macs to equip Excel with a cool GUI. This was a leap ahead of standard MS-DOS packages like Lotus 1–2–3.

Next, Microsoft extended by creating Office: the holy trinity of Excel, Word, and Powerpoint, all running together on Windows. By 1995, these programs were working together well, and although there were a number of word processors to choose from, there weren’t any compelling competitors for Excel on Windows.

Microsoft sharpened their competitive edge with company discounts and clever Office 95 marketing, and as a result, most major businesses were adopting it as their standardized software suite, and Excel was part of the bundle. No need to buy a standalone package like Lotus 1–2–3.

Meanwhile, Symphony (the Lotus integrated package for MS-DOS that aimed to compete with Office) never prospered and was eventually abandoned. Microsoft had officially extinguished Lotus 1–2–3.

They wanted Office to become the gold standard for productivity software. And they succeeded. But not long after establishing the dominance of their desktop operating system, Microsoft realized that another challenge was looming.

The World Wide Web was becoming wildly successful, to an extent that few people had foreseen.

Not only could people browse websites that were outside Microsoft’s control, but Netscape introduced the JavaScript scripting language which allowed developers to write code that ran in the browser. In effect, Netscape was inventing a new operating system, distributed between the client-side browser and the remote server.

Even worse, content on the web was platform-agnostic: browsers worked just fine on Macs and Unix as well as Windows, so an application that ran in the browser would rip open the Microsoft business model.

In order to get a piece of the action, Microsoft launched Internet Explorer (IE) in 1995 as a direct competitor to Netscape Navigator. Initially, it only had a tiny market share: less than 10% by the close of 1996. So this was more of an air kiss than a full embrace of the internet.

Things heated up with the release of IE3, bundled as a free component of Windows in 1996, and integrating a number of apps that were part of the Microsoft ecosystem: an internet mail client (later to become Outlook Express), an address book, and the Windows Media Player. IE4 continued the extend theme by bundling programs for chat and video conferencing.

At the same time, Microsoft re-engineered the Windows desktop look and feel to make it more like browsing a web page. How did Netscape Navigator fit into this cozy setup?

Not at all: it functioned increasingly worse on the Microsoft operating system. By the end of the decade, Internet Explorer had 86% of the browser market.

Game over for Netscape.

Today, Microsoft is working hard to shed its ‘evil’ reputation, contributing to open source and supporting open standards.

But we may have a new villain on our hands…

Google: the new king of Embrace, Extend, Extinguish

It was March 31, 2004.

The headlines were in a frenzy:

Google, the dominant Internet search company, is planning to up the stakes in its intensifying competition with Yahoo and Microsoft by unveiling a new consumer-oriented electronic mail service.

At the time, the news seemed outrageous. A search engine company? Launching a free email service? With an alleged storage capacity of 1GB, 500 times bigger than what Microsoft’s Hotmail offered?!

In fact, when April 1st rolled around and Google issued a press release officially announcing Gmail, most people took it as a far-fetched hoax.

But Gmail was no April Fool’s Day joke.

Boasting massive storage, a slick interface, instant search, and personalization options, it was real, and revolutionary.

Not only did Gmail blow Hotmail and Yahoo Mail out of the water, but it was also the first app with the potential to replace conventional PC software.

According to Georges Harik, who was responsible for most of Google’s new products at the time:

“It was a pretty big moment for the Internet. Taking something that hadn’t been worked on for years but was central, and fixing it.”

Google had officially extended email. And, while they didn’t extinguish other email providers entirely, they certainly came close.

Then there’s AMP. The Accelerated Mobile Pages Project (AMP) is a technology that enables web pages to load more rapidly on mobile devices.

AMP was originally targeted at news publishers, to compete with Facebook’s Instant Articles, but it has now far outstripped the latter, after being adopted by platforms such as Reddit, Twitter, and LinkedIn.

As a strategy, AMP is Google’s most brazen. It serves as a vehicle for routing users through the Google Content Delivery Network even if they’re reading content from other websites. Sites that don’t adopt AMP get pushed out of Google mobile search results and into oblivion.

Or, extinguished.

There’s also the infamous case of Google Reader, which dug the grave for RSS (Rich Site Summary).

RSS’s decline was evident before Google axed it, but killing Reader dealt a massive blow to any of RSS’s remaining momentum. Google said themselves they wanted to consolidate users onto the rest of their services, none of which support any open syndication standards.

Tech writer Ed Bott summarizes eloquently:

The short life and sad death of Google Reader tells a familiar story of how Google swept into a crowded field, killed off almost all credible competition with a free product, and then arbitrarily killed that product when it no longer had a use for it.

Last but not least, there’s PDF.

To recap:

PDF was a proprietary format controlled by Adobe until it was released as an open standard in 2008. When it was published by the International Organization for Standardization as ISO 32000–1:2008, control of the specification passed to an ISO Committee of volunteer industry experts. In 2008, Adobe published a Public Patent License to ISO 32000–1 granting royalty-free rights for all patents owned by Adobe that are necessary to make, use, sell, and distribute PDF compliant implementations.

PDFs have a feature that allows forms to be submitted. This feature previously worked on all PDF viewers (such as Adobe Acrobat and Apple Preview). That is, until Chrome shipped its own built-in viewer for PDF files.

As Google’s browser gained market share (now hitting over 60% in the usage stakes), most people began viewing PDFs in Chrome’s native PDF reader. But, here’s the kicker: Chrome doesn’t support all of PDF’s features.

For example, my company, JotForm, has a feature called fillable PDF Forms. It lets you create PDF forms, which you can submit.

So, forms created with Adobe or JotForm’s PDF tool often don’t work on Chrome. We have to instruct people to use Adobe Acrobat instead, which creates needless friction.

In a nutshell, Google’s behavior prevents us from investing more deeply in PDF forms.

Our feature is being extinguished before our eyes.

So all of this begs the question:

Does Google really support open source?

Google vs. Apple

In 1995, it was Microsoft vs. Netscape.

In 2018, it’s Google vs. Apple.

The only difference lies in strategy. Google is playing the long game to take Apple down.

Rather than create products that are a dramatic improvement on Apple’s, they make them almost as good, or equally good, and cheaper.

Take Chromebooks. They aren’t as slick and speedy as MacBooks. But they offer similar usability, and you can buy three for the cost of one iPad. Plus, they’re brilliantly marketed.

Or Android. It’s as close a replica to iOS as you can imagine.

Or Pixel. Compared to the iPhone, it has a better camera, faster charging, smoother performance, and a more useful digital assistant, for a lower price.

Google is extending with their growing selection of products, including an Amazon Echo competitor, a smart router, a TV device, a VR headset, and a line of Nest devices. Although these products will mostly work with iOS devices, they will work better with Android phones and/or the Pixel.

All of these factors make migration look increasingly promising. Apple has been cutting manufacturing costs while pricing its products ever higher, which means the user experience has plummeted.

Not to mention the scandal that erupted when we learned that Apple deliberately slows older products in a bid to encourage users to upgrade.

All of these factors lay fertile ground for Google to overtake Apple.

In fact, Apple customer loyalty is arguably the only real obstacle in Google’s way. But if enough people get frustrated with Apple’s pricing strategy, it could signal the end of Apple’s reign as we know it.

The drive for innovation

Twenty years ago, the browser wars were raging.

There was stiff competition, and that was a good thing because it prevented a monopoly.

With competition comes innovation. In fact, this period of intense rivalry led to the web we have now.

But today? The startup culture is less “what can we build next?” and more “what’s our exit strategy?”

The Big Tech Five continue to swallow up smaller companies. And as their monopoly grows, I’d argue that innovation is dwindling.

Openness and added value are being sacrificed at the altar of revenue and market share. And Google is at the forefront of this. Most recently, Chrome announced their “most controversial initiative yet”: fundamentally rethinking URLs across the web. Without a URL, the only way to access a page is via Google.

Ed Bott compares Google to Godzilla:

… sweeping through the landscape and crushing anything in its path because few startups can compete with a free product from Google.

And he’s right. Google’s convenience and power are overwhelming. But we can’t let that blind us to the reality of what they’re doing.

However you look at it, embrace, extend, extinguish is pivotal to Google’s strategy. Granted, no one at Google is sending explicit instructions as Bill Gates once did, but they don’t need to; the end result is the same.

EEE certainly looks different today than it did in 2000; it’s subtler, friendlier, more politically correct.

But it’s just as dangerous. The war isn’t over. We must fight to diversify the internet, uphold open standards, and stamp out monopoly.

Categories: Others Tags:

Take Your Business Online with Ease: Create Your Website with Mozello

November 13th, 2018 No comments

Few of the current crop of page builders are as simple as Mozello. You don’t need any previous experience; Mozello takes care of everything for you. Just run through the quick set-up and you’ll have a working website, online, in minutes.

Mozello is a surprisingly easy-to-use platform that offers you the opportunity to create a personal or business site, with no design or code knowledge. But where it really excels is in lowering the entry-level for anyone hoping to create an ecommerce site.


Mozello is also an ideal option for web designers and developers who need to develop a site simple enough for a client to self-manage. Mozello is so simple to use, your client can make any changes they like, without blowing up your phone every 5 minutes.

Ultra-Fast Setup

Creating an account with Mozello couldn’t be easier. Just navigate to the site, enter your email address and a password.

Mozello will ask you for an account name, but don’t worry about getting this right, because you can always change it later. Decide whether you want to create a brochure, a blog, or even an ecommerce store.

Once you’re ready to publish your site you have a simple choice. You can either publish under a free Mozello web address, or use a domain name. Using a domain is probably best for most users, and Mozello allows you to use one you already own, or register a new name.

48 Mobile-Ready Templates

Once you’ve created an account, Mozello presents you with 48 template designs to choose from.

Unlike some page builders, Mozello’s templates have a whole heap of variety; there’s something here for everyone, so you can be sure you’ll find something you love.

Each of the templates is mobile-ready, meaning that your site will work perfectly across all devices.

Mozello’s templates cover 90% of the use cases you’ll ever need, but if you do need to create something out of the ordinary, rest assured that all of Mozello’s templates are easily customizable.

Simple to Customize

Mozello has fewer features than some site builders, but by making the difficult design decisions and baking them into the defaults, Mozello allows you to focus on creating a distinctive brand.


You can change your color scheme with a single click, and choose from professionally curated Google font pairings, without needing any special understanding of design.

It takes about 20 to 25 minutes to get a site up and running on Mozello’s platform, even an ecommerce store; few site builders can boast that kind of speed.

If you’re running the type of business that has a bricks-and-mortar counterpart, then you’ll appreciate the ability to easily add a map, and if you’re running an international business you can even post content in multiple languages, with just a few clicks—perfect if your customers speak different languages, like English and Spanish.

Easily Edit Content

Editing content is a breeze with Mozello. Just click on the item you want to edit, and it will switch to edit mode. You can switch text and images, and even edit styles inline to adjust the site design.

As well as editing on-page content, Mozello gives you the power to edit the meta data of the page that search engines will use to rank your site. You can easily edit title tags, insert Google Analytics, and integrate your site with social media.

Create an Online Store

There are plenty of site builders available, but very few of them allow you to create a fully functional ecommerce store with no experience. All the features you need to promote your products, from fast payments to coupon codes, are included.

Mozello’s store, even the free version, is packed with the type of features you normally only expect from a premium provider. Designing a store could not be easier, because it’s all built around your product catalog; simply define your products, adding a name, a description, shipping costs, and the terms of sale.


For stores with more than a few products, you’re going to want to create some categories. And Mozello even has that covered: each product can be assigned a category, so if you’re selling sporting goods, you can group the equipment by sport, by gender, or even by price range. You can even add product variants, allowing you to offer customers different options.

Perhaps best of all, you can customize your catalog layout, as well as the layout of product images. So no matter what you’re selling, you can present it in a way that encourages sales in your sector, helping customers browse, and boosting revenue.

Mozello gathers every order you receive, so you can process them for your new customers. And because this all takes place in the same simple-to-use interface, it’s easy for anyone to run an online store. You can even manage your ecommerce store from your cellphone, making it simple to run your whole online business right out of your pocket—great for anyone working remotely.

If you’re managing a larger store you’ll benefit from the Premium Plus plan which delivers tons of extra functionality, and is easily capable of handling hundreds of products, but for many fledgling stores, the free starter plan is a risk-free route to getting online.

An Excellent Entry-Level Site Builder

Mozello is designed to allow anyone, even those with no design or code knowledge, to create a website easily.

The live site editor is simple to use, but if you do run into a problem, Mozello is backed by a friendly support team that answers email queries promptly.


Mozello doesn’t offer all the bells and whistles of some big-name providers, and that focus on essential tools is exactly what most first-time website builders need.

Mozello is free by default, meaning that anyone can build a site with no risk; if you want to up your game, there is a Premium plan for just $7 per month and a Premium Plus plan for $14 per month. If hassle-free web design is what you’re after, give Mozello a look.

[ — This is a sponsored post on behalf of Mozello — ]


Categories: Designing, Others Tags:

A Bunch of Options for Looping Over querySelectorAll NodeLists

November 12th, 2018 No comments

A common need when writing vanilla JavaScript is to find a selection of elements in the DOM and loop over them. For example, finding instances of a button and attaching a click handler to them.

const buttons = document.querySelectorAll(".js-do-thing");
// There could be any number of these! 
// I need to loop over them and attach a click handler.

There are SO MANY ways to go about it. Let’s go through them.

forEach

forEach is normally for arrays, and interestingly, what comes back from querySelectorAll is not an array but a NodeList. Fortunately, most modern browsers support using forEach on NodeLists anyway.

buttons.forEach((button) => {
  button.addEventListener('click', () => {
    console.log("forEach worked");
  });
});

If you’re worried that forEach might not work on your NodeList, you could spread it into an array first:

[...buttons].forEach((button) => {
  button.addEventListener('click', () => {
    console.log("spread forEach worked");
  });
});

But I’m not actually sure if that helps anything since it seems a bit unlikely there are browsers that support spreads but not forEach on NodeLists. Maybe it gets weird when transpiling gets involved, though I dunno. Either way, spreading is nice in case you want to use anything else array-specific, like .map(), .filter(), or .reduce().

A slightly older method is to jack into the array’s natural forEach with this little hack:

[].forEach.call(buttons, (button) => {
  button.addEventListener('click', () => {
    console.log("array forEach worked");
  });
});

Todd Motto once called out this method pretty hard though, so be advised. He recommended building your own method (updated for ES6):

const forEach = (array, callback, scope) => {
  for (var i = 0; i < array.length; i++) {
    callback.call(scope, i, array[i]); 
  }
};

…which we would use like this:

forEach(buttons, (index, button) => {
  console.log("our own function worked");
});

for .. of

Browser support for for .. of loops looks pretty good and this seems like a super clean syntax to me:

for (const button of buttons) {
  button.addEventListener('click', () => {
    console.log("for .. of worked");
  });
}

Make an array right away

const buttons = Array.prototype.slice.apply(
  document.querySelectorAll(".js-do-thing")
);

Now you can use all the normal array functions.

buttons.forEach((button) => {
  console.log("apply worked");
});

Old for loop

If you need maximum possible browser support, there is no shame in an ancient classic for loop:

for (let i = 0; i < buttons.length; ++i) {
  buttons[i].addEventListener('click', () => {
    console.log("for loop worked");
  });
}

Libraries

If you’re using jQuery, you don’t even have to bother….

$(".buttons").on("click", () => {
  console.log("jQuery works");
});

If you’re using a React/JSX setup, you don’t need to think about this kind of binding at all.

Lodash has a _.forEach as well, which presumably helps with older browsers.

_.forEach(buttons, (button, key) => {
  console.log("lodash worked");
});

Poll

Twitter peeps:

const els = document.querySelectorAll(".foo");

// which loop do you use? one of these? other?

— Chris Coyier (@chriscoyier) November 7, 2018

Also here’s a Pen with all these options in it.


Categories: Designing, Others Tags: