Content marketing is the planning, creating, sharing and publishing of content to reach your target audience. It helps you increase brand awareness, engage with prospects, and convince them to convert.
Content marketing offers businesses a much more efficient alternative to paid advertising. In fact, content marketing generates 3 times as many leads as paid search and boasts conversion rates 6 times higher than other advertising methods. And when it comes to blogging, businesses that publish more than 16 posts monthly get more than 3.5 times the traffic of those that post only 3-5 times.
It’s clear that content marketing is essential to growing your business. However, it has never been easy. Every piece of content goes through several deliberate stages of production before it is ever published.
There are countless tools that can help you get the most out of your content marketing strategy. And in this article, we want to draw your attention to 7 of those essential tools. They are some of the most effective ways to boost your content creation.
1. Buzzsumo Hunts for Your Content Relevancy
Do you find it hard to find topics that are trending in your industry?
Do you find it hard to know what your competitors are writing about?
Buzzsumo is a great content research tool. It is one of the major tools to help analyze and understand what topics in your niche perform best for you and your competitors. You’ll find metrics like shares, and the influencers that are most relevant for your niche.
Once you’re registered, you’ll constantly receive alerts about content that mentions your keywords. These reveal topics you should write about to meet your audience’s needs, and they make brainstorming content ideas more fun.
My favourite feature is Content Research. Before 2014 it was their only feature. And even after they added a bunch of other features like Facebook Analyzer, Monitoring, and Question Analyzer, it is still the one I frequently use for relevant content creation.
First, begin by researching your keywords with Google’s Keyword Planner. Once you’ve found keywords with potential, enter them into Buzzsumo and stay aware of the trending topics people are following across different social networks.
Buzzsumo also enables you to get in touch with bloggers who regularly spread the word about the topics you want your business to be known for.
Buzzsumo is mainly geared towards marketing professionals. So if you are serious about marketing and SEO, I highly recommend giving this tool a shot.
Once you know what you’re going to write about, it’s time to move on to the next tool.
2. Google Docs is as Important for Marketers as a Kitchen is to Chefs
Personally, I don’t know any content marketers who don’t use Google Docs to craft articles. It also makes organising productive teamwork easy and convenient.
I especially love headings in this tool. Adding headings divides the text into logical, semantic groups, which makes it much easier to jump straight to the paragraph you need instead of scrolling up and down in search of the right passage to edit.
Recently, Google added a new feature to ease the burden of excessive writing.
Can you guess what it is?
It is voice typing.
Yes, you read it right. VOICE TYPING. You speak what’s on your mind, and the text writes itself. Isn’t that cool?
Also, Google Docs gives you the option to download your content in different formats, including PDF, .docx, .odt, .rtf, .txt, zipped HTML, and .epub. All in one place.
Google Docs also helps you access your content from anywhere you want.
Your writing must look professional in every respect if readers are to trust you, which brings us to our next tool: Grammarly.
3. Grammarly Provides FREE Writing Assistance
Grammarly has changed the game once and for all.
With Grammarly, you won’t let slip the kind of grammar mistakes that so often litter articles. Even the most detail-oriented editor might miss errors that Grammarly brings to the surface.
It has reduced my error rate by around 50-80% by analyzing any type of text, highlighting errors, and suggesting what I can replace them with.
The free version covers basic grammar errors, punctuation, spelling, tone detection and conciseness. However, if you want more polished content, Grammarly offers a premium package. It includes style suggestions and clarity improvements covering fluency, readability, word choice, plagiarism, inclusive language, and formality.
Grammarly is a great help in making content sound more natural and keeping the language flowing smoothly. This is pretty vital for people like me whose first language isn’t English. Quite often, I use it to make sure the text sounds truly “English English”.
Overusing definite articles is the most common problem I have to deal with at the moment, which is why I appreciate Grammarly’s help so much.
It’s the tool to use if you want every piece of your writing to look consistently polished.
Tip: craft your headline after you finish the body of the article. Only then do you know what your content contains as a whole.
Headline writing is an art of its own, and my next favourite tool makes it easy for you.
4. Portent’s Content Idea Generator
How many times have you drafted headlines and deleted them in a second because they weren’t attractive enough?
It’s no secret that headlines are the hook that attracts readers at first glance, so it’s super important to pay attention to them. I understand it’s tough to come up with headlines distinctive enough to outrank your competitors.
This is where Portent’s Content Idea Generator steps in to assist. It analyzes headlines based on a variety of factors, such as sentiment and length. Just type in any keyword and it will show you a ready-made headline with in-depth explanations of each component of the title.
Not pleased with what you have been offered?
Just click the refresh arrow and repeat the search until you find the variant that works best for you.
The fact that you’re reading this article right now is proof of how well the headline generator worked for us.
Next, we are going to talk about the most popular CMS to upload and manage your content and make it available for readers.
5. WordPress is Home to Your Content
WordPress is the most widely used content management system. According to VentureBeat, it powers about 30% of the web.
Social proof can lead us astray, but WordPress really does compare well with its rivals, both when you’re just starting out and as you continue to grow your content marketing strategy. Even sites like The New Yorker run on WordPress.
At its core, WordPress helps you build websites. You can host your website via WordPress.com, and its plugin architecture and templates make it easy to customize the platform to fit your business.
It’s a great tool for building sites as a personal blogger or for clients, and it has all the necessary plugins to build on. So don’t be afraid to get your feet wet and dive into WordPress. It’s pretty simple, and you don’t have to be super tech-savvy to wrap your head around the system.
Once the content is uploaded, it’s time to sprinkle on some magic to rank it higher in search engines.
6. Yoast is a Real Magic Power for Content Optimization
Yoast is one of the best-known tools for producing SEO-based content.
It’s a WordPress plugin for SEO that literally does everything, ranging from optimising for a keyword to editing and previewing meta descriptions and URL slugs, abstracting away minor SEO tasks, and finding relevant internal links.
When you’re working with a self-hosted WordPress site, Yoast is the best on-page SEO tool you can ever come across. It is easy to understand whether you’re running a personal blog or managing a website for a client.
Yoast will make your site as search-engine-friendly as possible. The core functionality of the tool is free, and that alone is enough to build well-performing sites.
If you can afford the premium version, you’ll also benefit from the redirect manager, which prevents and fixes dead links; 58% of Yoast users rate it as one of the plugin’s best features. Premium also lets you optimize your text for synonyms and related words, so Google and other search engines can determine what a text is about even when it uses different word forms. It also helps you get your internal linking right.
It works really hard to rank your content higher than your competitors.
The plugin has 9,000,000 downloads and a 4-out-of-5-star rating.
Everyone I know who uses WordPress, uses Yoast. It’s just an amazing plugin.
Nevertheless, producing and optimizing quality content isn’t enough on its own to appeal to your audience’s interests, which takes us to the next tool.
7. Kred Indicates Your Authority
When it comes to content marketing, just producing and uploading content won’t build credibility and trust for your brand.
You need to collaborate with influencers in your industry to amplify your messages. They will help your content avoid getting lost in the flood of new content produced every second.
Unlike other tools, Kred allows you to build up your own influencer status and become an authority in your niche so your online presence is enhanced.
Kred lets users know how influential they are based on their social media accounts. With the help of a comprehensive algorithm, Kred scores users on two factors: outreach and influence. It keeps you informed about where your content stands now so you can continue improving.
The platform measures your influencer status by analyzing your Facebook and Twitter activity, so you can reach out to other relevant influencers who will impact your business the most.
Once you build up your authority, your content will have more credibility. When people know who you are, they are more likely to click on your content and engage with it.
Sounds interesting? Go ahead and try them all.
Which Tool Will You Use First?
Content marketing tools will not save a bad strategy or bad product but they will certainly help you get your job done faster and more efficiently.
These 7 tools guide you through your content marketing journey step by step, from the first phases of writing through editing, uploading, and optimizing. They help you ramp up your productivity and get the best results.
Buzzsumo will help you delve into your audience’s interests and understand the relevancy of your content. Google Docs will contribute to gathering all your thoughts in one compact place, Grammarly will help you sound more natural, and Portent’s Idea Generator will get you a powerful headline for a hook.
WordPress gives you a space to show your content in a well-designed layout, and don’t forget to use Yoast to optimize your SEO so Google finds you upon the very first search. Then check back with Kred to see if your efforts are building your trust and credibility among your audience.
Now that you know which tools are most effective for developing your content and getting it noticed by the largest number of people, you are fully armed to create phenomenal pieces of writing with ease.
There are obviously more content tools than those mentioned above, but these are the main ones to consider when creating content at the initial stage.
If you’re looking for even more, please, head to our website for more marketing tools and hacks.
The JAMstack has proven itself to be one of the top ways of producing content-driven sites, but it’s also a great place to house applications, as well. If you’ve been using the JAMstack for your performant websites, the demos in this article will help you extend those philosophies to applications as well.
When using the JAMstack to build applications, you need a data service that fits into the most important aspects of the JAMstack philosophy:
Global distribution
Zero operational needs
A developer-friendly API.
In the JAMstack ecosystem there are plenty of software-as-a-service companies that provide ways of getting and storing specific types of data. Whether you want to send emails, SMS or make phone calls (Twilio) or accept form submissions efficiently (Formspree, Formingo, Formstack, etc.), it seems there’s an API for almost everything.
These are great services that can do a lot of the low-level work of many applications, but once your data is more complex than a spreadsheet or needs to be updated and stored in real time, it might be time to look into a database.
The service APIs can still be in use, but a central database managing the state and operations of your app becomes much more important. Even if you need a database, you still want it to follow the core JAMstack philosophies we outlined above. That means we don’t want to host our own database server. We need a Database-as-a-Service solution. Our database needs to be optimized for the JAMstack:
Optimized for API calls from a browser or build process.
Flexible to model your data in the specific ways your app needs.
Global distribution of our data like a CDN houses our sites.
Hands-free scaling with no need of a database administrator or developer intervention.
Whatever service you look into needs to follow these tenets of serverless data. In our demos, we’ll explore FaunaDB, a global serverless database, featuring native GraphQL to assure that we keep our apps in step with the philosophies of the JAMstack.
Let’s dive into the code!
A JAMstack Guestbook App With Gatsby And Fauna
I’m a big fan of reimagining the internet tools and concepts of the 1990s and early 2000s. We can take these concepts and make them feel fresh with the new set of tools and interactions.
In this demo, we’ll create an application that was all the rage in that time period: the guestbook. A guestbook is nothing but user-generated content and interaction. A user can come to the site, see all the signatures of past “guests” and then leave their own.
To start, we’ll statically render our site and build our data from Fauna during our build step. This will provide the fast performance we expect from a JAMstack site. To do this, we’ll use GatsbyJS.
Initial setup
Our first step will be to install Gatsby globally on our computer. If you’ve never spent much time in the command line, Gatsby’s “part 0” tutorial will help you get up and running. If you already have Node and NPM installed, you’ll install the Gatsby CLI globally and create a new site with it using the following commands:
npm install -g gatsby-cli
gatsby new <directory-to-install-into> <starter>
Gatsby comes with a large repository of starters that can help bootstrap your project. For this demo, I chose a simple starter that came equipped with the Bulma CSS framework.
gatsby new guestbook-app https://github.com/amandeepmittal/gatsby-bulma-quickstart
This gives us a good starting point and structure. It also has the added benefit of coming with styles that are ready to go.
Let’s do a little cleanup for things we don’t need. We’ll start by simplifying our components/header.js file:
import React from 'react';
import './style.scss';
const Header = ({ siteTitle }) => (
<section className="hero gradientBg ">
<div className="hero-body">
<div className="container container--small center">
<div className="content">
<h1 className="is-uppercase is-size-1 has-text-white">
Sign our Virtual Guestbook
</h1>
<p className="subtitle has-text-white is-size-3">
If you like all the things that we do, be sure to sign our virtual guestbook
</p>
</div>
</div>
</div>
</section>
);
export default Header;
This will get rid of much of the branded content. Feel free to customize this section, but we won’t write any of our code here.
Next we’ll clean out the components/midsection.js file. This will be where our app’s code will render.
In this code, we’ve mostly removed the “site” content and added in a couple of new components: a SignForm component that will contain our form for submitting a signature, and a Signatures component to contain the list of signatures.
Now that we have a relatively blank slate, we can set up our FaunaDB database.
Setting Up A FaunaDB Collection
After logging into Fauna (or signing up for an account), you’ll be given the option to create a new Database. We’ll create a new database called guestbook.
Inside this database, we’ll create a “Collection” called signatures. Collections in Fauna are groups of Documents, which are in turn JSON objects.
In this new Collection, we’ll create a new Document with the following JSON:
{
name: "Bryan Robinson",
message:
"Lorem ipsum dolor amet sum Lorem ipsum dolor amet sum Lorem ipsum dolor amet sum Lorem ipsum dolor amet sum"
}
This will be the simple data schema for each of our signatures. For each of these Documents, Fauna will create additional data surrounding it.
{
"ref": Ref(Collection("signatures"), "262884172900598291"),
"ts": 1586964733980000,
"data": {
"name": "Bryan Robinson",
"message": "Lorem ipsum dolor amet sum Lorem ipsum dolor amet sum Lorem ipsum dolor amet sum Lorem ipsum dolor amet sum "
}
}
The ref is the unique identifier inside of Fauna and the ts is the time (as a Unix timestamp) the document was created/updated.
After creating our data, we want an easy way to grab all that data and use it in our site. In Fauna, the most efficient way to get data is via an Index. We’ll create an Index called allSignatures. This will grab and return all of our signature Documents in the Collection.
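If you’d rather use the Fauna Shell than the dashboard UI, a minimal version of that Index can be created with a command along these lines (a sketch; the dashboard form produces the equivalent):

CreateIndex({
  name: "allSignatures",
  source: Collection("signatures")
})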
Now that we have an efficient way of accessing the data in Gatsby, we need Gatsby to know where to get it. Gatsby has a repository of plugins that can fetch data from a variety of sources, Fauna included.
Setting up the Fauna Gatsby Data Source Plugin
npm install gatsby-source-faunadb
After we install this plugin to our project, we need to configure it in our gatsby-config.js file. In the plugins array of our project, we’ll add a new item.
{
resolve: `gatsby-source-faunadb`,
options: {
// The secret for the key you're using to connect to your Fauna database.
// You can generate one of these in the "Security" tab of your Fauna Console.
secret: process.env.YOUR_FAUNADB_SECRET,
// The name of the index you want to query
// You can create an index in the "Indexes" tab of your Fauna Console.
index: `allSignatures`,
// This is the name under which your data will appear in Gatsby GraphQL queries
// With "Signatures", the data is queried as `allSignatures` (as we do below).
type: "Signatures",
// If you need to limit the number of documents returned, you can specify an
// optional maximum number to read, e.g.:
// size: 100
},
},
In this configuration, you provide it your Fauna secret Key, the Index name we created and the “type” we want to access in our Gatsby GraphQL query.
Where did that process.env.YOUR_FAUNADB_SECRET come from?
In your project, create a .env file — and include that file in your .gitignore! This file will give Gatsby’s Webpack configuration the secret value. This will keep your sensitive information safe and not stored in GitHub.
YOUR_FAUNADB_SECRET = "value from fauna"
We can then head over to the “Security” tab in our Database and create a new key. Since this is a protected secret, it’s safe to use a “Server” role. When you save the Key, it’ll provide your secret. Be sure to grab that now, as you can’t get it again (without recreating the Key).
Once the configuration is set up, we can write a GraphQL query in our components to grab the data at build time.
Getting the data and building the template
We’ll add this query to our Midsection component to make the data accessible to both of our child components.
const Midsection = () => {
const data = useStaticQuery(
graphql`
query GetSignatures {
allSignatures {
nodes {
name
message
_ts
_id
}
}
}`
);
// ... rest of the component
}
This will access the Signatures type we created in the configuration. It will grab all the signatures and provide an array of nodes. Those nodes contain the data we specified we need: name, message, _ts, and _id.
We’ll set that data into our state — this will make updating it live easier later.
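As a rough sketch, inside the Midsection component that could look like the following (it assumes React’s useState hook is imported; setSigData is the setter referred to later in the article, while sigData is just our name for the value):

// Right after the useStaticQuery call shown above
const [sigData, setSigData] = useState(data.allSignatures.nodes);

// sigData gets rendered by the Signatures list, and setSigData is passed
// down to the SignForm so a new submission can be appended without a rebuild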
At this point, if you start your Gatsby development server, you should have a list of signatures currently existing in your database. Run the following command to get up and running:
gatsby develop
Any signature stored in our database will build HTML in that component. But how can we get signatures INTO our database?
Let’s set up a signature form component to send data and update our Signatures list.
Let’s Make Our JAMstack Guestbook Interactive
First, we’ll set up the basic structure for our component. It will render a simple form onto the page with a text input, a textarea, and a button for submission.
import React from 'react';
import faunadb, { query as q } from "faunadb"
var client = new faunadb.Client({ secret: process.env.GATSBY_FAUNA_CLIENT_SECRET })
export default class SignForm extends React.Component {
constructor(props) {
super(props)
this.state = {
sigName: "",
sigMessage: ""
}
}
handleSubmit = async event => {
// Handle the submission
}
handleInputChange = event => {
// When an input changes, update the state
}
render() {
return (
<form onSubmit={this.handleSubmit}>
<div className="field">
<div className="control">
<label className="label">Label
<input
className="input is-fullwidth"
name="sigName"
type="text"
value={this.state.sigName}
onChange={this.handleInputChange}
/>
</label>
</div>
</div>
<div className="field">
<label>
Your Message:
<textarea
rows="5"
name="sigMessage"
value={this.state.sigMessage}
onChange={this.handleInputChange}
className="textarea"
placeholder="Leave us a happy note"></textarea>
</label>
</div>
<div className="buttons">
<button className="button is-primary" type="submit">Sign the Guestbook</button>
</div>
</form>
)
}
}
To start, we’ll set up our state to include the name and the message. We’ll default them to blank strings and insert them into our input and textarea elements.
When a user changes the value of one of these fields, we’ll use the handleInputChange method. When a user submits the form, we’ll use the handleSubmit method.
The input change will accept the event. From that event, it will get the current target’s value and name. We can then modify the state of the properties on our state object — sigName, sigMessage or anything else.
Once the state has changed, we can use the state in our handleSubmit method.
This function will call a new createSignature() method. This will connect to Fauna to create a new Document from our state items.
The addSignature() method will update our Signatures list data with the response we get back from Fauna.
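The bodies of those methods aren’t shown above; based on that description, a sketch might look like this (createSignature() is the Fauna call covered in the next few paragraphs):

handleInputChange = event => {
  const { name, value } = event.target;
  // "name" is either sigName or sigMessage, matching our state keys
  this.setState({ [name]: value });
}

handleSubmit = async event => {
  event.preventDefault();
  // createSignature() sends the new Document to Fauna and returns a formatted signature
  const newSignature = await createSignature(this.state.sigName, this.state.sigMessage);
  this.addSignature(newSignature);
}

addSignature = newSignature => {
  // setSigData comes in as a prop from Midsection and re-renders the Signatures list
  this.props.setSigData(existing => [newSignature, ...existing]);
}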
In order to write to our database, we’ll need to set up a new key in Fauna with minimal permissions. Our server key is allowed higher permissions because it’s only used during build and won’t be visible in our source.
This key only needs to allow the ability to create new items in our signatures Collection.
Note: A user could still be malicious with this key, but they can only do as much damage as a bot submitting that form, so it’s a trade-off I’m willing to make for this app.
For this, we’ll create a new “Role” in the “Security” tab of our dashboard. We can add permissions around one or more of our Collections. In this demo, we only need signatures and we can select the “Create” functionality.
After that, we generate a new key that uses that role.
To use this key, we’ll instantiate a new version of the Fauna JavaScript SDK. This is a dependency of the Gatsby plugin we installed, so we already have access to it.
import faunadb, { query as q } from "faunadb"
var client = new faunadb.Client({ secret: process.env.GATSBY_FAUNA_CLIENT_SECRET })
By using an environment variable prefixed with GATSBY_, we gain access to it in our browser JavaScript (be sure to add it to your .env file).
By importing the query object from the SDK, we gain access to any of the methods available in Fauna’s first-party Fauna Query Language (FQL). In this case, we want to use the Create method to create a new document on our Collection.
We pass the Create function to the client.query() method. Create takes a Collection reference and an object of information to pass to a new Document. In this case, we use q.Collection and a string of our Collection name to get the reference to the Collection. The second argument is for our data. You can pass other items in the object, so we need to tell Fauna that we’re specifically sending it the data property on that object.
Next, we pass it the name and message we collected in our state. The response we get back from Fauna is the entire object of our Document. This includes our data in a data object, as well as a Fauna ID and timestamp. We reformat that data in a way that our Signatures list can use and return that back to our handleSubmit function.
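Putting that together, a sketch of createSignature() could look like this; the exact reshaping at the end is an assumption, the point being to match the shape our build-time query produces:

const createSignature = async (sigName, sigMessage) => {
  try {
    const queryResponse = await client.query(
      q.Create(
        q.Collection("signatures"),
        { data: { name: sigName, message: sigMessage } }
      )
    );
    // Reformat Fauna's response into the same shape the Signatures list already uses
    return {
      name: queryResponse.data.name,
      message: queryResponse.data.message,
      _ts: queryResponse.ts,
      _id: queryResponse.ref.id
    };
  } catch (error) {
    console.log(error);
  }
}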
Our submit handler will then pass that data into our setSigData prop which will notify our Signatures component to rerender with that new data. This gives our user immediate feedback that their submission has been accepted.
Rebuilding the site
This is all working in the browser, but the data hasn’t been updated in our static application yet.
From here, we need to tell our JAMstack host to rebuild our site. Many have the ability to specify a webhook to trigger a deployment. Since I’m hosting this demo on Netlify, I can create a new “Deploy webhook” in their admin and create a new triggerBuild function. This function will use the native JavaScript fetch() method and send a post request to that URL. Netlify will then rebuild the application and pull in the latest signatures.
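A sketch of that function; the webhook URL comes from the Netlify admin, and the environment variable name here is only a placeholder:

function triggerBuild() {
  // Netlify build hooks accept a POST request with an empty body
  fetch(process.env.GATSBY_BUILD_HOOK_URL, { method: "POST" });
}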
Both Gatsby Cloud and Netlify have implemented ways of handling “incremental” builds with Gatsby, drastically speeding up build times. This sort of build can happen very quickly now and feel almost as fast as a traditional server-rendered site.
Every signature that gets added gives the user quick feedback that it’s been submitted, is perpetually stored in a database, and is served as HTML via a build process.
Still feels a little too much like a typical website? Let’s take all these concepts a step further.
Create A Mindful App With Auth0, Fauna Identity And Fauna User-Defined Functions (UDF)
Being mindful is an important skill to cultivate. Whether it’s thinking about your relationships, your career, your family, or just going for a walk in nature, it’s important to be mindful of the people and places around you.
This app intends to help you focus on one randomized idea every day and review the various ideas from recent days.
To do this, we need to introduce a key element to most apps: authentication. With authentication, comes extra security concerns. While this data won’t be particularly sensitive, you don’t want one user accessing the history of any other user.
Since we’ll be scoping data to a specific user, we also don’t want to store any secret keys on browser code, as that would open up other security flaws.
We could create an entire authentication flow using nothing but our wits and a user database with Fauna. That may seem daunting and moves us away from the features we want to write. The great thing is that there’s certainly an API for that in the JAMstack! In this demo, we’ll explore integrating Auth0 with Fauna. We can use the integration in many ways.
Setting Up Auth0 To Connect With Fauna
Many implementations of authentication with the JAMstack rely heavily on Serverless functions. That moves much of the security concerns from a security-focused company like Auth0 to the individual developer. That doesn’t feel quite right.
The typical flow would be to send a login request to a serverless function. That function would request a user from Auth0. Auth0 would provide the user’s JSON Web Token (JWT) and the function would provide any additional information about the user our application needs. The function would then bundle everything up and send it to the browser.
There are a lot of places in that authentication flow where a developer could introduce a security hole.
Instead, let’s request that Auth0 bundle everything up for us inside the JWT it sends. Keeping security in the hands of the folks who know it best.
We’ll do this by using Auth0’s Rules functionality to ask Fauna for a user token to encode into our JWT. This means that unlike our Guestbook, we won’t have any Fauna keys in our front-end code. Everything will be managed in memory from that JWT token.
Setting up Auth0 Application and Rule
First, we’ll need to set up the basics of our Auth0 Application.
Following the configuration steps in their basic walkthrough gets the important basic information filled in. Be sure to fill out the proper localhost port for your bundler of choice as one of your authorized domains.
After the basics of the application are set up, we’ll go into the “Rules” section of our account.
Click “Create Rule” and select “Empty Rule” (or start from one of their many templates that are helpful starting points).
We give the rule a function that takes the user, context, and a callback from Auth0. We need to grab a Server token to initialize our Fauna JavaScript SDK and create our client. Just like in our Guestbook, we’ll create a new Database and manage our Tokens in “Security”.
From there, we want to send a query to Fauna to create or log in our user. To keep our Rule code simple (and make it reusable), we’ll write our first Fauna “User-Defined Function” (UDF). A UDF is a function written in FQL that runs on Fauna’s infrastructure.
First, we’ll set up a Collection for our users. You don’t need to make a first Document here, as they’ll be created behind the scenes by our Auth0 rule whenever a new Auth0 user is created.
Next, we need an Index to search our users Collection based on the email address. This Index is simpler than our Guestbook, so we can add it to the Dashboard. Name the Index user_by_email, set the Collection to users, and the Terms to data.email. This will allow us to pass an email address to the Index and get a matching user Document back.
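For reference, the equivalent command in the Fauna Shell would look roughly like this:

CreateIndex({
  name: "user_by_email",
  source: Collection("users"),
  terms: [{ field: ["data", "email"] }]
})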
It’s time to create our UDF. In the Dashboard, navigate to “Functions” and create a new one named user_login_or_create.
Query(
Lambda(
["userEmail", "userObj"], // Arguments
Let(
{ user: Match(Index("user_by_email"), Var("userEmail")) }, // Set user variable
If(
Exists(Var("user")), // Check if the User exists
Create(Tokens(null), { instance: Select("ref", Get(Var("user"))) }), // Return a token for that item in the users collection (in other words, the user)
Let( // Else statement: Set a variable
{
newUser: Create(Collection("users"), { data: Var("userObj") }), // Create a new user and get its reference
token: Create(Tokens(null), { // Create a token for that user
instance: Select("ref", Var("newUser"))
})
},
Var("token") // return the token
)
)
)
)
)
Our UDF will accept a user email address and the rest of the user information. If a user exists in a users Collection, it will create a Token for the user and send that back. If a user doesn’t exist, it will create that user Document and then send a Token to our Auth0 Rule.
We can then store the Token as an idToken attached to the context in our JWT. The token needs a URL as a key. Since this is a Fauna token, we can use a Fauna URL. Whatever this is, you’ll use it to access this in your code.
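The Rule code itself isn’t reproduced above, so here’s a sketch of how it could look. The function name is arbitrary, and the Server secret would be stored in the Rule’s configuration settings rather than hard-coded:

function createFaunaUserToken(user, context, callback) {
  const faunadb = require('faunadb');
  const q = faunadb.query;
  // A Server key for our database, stored in the Rule settings as FAUNA_SERVER_SECRET
  const client = new faunadb.Client({ secret: configuration.FAUNA_SERVER_SECRET });

  client.query(
    // Call the UDF we created above with the email and the rest of the user object
    q.Call('user_login_or_create', user.email, { email: user.email, name: user.name })
  )
    .then(faunaToken => {
      // Attach the Fauna Token's secret to the JWT under a URL-shaped key
      context.idToken['https://faunad.com/id/secret'] = faunaToken.secret;
      callback(null, user, context);
    })
    .catch(error => callback(error));
}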
This Token doesn’t have any permissions yet. We need to go into our Security rules and set up a new Role.
We’ll create an “AuthedUser” role. We don’t need to add any permissions yet, but as we create new UDFs and new Collections, we’ll update the permissions here. Instead of generating a new Key to use this Role, we want to add to this Role’s “Memberships”. On the Memberships screen, you can select a Collection to add as a member. The documents in this Collection (in our case, our Users), will have the permissions set on this role given via their Token.
Now, when a user logs in via Auth0, they’ll be returned a Token that matches their user Document and has its permissions.
import createAuth0Client from '@auth0/auth0-spa-js';
import { changeToHome } from './layouts/home'; // Home Layout
import { changeToMission } from './layouts/myMind'; // Current Mindfulness Mission Layout
let auth0 = null;
var currentUser = null;
const configureClient = async () => {
// Configures Auth0 SDK
auth0 = await createAuth0Client({
domain: "mindfulness.auth0.com",
client_id: "32i3ylPhup47PYKUtZGRnLNsGVLks3M6"
});
};
const checkUser = async () => {
// return user info from any method
const isAuthenticated = await auth0.isAuthenticated();
if (isAuthenticated) {
return await auth0.getUser();
}
}
const loadAuth = async () => {
// Loads and checks auth
await configureClient();
const isAuthenticated = await auth0.isAuthenticated();
if (isAuthenticated) {
// show the gated content
currentUser = await auth0.getUser();
changeToMission(); // Show the "Today" screen
return;
} else {
changeToHome(); // Show the logged out "homepage"
}
const query = window.location.search;
if (query.includes("code=") && query.includes("state=")) {
// Process the login state
await auth0.handleRedirectCallback();
currentUser = await auth0.getUser();
changeToMission();
// Use replaceState to redirect the user away and remove the querystring parameters
window.history.replaceState({}, document.title, "/");
}
}
const login = async () => {
await auth0.loginWithRedirect({
redirect_uri: window.location.origin
});
}
const logout = async () => {
auth0.logout({
returnTo: window.location.origin
});
window.localStorage.removeItem('currentMindfulItem')
changeToHome(); // Change back to logged out state
}
export { auth0, loadAuth, currentUser, checkUser, login, logout }
First, we configure the SDK with our client_id from Auth0. This is safe information to store in our code.
Next, we set up a function that can be exported and used in multiple files to check if a user is logged in. The Auth0 library provides an isAuthenticated() method. If the user is authenticated, we can return the user data with auth0.getUser().
We set up login() and logout() functions and a loadAuth() function to handle the return from Auth0 and change the state of our UI to the “Mission” screen with today’s Mindful idea.
Once this is all set up, we have our authentication and user login squared away.
We’ll create a helper function that our Fauna calls can reference to get a client set up with the proper token.
const AUTH_PROP_KEY = "https://faunad.com/id/secret";
var faunadb = require('faunadb'),
q = faunadb.query;
async function getUserClient(currentUser) {
return new faunadb.Client({ secret: currentUser[AUTH_PROP_KEY]})
}
This returns a new connection to Fauna using our Token from Auth0. This token works the same as the Keys from previous examples.
Generate a random Mindful topic and store it in Fauna
To start, we need a Collection of items to store our list of Mindful objects. We’ll create a Collection called mindful_things and create a number of items with the following schema:
{
"title": "Career",
"description": "Think about the next steps you want to make in your career. What's the next easily attainable move you can make?",
"color": "#C6D4FF",
"textColor": "black"
}
From here, we’ll move to our JavaScript and create a function for adding and returning a random item from that Collection.
async function getRandomMindfulFromFauna(userObj) {
const client = await getUserClient(userObj);
try {
let mindfulThings = await client.query(
q.Paginate(
q.Documents(q.Collection('mindful_things'))
)
)
let randomMindful = mindfulThings.data[Math.floor(Math.random()*mindfulThings.data.length)];
let creation = await client.query(q.Call('addUserMindful', randomMindful));
return creation.data.mindful;
} catch (error) {
console.log(error)
}
}
To start, we’ll instantiate our client with our getUserClient() method.
From there, we’ll grab all the Documents from our mindful_things Collection. Paginate() by default grabs 64 items per page, which is more than enough for our data. We’ll grab a random item from the array that’s returned from Fauna. This will be what Fauna refers to as a “Ref”. A Ref is a full reference to a Document that the various FQL functions can use to locate a Document.
We’ll pass that Ref to a new UDF that will handle storing a new, timestamped object for that user stored in a new user_things Collection.
We’ll create the new Collection, but we’ll have our UDF provide the data for it when called.
We’ll create a new UDF in the Fauna dashboard with the name addUserMindful that will accept that random Ref.
As with our login UDF before, we’ll use the Lambda() FQL method which takes an array of arguments.
Without passing any user information to the function, FQL is able to obtain our User Ref just by calling the Identity() function. All we have from our randomRef is the reference to our Document. We’ll run a Get() to get the full object, then Create() a new Document in the user_things Collection with our User Ref and our random information.
We then return the creation object back out of our Lambda. We then go back to our JavaScript and return the data object with the mindful key back to where this function gets called.
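Put together, the UDF body could look something like this sketch:

Query(
  Lambda(
    ["randomRef"], // The random Document reference passed in from our JavaScript
    Create(Collection("user_things"), {
      data: {
        user: Identity(), // The calling user, derived from the Token making the request
        mindful: Select("data", Get(Var("randomRef"))) // The full mindful object
      }
    })
  )
)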
Render our Mindful Object on the page
When our user is authenticated, you may remember it called a changeToMission() method. This function switches the items on the page from the “Home” screen to markup that can be filled in by our data. After it’s added to the page, the renderToday() function gets called to add content by a few rules.
The first rule of Serverless Data Club is not to make HTTP requests unless you have to. In other words, cache when you can. Whether that’s creating a full PWA-scale application with Service Workers or just caching your database response with localStorage, cache data, and fetch only when necessary.
The first rule of our conditional is to check localStorage. If localStorage does contain a currentMindfulItem, then we need to check its date to see if it’s from today. If it is, we’ll render that and make no new requests.
The second rule of Serverless Data Club is to make as few requests as possible without the responses of those requests being too large. In that vein, our second conditional rule is to check the latest item from the current user and see if it is from today. If it is, we’ll store it in localStorage for later and then render the results.
Finally, if none of these are true, we’ll fire our getRandomMindfulFromFauna() function, format the result, store that in localStorage, and then render the result.
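In code, that conditional flow might look roughly like the sketch below. isToday() is a hypothetical date helper, render() and storeCurrent() are the methods mentioned later, and getLatestFromFauna() is covered in the next section:

async function renderToday() {
  const cached = JSON.parse(window.localStorage.getItem('currentMindfulItem'));

  // Rule 1: if localStorage already holds today's item, use it and make no requests
  if (cached && isToday(cached.latestTime)) {
    return render(cached);
  }

  // Rule 2: check the user's latest stored item in Fauna and use it if it's from today
  const latest = await getLatestFromFauna(currentUser);
  if (latest && isToday(latest.latestTime)) {
    storeCurrent(latest);
    return render(latest);
  }

  // Rule 3: otherwise create a brand new item for today
  const fresh = await getRandomMindfulFromFauna(currentUser);
  storeCurrent(fresh);
  render(fresh);
}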
Get the latest item from a user
I glossed over it in the last section, but we also need some functionality to retrieve the latest mindful object from Fauna for our specific user. In our getLatestFromFauna() method, we’ll again instantiate our Fauna client and then call a new UDF.
Our new UDF is going to call a Fauna Index. An Index is an efficient way of doing a lookup on a Fauna database. In our case, we want to return all user_things by the user field. Then we can also sort the result by timestamp and reverse the default ordering of the data to show the latest first.
Simple Indexes can be created in the Index dashboard. Since we want to do the reverse sort, we’ll need to enter some custom FQL into the Fauna Shell (you can do this in the database dashboard Shell section).
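The exact FQL isn’t reproduced here, but it would be along these lines; the values are ordered timestamp-first so the UDF below can pick them apart by position:

CreateIndex({
  name: "getMindfulByUserReverse",
  source: Collection("user_things"),
  terms: [{ field: ["data", "user"] }],
  values: [
    { field: ["ts"], reverse: true }, // newest first
    { field: ["ref"] }
  ]
})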
This creates an Index named getMindfulByUserReverse from our user_things Collection. The terms object is a list of fields to search by. In our case, this is just the user field on the data object. We then provide values to return. In our case, we need the Ref and the Timestamp, and we’ll use the reverse property to reverse-order our results by the timestamp.
We’ll create a new UDF to use this Index.
Query(
Lambda(
[],
If( // Check if there is at least 1 in the index
GT(
Count(
Select(
"data",
Paginate(Match(Index("getMindfulByUserReverse"), Identity()))
)
),
0
),
Let( // if more than 0
{
match: Paginate(
Match(Index("getMindfulByUserReverse"), Identity()) // Search the index by our User
),
latestObj: Take(1, Var("match")), // Grab the first item from our match
latestRef: Select(
["data"],
Get(Select(["data", 0, 1], Var("latestObj"))) // Get the data object from the item
),
latestTime: Select(["data", 0, 0], Var("latestObj")), // Get the time
merged: Merge( // merge those items into one object to return
{ latestTime: Var("latestTime") },
{ latestMindful: Var("latestRef") }
)
},
Var("merged")
),
Let({ error: { err: "No data" } }, Var("error")) // if there aren't any, return an error.
)
)
)
This time our Lambda() function doesn’t need any arguments since we’ll have our User based on the Token used.
First, we’ll check to see if there’s at least 1 item in our Index. If there is, we’ll grab the first item’s data and time and return that back as a merged object.
After we get the latest from Fauna in our JavaScript, we’ll format it into the structure our storeCurrent() and render() methods expect and return that object.
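As a sketch, that might look like the following; the UDF’s name isn’t given above, so 'getLatestMindful' is a guess, as is the exact shape of the formatted object:

async function getLatestFromFauna(userObj) {
  const client = await getUserClient(userObj);
  try {
    const latest = await client.query(q.Call('getLatestMindful'));
    if (latest.err) return null; // the UDF returns { err: "No data" } when nothing is stored yet
    // Flatten into the shape storeCurrent() and render() expect
    return {
      latestTime: latest.latestTime,
      ...latest.latestMindful.mindful
    };
  } catch (error) {
    console.log(error);
  }
}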
Now, we have an application that creates, stores, and fetches data for a daily message to contemplate. A user can use this on their phone, on their tablet, on the computer, and have it all synced. We could turn this into a PWA or even a native app with a system like Ionic.
We’re still missing one feature: viewing a certain number of past items. Since we’ve stored this in our database, we can retrieve them in whatever way we need to.
Pull the latest X Mindful Missions to get a picture of what you’ve thought about
We’ll create a new JavaScript method paired with a new UDF to tackle this.
getSomeFromFauna will take an integer count to ask Fauna for a certain number of items.
Our UDF will be very similar to the getLatestFromFauna UDF. Instead of returning the first item, we’ll Take() the number of items from our array that matches the integer that gets passed into our UDF. We’ll also begin with the same conditional, in case a user doesn’t have any items stored yet.
Query(
Lambda(
["count"], // Number of items to return
If( // Check if there are any objects
GT(
Count(
Select(
"data",
Paginate(Match(Index("getMindfulByUserReverse"), Identity(null)))
)
),
0
),
Let(
{
match: Paginate(
Match(Index("getMindfulByUserReverse"), Identity(null)) // Search the Index by our User
),
latestObjs: Select("data", Take(Var("count"), Var("match"))), // Get the data that is returned
mergedObjs: Map( // Loop over the objects
Var("latestObjs"),
Lambda(
"latestArray",
Let( // Build the data like we did in the LatestMindful function
{
ref: Select(["data"], Get(Select([1], Var("latestArray")))),
latestTime: Select(0, Var("latestArray")),
merged: Merge(
{ latestTime: Var("latestTime") },
Select("mindful", Var("ref"))
)
},
Var("merged") // Return this to our new array
)
)
)
},
Var("mergedObjs") // return the full array
),
{ latestMindful: [{ title: "No additional data" }] } // if there are no items, send back a message to display
)
)
)
In this demo, we created a full-fledged app with serverless data. Because the data is served from a CDN, it can be as close to a user as possible. We used FaunaDB‘s features, such as UDFs and Indexes, to optimize our database queries for speed and ease of use. We also made sure we only queried our database the bare minimum to reduce requests.
Where To Go With Serverless Data
The JAMstack isn’t just for sites. It can be used for robust applications as well. Whether that’s a game, a CRUD application, or an app to keep you mindful of your surroundings, you can do a lot without sacrificing customization and without spinning up your own non-distributed database system.
With performance on the mind of everyone creating on the JAMstack, whether for cost or for user experience, finding a good place to store and retrieve your data is a high priority. Find a spot that meets your needs, those of your users, and the ideals of the JAMstack.
An experienced interviewer takes care of many things: building hypotheses, selecting interviewees, composing invitations, scheduling appointments, setting the stage, and, of course, writing an interview script. Any of these preparations can go wrong, but a failure in the script means all the effort is in vain. So, if you haven’t interviewed people much before, or you have to delegate interviews to non-designers, I’d recommend paying attention to high-quality questions first of all. Then there is a chance they’ll smooth out other potential shortcomings.
We’ll talk about 12 kinds of questions explained with examples. The first part includes six frequent mistakes and how to fix them. The second part presents six ways to improve decent questions and take control of difficult situations.
Pitfall #1: Hypothetical Questions
“I don’t care if people will use the new features,” said no budget owner ever. When investing in design and development, people want to make sure the money will come back. And asking directly, unfortunately, is not an effective way to check, although it may intuitively seem like a great idea: “Let’s go out of the office and ask ‘em!” In my practice, there have been many cases where people said they liked a feature but were reluctant to pay for it. So, is there any method to make sure that something that doesn’t function yet will be needed once implemented?
I cannot recall anything more relevant than referring to people’s past experiences and behavior in similar situations. If users don’t have a habit of saving articles for later on all the news sites, what is the chance they’ll start doing it on your website? As Jakob Nielsen said, “Users spend most of their time on other sites.”
Pitfall #2: Closed Questions
Closed questions appear from a natural human wish to be approved and gain support. However, in the interviews, they aren’t useful enough. A yes-or-no question doesn’t provoke reserved people to talk and doesn’t help much to reveal their motives and way of thinking.
To be fair, closed questions are not evil. For example, they can serve a handy facilitation technique to make a talkative interviewee stop and turn back to the point. Also, they can help to double-check the information previously received through open questions. But if your goal is to gather as much information as possible, open questions will work better.
Pitfall #3: Leading Questions
The things considered polite in everyday conversations may be harmful to the efficiency of a user interview. Trying to help an interviewee with the options can guide them in saying what they don’t really think. A user interview is not the most comfortable situation for the majority of people, and they try to pass it as quickly as possible and at minimum effort. As a result, people tend to agree with anything more or less close to the truth or with a socially-expected choice instead of composing their answer from scratch.
That’s why it’s better to move one step at a time and build the next question upon the answer to the previous one.
Pitfall #4: Selfish Questions
Idea authors sometimes act like proud parents: they want everyone to admire their child. The downside of such an attitude in user interviews is the unconscious use of the pronouns “we” or “our.” As a result, users feel as if they are taking an exam and should either adore what they see or maintain neutrality, disguising real complaints.
In your interview script, replace possessive pronouns with neutral words like “this site” and “that application” or just call a subject of conversation by the name.
Pro tip: as an interviewer, you can try hiding or understating your job title and relation to the topic.
Pitfall #5: Stacked Questions
There are many reasons why we ask stacked questions. It can be a human desire to be heard, the fear of being interrupted, or worrying that you might forget the next question while listening to the current answer. However, for interview efficiency, stacked questions are not an option. Interviewees often select the one they are more comfortable answering or the one they managed to memorize from the stack. Remembering questions shouldn’t become the interviewee’s burden, so it’s better to ask them one by one. (And maybe the answers are so comprehensive that you won’t need some of the planned questions anymore.)
Pitfall #6: Explanation Instead Of A Question
Teams that work together for some time often establish their own language and tend to bring it into the product they are building. But will users understand such words as “dashboard,” “smart update,” “inclusion,” or “trigger”? Explanatory questions put an interviewee into the position of a lexicographer and help to check what sense (if any) they put into brand concepts and expert terminology. For a designer, this gives insights into how the future product (a website, app, or self-service terminal) should speak to people.
The opposite of this approach is explaining things yourself and leading people before they have a chance to share their opinions. Think about this: in the interview, you are in a position of power and can pressure users into accepting your point. But will you always be there for thousands of users to explain how the product works? Probably not. So, it’s more efficient to discover people’s thinking styles and then create self-explanatory solutions rather than create something and push it in interviews.
We’ve just covered six major interviewing mistakes. The next portion of advice will be about making fairly good questions even more powerful and dealing with difficult interview situations.
Pitfall #7: Question Clutter
Open questions are great until you realize there are too many details to figure out. The best method in such a situation is storytelling: asking the interviewee to describe a recent or especially memorable experience. As a result, an interviewee talks about a real situation and is less inclined to compose a socially desired answer or summarize various cases.
Besides, storytelling gives the freedom to speak about aspects a person considers necessary. Usually, people start with or talk longer about the most crucial experiences.
Pitfall #8: Too General Questions
When you’ve figured out regularity or general attitude, it’s the right moment to ask the interviewee about an example. Recent-experience questions can fill in the gaps, which might have appeared while answering general questions. For an interviewer, it’s another powerful method to check if users aren’t accidentally exaggerating or dropping significant details.
Pitfall #9: Talking About What You Can Observe
When you are lucky enough to interview people in their “natural habitat,” it’s a perfect chance to see their work process with your own eyes. So, if there is an opportunity to ask a user to demonstrate typical actions, offline or online, you’ll gather tons of insights. It’s a chance to learn about users’ habits (including shortcuts and favorite programs), level of computer skills, software environment, and way of thinking (mental model).
Pitfall #10: Tolerating Vagueness
Abstract nouns and adjectives, for example, “comfort,” “accessibility,” “support,” “smart,” or “user-friendly,” are probably the trickiest words in the language because everyone interprets them differently. When you hear abstract names, that’s not enough to document them as they are. These words require “unboxing” and only then can support design decision-making.
“Nothing is clear enough” has become my second favorite slogan after the classical UX phrase “It depends.” “Nothing is clear enough” means that you cannot be certain about the meaning if you hardly visualize a scenario from your interviewee’s life. The best way to unbox abstract concepts is by turning them into verbs.
Pitfall #11: Missing Numbers
Generalizations like “all,” “never,” “always,” “nobody,” “often,” or “frequently” are as unclear as abstract nouns and adjectives. But the way to “unbox” generalizations is different: through quantifying. Basically, you ask questions about approximate numbers or proportions. An interviewee, of course, might not give you exact statistics, but at least you’ll understand whether the user’s “very frequent” means “more than half” or “nearly 20%.” Another example: the same phrase “a lot” can mean “50 per day” for work emails, but only “5 per year” for cybersecurity alerts.
Pitfall #12: Undervalued WH-Questions
As a non-native speaker, I remember these questions from the English classes at school. The teacher often asked us to make WH-questions (What? Where? When? Who? How?) so that we could start a conversation and break the awkward silence. Nothing had changed from school times. Now, as a designer, I often use WH-questions as the main interviewing instrument.
My favorite question is “why.” For the sake of politeness and a more friendly atmosphere, I conceal it behind the following phrases, “What are you trying to achieve when you…?” or “Can you please explain the reason/value of…?” This is how in pursuit of a root cause you can ask several “whys” in a row without annoying your interviewee.
Summary
The question techniques above are pretty straightforward and might not take into account the nuances of a particular conversation or interviewee. Of course, even the best questions won’t make all the answers automatically objective, but they can make information more reliable and actionable. All in all, it’s always on an interviewer to adjust according to the situation. Here are the three core principles if you are in doubt about particular questions.
The past predicts better than promises
Ask about cases from the past and similar examples from other areas of a user’s life rather than about hypothetical futures.
Let them tell their story; your ideas can wait
The goal of an interview is to explore the truth, not to sell or demonstrate something. Even if you manage to force an interviewee into supporting your idea, it doesn’t mean the rest of your users will agree. Also, give preference to clarifying the unknown over checking hypotheses; for hypotheses, a better method is prototyping and testing.
If you cannot imagine it, you don’t get it
In a series of 1-2-hour user interviews, it’s easy to get lazy and pretend you understand what you hear. Try challenging the interviewee’s statements in your mind: “Did he tell the truth? Do I know why she says that? What exactly do they mean by telling me about it?”
There’s some interesting CSS trickery in Jason Pamental’s latest Web Fonts & Typography News. Jason wanted to bring swipeable columns to his digital book experience on mobile. Which brings up an interesting question right away: how do you set full-width columns that add more columns horizontally, as needed? Well, that’s a good trick right there, and it’s a one-liner:
columns: 100vw auto;
But it gets more complicated and disappointing from there.
With just a smidge more formatting to the columns:
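(The exact styles aren’t shown in this excerpt; something along these lines is presumably meant, giving the columns a fixed height and letting the extra columns extend off to the right.)

main {
  columns: 100vw auto;
  column-gap: 2rem;
  height: 90vh;
  padding: 1rem;
  overflow-x: auto;
}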
We probably wouldn’t apply this effect on desktop, but hey, that’s what media queries are for. On mobile we get…
That herky-jerky scrolling makes this a bad experience right there. We can smooth that out with -webkit-overflow-scrolling: touch;…
The smoothness is maybe better, but the fact that the columns don’t snap into place makes it almost just as bad of a reading experience. That’s what scroll-snap is for, but alas:
Unfortunately it turns out you need a block-level element to which you can snap, and the artificially-created columns don’t count as such.
Oh noooooo. So close! But so far!
If we actually want scroll snapping, the content will need to be in block-level elements (like divs). It’s easy enough to set up a horizontal row of divs with flexbox, like…
main {
display: flex;
}
main > div {
flex: 0 0 100vw;
}
But… how many divs do we need? Who knows! This is arbitrary content that might change. And even if we did know, how would we flow content naturally between the divs? That’s not a thing. That’s why it sucks that CSS regions never happened. So to make this nice swiping experience possible in CSS, we’d need some kind of CSS regions feature capable of auto-generating repeating block-level elements as needed by the content, and that isn’t possible right now.
Jason didn’t stop there! He used JavaScript to figure out something that stops well short of some heavy scrolljacking thing. First, he figures out how many “pages” wide the CSS columns technique produces. Then, he adds spacer-divs to the scrolling element, each one the width of the page, and those are the things the scrolling element can scroll-snap to. Very clever.
At the moment, you can experience it at the book site by flipping on an optional setting.
We can use JavaScript to get the value of a CSS custom property. Robin wrote up a detailed explanation about this in Get a CSS Custom Property Value with JavaScript. To review, let’s say we’ve declared a single custom property on the HTML element:
html {
--color-accent: #00eb9b;
}
In JavaScript, we can access the value with getComputedStyle and getPropertyValue:
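(A sketch of that call, using the custom property declared above:)

const colorAccent = getComputedStyle(document.documentElement)
  .getPropertyValue('--color-accent'); // "#00eb9b"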
Perfect. Now we have access to our accent color in JavaScript. You know what’s cool? If we change that color in CSS, it updates in JavaScript as well! Handy.
What happens, though, when it’s not just one property we need access to in JavaScript, but a whole bunch of them?
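We end up with a line like this for every single property (the extra property names here are made up for illustration):

const colorAccent = getComputedStyle(document.documentElement).getPropertyValue('--color-accent');
const colorText = getComputedStyle(document.documentElement).getPropertyValue('--color-text');
const colorBackground = getComputedStyle(document.documentElement).getPropertyValue('--color-background');
const fontBase = getComputedStyle(document.documentElement).getPropertyValue('--font-base');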
We’re repeating ourselves a lot. We could shorten each one of these lines by abstracting the common tasks to a function.
const getCSSProp = (element, propName) => getComputedStyle(element).getPropertyValue(propName);
const colorAccent = getCSSProp(document.documentElement, '--color-accent'); // #00eb9b
// repeat for each custom property...
That helps reduce code repetition, but we still have a less-than-ideal situation. Every time we add a custom property in CSS, we have to write another line of JavaScript to access it. This can and does work fine if we only have a few custom properties. I’ve used this setup on production projects before. But, it’s also possible to automate this.
Let’s walk through the process of automating it by making a working thing.
What are we making?
We’ll make a color palette, which is a common feature in pattern libraries. We’ll generate a grid of color swatches from our CSS custom properties.
Here’s the complete demo that we’ll build step-by-step.
Let’s set the stage. We’ll use an unordered list to display our palette. Each swatch is an <li> element that we’ll render with JavaScript.
<ul class="colors"></ul>
The CSS for the grid layout isn’t pertinent to the technique in this post, so we won’t look at it in detail. It’s available in the CodePen demo.
Now that we have our HTML and CSS in place, we’ll focus on the JavaScript. Here’s an outline of what we’ll do with our code:
Get all stylesheets on a page, both external and internal
Discard any stylesheets hosted on third-party domains
Get all rules for the remaining stylesheets
Discard any rules that aren’t basic style rules
Get the name and value of all CSS properties
Discard non-custom CSS properties
Build HTML to display the color swatches
Let’s get to it.
Step 1: Get all stylesheets on a page
The first thing we need to do is get all external and internal stylesheets on the current page. Stylesheets are available as members of the global document.
document.styleSheets
That returns an array-like object. We want to use array methods, so we’ll convert it to an array. Let’s also put this in a function that we’ll use throughout this post.
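A first pass at that function can be as simple as:

const getCSSCustomPropIndex = () => [...document.styleSheets];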
When we invoke getCSSCustomPropIndex, we see an array of CSSStyleSheet objects, one for each external and internal stylesheet on the current page.
Step 2: Discard third-party stylesheets
If our script is running on https://example.com, any stylesheet we want to inspect must also be on https://example.com. This is a security feature. From the MDN docs for CSSStyleSheet:
In some browsers, if a stylesheet is loaded from a different domain, accessing cssRules results in SecurityError.
That means that if the current page links to a stylesheet hosted on https://some-cdn.com, we can’t get custom properties — or any styles — from it. The approach we’re taking here only works for stylesheets hosted on the current domain.
CSSStyleSheet objects have an href property. Its value is the full URL to the stylesheet, like https://example.com/styles.css. Internal stylesheets have an href property, but the value will be null.
Let’s write a function that discards third-party stylesheets. We’ll do that by comparing the stylesheet’s href value to the current location.origin.
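Something like this works (the helper name here is ours):

const isSameDomain = (styleSheet) => {
  // Internal stylesheets have no href, so keep them
  if (!styleSheet.href) {
    return true;
  }
  return styleSheet.href.indexOf(window.location.origin) === 0;
};

const getCSSCustomPropIndex = () =>
  [...document.styleSheets].filter(isSameDomain);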
With the third-party stylesheets discarded, we can inspect the contents of those remaining.
Step 3: Get all rules for the remaining stylesheets
Our goal for getCSSCustomPropIndex is to produce an array of arrays. To get there, we’ll use a combination of array methods to loop through, find values we want, and combine them. Let’s take a first step in that direction by producing an array containing every style rule.
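One way to write that:

const getCSSCustomPropIndex = () =>
  [...document.styleSheets].filter(isSameDomain).reduce(
    (finalArr, sheet) => finalArr.concat([...sheet.cssRules]),
    []
  );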
We use reduce and concat because we want to produce a flat array where every first-level element is what we’re interested in. In this snippet, we iterate over individual CSSStyleSheet objects. For each one of them, we need its cssRules. From the MDN docs:
The read-only CSSStyleSheet property cssRules returns a live CSSRuleList which provides a real-time, up-to-date list of every CSS rule which comprises the stylesheet. Each item in the list is a CSSRule defining a single rule.
Each CSS rule is the selector, braces, and property declarations. We use the spread operator ...sheet.cssRules to take every rule out of the cssRules object and place it in finalArr. When we log the output of getCSSCustomPropIndex, we get a single-level array of CSSRule objects.
This gives us all the CSS rules for all the stylesheets. We want to discard some of those, so let’s move on.
Step 4: Discard any rules that aren’t basic style rules
CSS rules come in different types. CSS specs define each of the types with a constant name and integer. The most common type of rule is the CSSStyleRule. Another type of rule is the CSSMediaRule. We use those to define media queries, like @media (min-width: 400px) {}. Other types include CSSSupportsRule, CSSFontFaceRule, and CSSKeyframesRule. See the Type constants section of the MDN docs for CSSRule for the full list.
We’re only interested in rules where we define custom properties and, for the purposes in this post, we’ll focus on CSSStyleRule. That does leave out the CSSMediaRule rule type where it’s valid to define custom properties. We could use an approach that’s similar to what we’re using to extract custom properties in this demo, but we’ll exclude this specific rule type to limit the scope of the demo.
To narrow our focus to style rules, we’ll write another array filter:
const isStyleRule = (rule) => rule.type === 1;
Every CSSRule has a type property that returns the integer for that type constant. We use isStyleRule to filter sheet.cssRules.
One thing to note is that we are wrapping ...sheet.cssRules in brackets so we can use the array method filter.
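With the filter in place, the function looks something like:

const getCSSCustomPropIndex = () =>
  [...document.styleSheets].filter(isSameDomain).reduce(
    (finalArr, sheet) =>
      finalArr.concat([...sheet.cssRules].filter(isStyleRule)),
    []
  );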
Our stylesheet only had CSSStyleRules so the demo results are the same as before. If our stylesheet had media queries or font-face declarations, isStyleRule would discard them.
Step 5: Get the name and value of all properties
Now that we have the rules we want, we can get the properties that make them up. CSSStyleRule objects have a style property that is a CSSStyleDeclaration object. It’s made up of standard CSS properties, like color, font-family, and border-radius, plus custom properties. Let’s add that to our getCSSCustomPropIndex function so that it looks at every rule, building an array of arrays along the way:
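Here’s the shape of that step; props stays empty for the moment:

const getCSSCustomPropIndex = () =>
  [...document.styleSheets].filter(isSameDomain).reduce(
    (finalArr, sheet) =>
      finalArr.concat(
        [...sheet.cssRules].filter(isStyleRule).reduce((propValArr, rule) => {
          const props = []; // placeholder; we fill this in next
          return [...propValArr, ...props];
        }, [])
      ),
    []
  );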
If we invoke this now, we get an empty array. We have more work to do, but this lays the foundation. Because we want to end up with an array, we start with an empty array by using the accumulator, which is the second parameter of reduce. In the body of the reduce callback function, we have a placeholder variable, props, where we’ll gather the properties. The return statement combines the array from the previous iteration — the accumulator — with the current props array.
Right now, both are empty arrays. We need to use rule.style to populate props with an array for every property/value in the current rule:
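That could look something like this:

const props = [...rule.style].map((propName) => [
  propName.trim(),
  rule.style.getPropertyValue(propName).trim(),
]);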
rule.style is array-like, so we use the spread operator again to put each member of it into an array that we loop over with map. In the map callback, we return an array with two members. The first member is propName (which includes color, font-family, --color-accent, etc.). The second member is the value of each property. To get that, we use the getPropertyValue method of CSSStyleDeclaration. It takes a single parameter, the string name of the CSS property.
We use trim on both the name and value to make sure we don’t include any leading or trailing whitespace that sometimes gets left behind.
Now when we invoke getCSSCustomPropIndex, we get an array of arrays. Every child array contains a CSS property name and a value.
This is what we’re looking for! Well, almost. We’re getting every property in addition to custom properties. We need one more filter to remove those standard properties because all we want are the custom properties.
Step 6: Discard non-custom properties
To determine if a property is custom, we can look at the name. We know custom properties must start with two dashes (--). That’s unique in the CSS world, so we can use that to write a filter function:
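Adding that filter to the props chain gives us something like:

const props = [...rule.style]
  .map((propName) => [
    propName.trim(),
    rule.style.getPropertyValue(propName).trim(),
  ])
  .filter(([propName]) => propName.indexOf('--') === 0);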
In the function signature, we have ([propName]). There, we’re using array destructuring to access the first member of every child array in props. From there, we do an indexOf check on the name of the property. If -- is not at the beginning of the prop name, then we don’t include it in the props array.
When we log the result, we have the exact output we’re looking for: An array of arrays for every custom property and its value with no other properties.
Looking more toward the future, creating the property/value map doesn’t have to require so much code. There’s an alternative in the CSS Typed Object Model Level 1 draft that uses CSSStyleRule.styleMap. The styleMap property is an array-like object of every property/value of a CSS rule. We don’t have it yet, but if we did, we could shorten our code above by removing the map:
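A speculative version (assuming styleMap behaves like the map-like object the draft describes) might look like this:

const props = [...rule.styleMap.entries()].filter(
  ([propName]) => propName.indexOf('--') === 0
);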
At the time of this writing, Chrome and Edge have implementations of styleMap but no other major browsers do. Because styleMap is in a draft, there’s no guarantee that we’ll actually get it, and there’s no sense using it for this demo. Still, it’s fun to know it’s a future possibility!
We have the data structure we want. Now let’s use the data to display color swatches.
Step 7: Build HTML to display the color swatches
Getting the data into the exact shape we needed was the hard work. We need one more bit of JavaScript to render our beautiful color swatches. Instead of logging the output of getCSSCustomPropIndex, let’s store it in a variable.
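Putting it together might look something like this (everything inside the li beyond the b element is illustrative markup, not the demo’s exact HTML):

const cssCustomPropIndex = getCSSCustomPropIndex();

document.querySelector('.colors').innerHTML = cssCustomPropIndex.reduce(
  (str, [prop, val]) => `${str}<li class="color">
    <b class="color__swatch" style="--color: ${val}"></b>
    <span>${prop}: ${val}</span>
  </li>`,
  ''
);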
We use reduce to iterate over the custom prop index and build a single HTML-looking string for innerHTML. But reduce isn’t the only way to do this. We could use a map and join or forEach. Any method of building the string will work here. This is just my preferred way to do it.
I want to highlight a couple specific bits of code. In the reduce callback signature, we’re using array destructuring again with [prop, val], this time to access both members of each child array. We then use the prop and val variables in the body of the function.
To show the example of each color, we use a b element with an inline style:
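For example, with our accent color:

<b class="color__swatch" style="--color: #00eb9b"></b>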
But how does that set a background color? In the full CSS we use the custom property --color as the value of background-color for each .color__swatch. Because the inline style sets --color directly on the b element, the var(--color) in that rule resolves to the value we set there.
.color__swatch {
background-color: var(--color);
/* other properties */
}
We now have an HTML display of color swatches representing our CSS custom properties!
This demo focuses on colors, but the technique isn’t limited to custom color props. There’s no reason we couldn’t expand this approach to generate other sections of a pattern library, like fonts, spacing, grid settings, etc. Anything that might be stored as a custom property can be displayed on a page automatically using this technique.
Here’s a fantastic case study where Ivan Akulov looks at the rather popular writing app Notion and how the team might improve the performance in a variety of ways; through code splitting, removing unused vendor code, module concatenation, and deferring JavaScript execution. Not so long ago, we made a list for getting started with web performance but this article goes so much further into the app side of things: making sure that users are loading only the JavaScript that they need, and doing that as quickly as possible.
I love that this piece doesn’t feel like dunking on the Notion team or bragging about how Ivan might do things better. There’s always room for improvement, and constructive feedback is better than guilting someone into it. Yay for making things fast while being nice about it!
When it comes to adopting a new software product for a project, there are many choices one needs to make. A lot of factors need to be kept in mind, depending on what the new software will be expected to do. One question that needs to be answered before you can make a decision is, “Between open source and proprietary software, which one would be better for me?”
To choose one over the other, you would want to see a thorough comparison of both options. However, in order to choose the perfect software solution for your project, you need to know exactly what the terms ‘open source’ and ‘proprietary’ imply. Once you are clear on that, we will give you a simple yet comprehensive ‘open source vs proprietary software’ comparison, detailing the pros and cons of each.
So let’s begin!
What Is Open Source Software?
There are often thousands – even millions! – of lines of code that go into developing an average computer program. That code is not always accessible to the users, but when it is, the software is said to be open source.
In simple words, open source software is software that allows its users to access the “behind the scenes” source code. Users are also free to make any changes they feel necessary before they start using the software. Open source licenses also allow unrestricted distribution of the code for all purposes, and the software is often provided free of cost.
What Is Proprietary Software?
As we mentioned in the previous section, not all software developers allow users to access the code for their software. Proprietary software is the property of the creator and is legally owned by them.
Proprietary software offers restricted insight into its technical workings. The code is hidden, and the software itself also often has to be bought. There are very strict conditions regarding its usage, violating which may even land you in some serious legal trouble! Often, the distribution of this kind of software is also prohibited.
Comparison of Open Source vs Proprietary Software
Now that we know what each category of software entails, we move on to the actual purpose of our discussion. Let us compare the various factors that the entire “open source vs proprietary software” debate depends on.
Ownership
Open source software may be owned by one entity or be freely available to the developer community. This community is then responsible for the evolution of the code. There is no restriction regarding its usage or distribution as long as the conditions documented in the open source license are being met.
Ownership works differently for bespoke proprietary software. For example, if you ask GoodCore, a well-known software development company, to build a dedicated software product for your organisation, you will be the sole owner of that product, and that ownership will not pass on to anyone else. The customers to whom you license the product will not own the code, but they will have the right to use the software as they please (provided they do not violate the terms of the end-user agreement, of course!). You can also offer additional services as part of that license.
Independence
With open source software, you have the freedom to work with fellow coders from the developer community. You can pick and choose the parts of the software that you want to modify for your personal use. However, it is a very taxing process.
In the case of proprietary software, you can rely on the software provider for everything, from development to support. But you will not have the freedom to make any modifications to the software. On top of that, you might end up facing a vendor lock-in situation. (For example, say you purchase a license for web application development services provided by a company, X. After a few years, company X decides to increase its fees. You are stuck with X, because switching to a different vendor would cost an arm and a leg!)
Ease of Use
With increased independence, open source developers sometimes get carried away, causing the project to become unnecessarily complex or cluttered. This makes the software more developer-friendly but less user-friendly, making it especially harder to use for people with little or no technical knowledge.
Proprietary software, in contrast, is developed specifically with end-users in mind. A lot of thought and effort goes into its development so that it is easy and intuitive to use. No matter how complex the logic behind the software’s code, the interface is tried and tested and kept as simple as possible to provide maximum usability.
Security
When you are picking a new software application to be implemented for a special purpose, you need it to be highly safe and secure. Thus, we must address the security concerns surrounding open source vs proprietary software.
Making the entire source code for an application publicly available opens up a plethora of vulnerabilities. The purpose of sharing code is to allow the community to test the code for bugs and potential security risks so that it can be improved further. This greatly improves the quality of the software. However, some people may take advantage of the situation and exploit open source systems on the basis of that code.
On the other hand, proprietary software has all its code hidden. This prevents the infrastructure of the software from being exposed to cybercriminals. However, this does not make the software completely immune from security risks; and you cannot even check because you cannot see the code! On top of that, you would have to blindly trust your software provider. Would you be willing to do that?
Updates, Support, and Maintenance
Often, users don’t wish to update their software once a stable release has been properly implemented. Open source software does not force updates on you, but keeping it updated does require constant time and effort. If you run into a problem, you can always ask fellow developers in the community, and they will happily help you out; after all, this was one of the major reasons the concept of open source software was introduced in the first place! But what will you do if you invest years of effort into a particular open source project only to see it abandoned? Depending on open source community support therefore carries some level of risk and uncertainty.
This problem is often avoided with proprietary software. Most licenses come with packages for after-sales support, regular updates, and maintenance from the vendor’s side. Here’s the catch: the vendor may not care whether you want a particular update implemented or not!
Costs
For those of you looking for a cost-effective software solution to implement in your organisation, this is a crucial point of discussion in the open source vs proprietary debate.
You need to think of the costs of each type of software as a trade-off between two key resources: time and money.
Open source software is usually free to use, and if not, the fees are often minimal. However, as we discussed before, it may need some time to reach a stable build. There are issues regarding modifications, upgrades, and maintenance, all of which take time even though they are free of cost.
While some proprietary software apps are free, most incur heavy costs in the form of license fees. However, once you pay the fees, you can take advantage of the many services that the vendor may offer as part of the software package. That makes proprietary software a great option when time is short.
Wrapping up
Proprietary software would be a great pick if you don’t have the time to experiment with new technologies, or if you have the time and money to invest in a license that requires minimal input and effort from your side.
However, if you possess sound development knowledge and wish to work with a much more flexible, customisable, and cost-effective software application that you can modify according to your needs, open source software would be best for you.
Now that you are aware of the exact differences between open source and proprietary software, you will find it easier to make your decision. Your choice should depend not only on the features of each type of software but also on how they fit into your particular situation.
Artificial Intelligence is one of the biggest technological innovations of recent years, and it keeps growing. The world has witnessed the power of this technology, which is now used in almost every sphere of life. From Siri to Google Assistant, we have it right at our fingertips.
Technology like Artificial Intelligence exemplifies perfection and unlocks new possibilities that everyone can reach seamlessly. Today we live in a world that is technologically driven and evolves every single day. The integration of AI into app development has been practised for a long time now and has yielded impressive results.
AI is a reality, which means it is present everywhere. Trending technologies like Augmented Reality, Virtual Reality, and chatbots have Artificial Intelligence deeply embedded in them.
When it comes to perfection, AI is widely chosen. The ability of this technology to be accurate is striking. Not only this, but AI also sets a fast pace for everything.
When amalgamated with mobile app technology, AI can spike rates of customer satisfaction and serve them more efficiently.
According to Forbes, Sales & Marketing prioritize Machine Learning and AI more than any other department in the industry.
Mobile application development has been completely transformed by the introduction of this technology, which offers an ample range of services to users. Needless to say, the way user data is extracted has completely changed over time, and people can now personalize mobile applications to their needs. Let’s take a closer look at how artificial intelligence can be integrated into mobile applications.
How To Incorporate AI In App Development
Applying AI in app development can make next-level mobile applications. Let’s take a look at how to wisely use AI:
Understanding The Risks and Demands
There are certain factors that need to be taken care of during app development, and knowing which problems can be solved through AI will allow the developer to implement the technology in a much better way. Once the complexities have been ruled out, adding this technology will be easy. But if AI is forced into almost every feature, the technology will lose its essence completely.
APIs Are Not Everything
APIs cannot solve every problem, and the more you lean on them alone, the vaguer the results become. Using AI, however, can improve API design: with this technology, building and processing APIs becomes much easier, and it changes API design for good.
Know What Your Data Is All About
This phase lets you analyze the kind of data your AI will be putting in front of users. Having a keen understanding of where your data actually comes from will help you refine its quality. With this, your AI will be clean, informative, and interactive.
Data Scientists To The Rescue
As mentioned earlier, data refinement is an important task in order to have your AI module clean. This process can become less cumbersome if you take the help of a data scientist. A data scientist can level up the whole app development process using AI.
The future of AI is bright and the AI software market will be dominating the future of app development as well. AI has the capability to design UX completely.
There are certain benefits to having artificial intelligence in mobile app development. The first thing to do is determine which problems AI will be able to address once it is integrated into app development. While this technology brings certain challenges, they can be tackled with ease and kept under control.
For Added Mobile App Security
The authentication process can be brisk if AI technology is integrated. If this process is strengthened, the security of the app will be well maintained. It is estimated that AI will be able to bring down the number of security breaches and keep data secure. And it is not limited to this: with AI, security can be personalized, accompanied by various suggestions during the coding phase of the app.
Relevant Ads Appearance
Ads can be a headache, but with AI they are not, as this technology ensures that the ads appearing on your screen are highly relevant and not out of the blue. This is achieved by understanding user behavior and curating ads according to the retrieved data. This way, companies can stay focused on understanding needs, and customers will be satisfied too.
Seamless Searching Process
If you don’t want to waste much of your time searching for things on the internet, an AI-driven mobile application can be a feasible option. With this technology, the search experience is enhanced and works on intuition, which saves a lot of time and lets you search for things on the go. With search engines like Google, for example, all you need to type is one letter and predictive suggestions are displayed according to your search behavior.
Digital Assistance
Digital assistants are available round the clock, answering your general queries right on time and without delay. These digital assistants often come in the form of chatbots, which are clearly leading customer care services. Chatbots have brought massive changes to the customer experience, and satisfaction scores have certainly gone up since the emergence of this technology.
Summing Up!
Several technologies have geared up to set a benchmark, but artificial intelligence has clearly left the biggest mark by providing facilities that took industries by storm. This technology is not only serving the mobile app development sector but also other major sectors like healthcare, e-commerce, education, manufacturing, and so forth.
As smartphones and mobile applications embrace such technologies, users too are delighted with how easy it is to manage everything right in front of their eyes. With strong features like swift data collection, advanced search options, and a more personalized approach, AI has made life easier. And it’s not only users who benefit: mobile app development organizations have also noticed faster development rates and easier integration.
For quite a long time, prescription information has been shared electronically via e-mail, text message, or fax, although the legal script has remained a piece of paper signed by a healthcare provider.
Following the global trend toward workflow automation and digitalization, the prescription routine is changing as well: the legal document becomes prescription data that is stored on secure online servers and can be accessed at any time from any device. All the user needs is credentials and permission from the patient, who assigns and manages the roles on the account. The signed piece of paper is no longer needed. For now, the new electronic type of prescription is optional, and patients may also choose to get a paper copy if they want.
Over the last few years, there has been a significant increase in demand for more advanced EHR services, resulting in many new, highly innovative healthcare IT (HIT) businesses. HIT organizations offering these solutions are under immense pressure both to satisfy the unique needs of their clients and to provide the key features that consumers demand from any electronic health record (EHR) – all while tightly monitoring costs.
Commissions and affordability with ePrescribing:
Hundreds of participants in the data exchange and integration business provide prescribers with the full set of data functions.
Have your way: integrate via Web API (SSO) to reduce maintenance and upgrades, or choose full integration for maximum control with your own BUI.
Known leader: two decades of innovation in electronic pharmacy; automated updates help you meet continuously changing regulatory requirements with confidence.
Training: 200,000 ePrescribe users connect through a wide variety of easy-to-use apps, backed by $2B of total R&D since 2003 sponsored by a top-10 HIT group.
Self-developed e-prescribing vs. “plug-and-play” with Veradigm ePrescribe: building e-prescribing yourself is expensive, time-consuming, has a steep learning curve, pulls focus away from your core capability, and requires constant updating for new mandates. Veradigm ePrescribe, by contrast, is straightforward and cost-effective, keeps your focus where it needs to be, draws on experience that pre-dates EMRs, and its automatic updates keep you ahead of regulatory changes.
A Moving Target
E-prescribing gives a healthcare provider the capability to electronically transmit an original prescription straight to a pharmacy. Automating this process provides many benefits: the document is free of the manual mistakes a handwritten copy might contain, it is easier to read, and it reaches the pharmacy immediately, which improves the quality of patient care. The patient knows whether the drug is currently available at the chosen location, and the prescription is ready for pickup by the time they get to the pharmacy.
Electronic pharmacy regulatory requirements keep getting tighter, so it pays to handle updates efficiently through a professional HIT service. A constantly changing environment, especially given the risk of errors, can be a virtual nightmare. Healthcare providers expect more than just paperless prescriptions from today’s prescribing options, and with its diverse collection of advanced features, Veradigm ePrescribe helps fulfill that expectation.
Veradigm gives users access to discounted health plan or pharmacy benefit manager pricing, cash pricing, therapeutic alternatives, and competitive prices across different pharmacies, providing prescription price transparency at the point of care. All information is patient-specific and available within the e-prescribing workflow. Hundreds of thousands of vendors will use it for hundreds of millions of transactions, bringing transparency to the market.
HCP interviewees indicated that the feature saves patients 93 percent of the time spent estimating prices and/or treatment-based alternatives for their prescriptions. For 85 percent of HCP respondents, the ability to include insurance and pricing data had a positive effect on patient satisfaction with their practice.
PATIENT FINANCIAL ASSISTANCE
The workflow authorization feature is specifically concerned with the automated delivery of program information. It enables users to identify financial-aid programs available for particular patients within their ordinary workflows, provides quick and easy access to available prescription coupons (which can be printed), and helps decrease patients’ out-of-pocket costs while improving compliance and adherence.
Veradigm Electronic Prior Authorization (EPA) allows your users to automate the submission of authorization requests within the e-prescribing workflow, reduces the wait time for additional insurance approvals on medications from days or weeks to minutes, and lets staff spend less time on the phone with insurance companies and pharmacies.
EPCS AND PDMP
Take proactive steps to help address opioid addiction through EPCS and PDMP (Prescription Drug Monitoring Program) integration. It allows you to transmit patient prescriptions directly and remotely to their preferred pharmacy, enhances patient safety by reducing unsafe dosages, doctor shopping, and potential abuse, improves patient satisfaction with an efficient form-enforcement process, and helps ensure compliance with state PDMP regulations.
Veradigm ePrescribe Mobile lets users access e-prescribing from anywhere with an internet connection, delivers a single workflow across desktop and mobile devices, and makes it possible to move through patient files effectively and in a timely manner. It also provides efficient real-time access through one-touch user authentication.
How to get started
You can register for ePrescribe by visiting our registration website. Your DEA license number, NPI number, state license number, and their expiration dates are required. To verify your identity and license information, you will be asked to answer several questions. Once registration is complete, you will be sent an email asking you to review your DEA details, enter the last four digits of your Social Security number, and answer your secret question. You will then be asked to choose a username and password for your practice site. At that point you can sign in to ePrescribe, add other clinical staff or providers to your account, and start prescribing right away.
E-prescribing has proved its significant role in decreasing prescription errors and enhancing the safety and accessibility of patient information. The United States government views electronic medical records as one of the key factors in the successful creation of a national health information system.