Archive for July, 2020

CSS News July 2020

July 7th, 2020 No comments

Rachel Andrew

Things move a lot faster than they used to in terms of the implementation of Web Platform features, and this post is a round-up of news about CSS features that are making their way into the platform. If you are the sort of person who doesn’t like reading about things if you can’t use them now, then this article probably isn’t for you — we have many others for you to enjoy instead! However, if you like to know what is on the way and read more about the things you can play with in a beta version of a browser, read on!

Flexbox Gaps

Let’s start with something that is implemented in the shipping version of one browser, and in beta in another. In CSS Grid, we can use the gap, column-gap, and row-gap properties to define gaps between rows, between columns, or both at once. The column-gap feature also appears in Multi-column Layout to create gaps between columns.
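As a quick sketch (the class name is my own), the grid versions of these properties look like this:

.grid {
  display: grid;
  grid-template-columns: repeat(3, 1fr);
  row-gap: 20px;    /* space between rows */
  column-gap: 10px; /* space between columns */
  /* or the shorthand: gap: 20px 10px; */
}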

While you can use margins to space out grid items, the nice thing about the gap feature is that you only get gaps between your items; you do not end up with additional space to account for at the start and end of the grid. In Flexbox, adding margins has typically been how we create space between items, and if we do not want that space at the start and end, we have to use a negative margin on the container to remove it.

It would be really nice to have that gap feature in Flexbox as well, wouldn’t it? The good news is that we do: it’s already implemented in Firefox and is in the Beta version of Chrome.

In the next CodePen, you can see all three options. The first shows flex items using margins on each side. This creates a gap at the start and end of the flex container. The second uses a negative margin on the flex container to pull that margin outside of the border. The third dispenses with margins altogether and instead uses gap: 20px, creating a gap between items but not on the start and end edges.
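In code, the three approaches look roughly like this (a sketch with made-up class names):

/* 1. Margins on each item create space everywhere,
      including at the container's edges */
.flex-margins > * {
  margin: 10px;
}

/* 2. A negative margin on the container pulls
      the edge space back outside the border */
.flex-margins {
  margin: -10px;
}

/* 3. gap creates space between items only */
.flex-gap {
  display: flex;
  gap: 20px;
}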

See the Pen Flex Items with margins and negative margin, and the gap feature by Rachel Andrew (@rachelandrew) on CodePen.

Mind The Gap

The Flexbox gap implementation highlights a few interesting things. Firstly, you may well remember that when the gap feature was first introduced to Grid Layout, the properties were:

  • grid-gap
  • grid-row-gap
  • grid-column-gap

These properties shipped when Grid first appeared in browsers. However, in much the same way as the alignment features (justify-content, align-content, align-items, and so on) first appeared in Flexbox and then became available to Grid, the gap properties were moved and renamed once it was decided they were useful to more than just Grid.

Along with those alignment features, the gap properties are now in the Box Alignment specification. The specification deals with alignment and space distribution so is a natural home for them. To prevent us from having multiple properties prefixed with a spec name, they were also renamed to drop the grid- prefix.

If you have the grid- prefixed versions in your code, you don’t need to worry. They have been kept as an alias of the properties so your code won’t break. For new projects, however, the unprefixed versions are implemented in all browsers.
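For example, these two declarations behave identically; the first is simply an alias of the second:

.grid {
  display: grid;
  grid-gap: 20px; /* legacy alias, kept so old code doesn't break */
  gap: 20px;      /* the current, unprefixed property */
}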

Detecting Gap Support For Flexbox

You might be thinking that you could use the gap feature in Flexbox and use feature queries to test for support, falling back to margins where it isn’t supported. Sadly, this isn’t the case, because feature queries test for a property name and value. For example, if I want to test for grid support, I can use the following query:

@supports (display: grid) {
  .grid {
    /* grid layout code here */
  }
}

If I were to test for gap: 20px, however, I would get a positive response in Chrome, which currently does not support gap in Flexbox but does support it in Grid. All feature queries do is check whether the browser recognizes the property and value; they have no way to test for support within a particular layout mode. I raised this as an issue with the CSS WG; however, it turns out not to be an easy thing to fix, and there are currently only a limited number of places where we have this partial-implementation problem.
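To make the problem concrete, here is a sketch of a query that matches in Chrome even though Chrome does not yet support gaps in Flexbox, simply because it recognizes the property and value from its Grid support:

@supports (gap: 20px) {
  .flex-container {
    /* This block runs in Chrome, but the gap itself
       has no effect in a flex container there */
    gap: 20px;
  }
}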

Aspect Ratio Unit

Some things have an aspect ratio that we want to preserve: an image or a video, for example. If you place an image or video directly on the page using the HTML img or video element, then it nicely keeps the aspect ratio it arrives with (unless you forcibly change the width or height). However, we sometimes want to add an element that has no intrinsic aspect ratio while keeping one dimension flexible and having the other maintain a specific ratio to it. This most often happens when we embed a video with an iframe, but you might also want to make perfectly square areas on your grid (something that also requires one dimension to react to the other).

The way we currently deal with this is by way of the padding hack. This uses the fact that, when specified as a percentage, padding in the block direction is calculated from the inline size. It’s not a very elegant solution to the problem, but it works.
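For a 16:9 video embed, the padding hack looks something like this (a sketch):

.video-wrapper {
  position: relative;
  /* 9 ÷ 16 = 0.5625, so the height becomes 56.25% of the width */
  padding-bottom: 56.25%;
}

.video-wrapper iframe {
  position: absolute;
  top: 0;
  left: 0;
  width: 100%;
  height: 100%;
}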

The aspect ratio unit seeks to solve that by allowing us to specify an aspect ratio for a length. Chrome has implemented this in Canary, so you can take a look at the demo below using Canary if you enable the Experimental Web Platform Features flag.

I have created a grid layout and set my grid items to use a 1 / 1 aspect ratio. The width of the items is determined by their grid column track size (and is flexible). The height is then copied from that width to make a square. Just for fun, I then rotated the items.

A grid of square items

In Canary, you can take a look at the demo and see how the items remain square even as their track grows and shrinks, because the block size uses a 1 / 1 ratio of the inline size.
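The syntax in Canary at the time of writing looks like the following sketch, using the aspect-ratio property behind the flag mentioned above:

.item {
  /* The inline size comes from the grid track;
     the block size is computed to keep a 1 / 1 ratio */
  aspect-ratio: 1 / 1;
}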

See the Pen Grid using the aspect ratio for items (needs Chrome Canary and Exp Web Platform Features flag) by Rachel Andrew (@rachelandrew) on CodePen.

Native Masonry Support

Developers often ask if CSS Grid can be used to create a Masonry- or Pinterest-styled layout. While some demos look a bit like that, Grid was never designed to do Masonry.

To explain, you need to know what a Masonry layout is. In a typical Masonry layout, items display by row. Once the first row is filled, new items populate another row. However, if some of the items in the first row are shorter than others, the second-row items will rise up to fill the gap. The Masonry library is how many people achieve this using JavaScript.

If you try to create this layout using CSS Grid and auto-placement, you will see that you lose that block direction rearrangement of items. They lay themselves out in strict rows and columns because that is what a grid does.

So could grid ever be used as a Masonry layout? One of the engineers at Mozilla thinks so, and has created a prototype of the functionality. You can test it out by using Firefox Nightly with the flag layout.css.grid-template-masonry-value.enabled set to true by going to about:config in the Firefox Nightly URL bar.
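The syntax in the prototype looks like this (a sketch based on the Firefox Nightly experiment):

.masonry {
  display: grid;
  grid-template-columns: repeat(auto-fill, minmax(160px, 1fr));
  /* The experimental value: rows pack masonry-style
     instead of forming strict tracks */
  grid-template-rows: masonry;
}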

Masonry Layout in Firefox Nightly

See the Pen Proposed Masonry (needs Firefox Nightly and the flag mentioned above) by Rachel Andrew (@rachelandrew) on CodePen.

While this is very exciting for anyone who has had to create this kind of layout using JavaScript, a number of us do wonder if the grid specification is the place to define this very specific layout. You can read some of my thoughts in my article “Does Masonry Belong In The CSS Grid Specification?”.

Subgrid

We have had support for the subgrid value of grid-template-columns and grid-template-rows in Firefox for some time. Using this value means that you can inherit the size and number of tracks from a parent grid down through child grids. Essentially, as long as a grid item has display: grid, it can inherit the tracks that it covers rather than creating new column or row tracks.
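In code, that looks something like this sketch:

.parent {
  display: grid;
  grid-template-columns: repeat(3, 1fr);
}

.child {
  grid-column: 1 / 4; /* span the parent's three column tracks */
  display: grid;
  /* Reuse the parent's column tracks instead of creating new ones */
  grid-template-columns: subgrid;
}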

The feature can be tested in Firefox, and I have lots of examples that you can test out. The article “Digging Into The Display Property: Grids All The Way Down” explains how subgrid differs from nested grids, and “CSS Grid Level 2: Here Comes Subgrid” introduces the specification. I also have a set of broken-down examples at “Grid by Example”.

However, the first question people have when I talk about subgrid is, “When will it be available in Chrome?” I still can’t give you a when, but some good news is on the horizon. On June 18th, a Chromium blog post announced that the Microsoft Edge team (now working on Chromium) is working to reimplement Grid Layout in LayoutNG, Chromium’s next-generation layout engine. Part of this work will also involve adding subgrid support.

Adding features to browsers isn’t a quick process, however, the Microsoft team brought us Grid Layout in the first place — along with the early prefixed implementation that shipped in IE10. So this is great news and I look forward to being able to test the implementation when it ships in Beta.

prefers-reduced-data

Not yet implemented in any browser — but with a bug listed for Chrome showing recent activity — is the prefers-reduced-data media feature. This will allow CSS to check whether the visitor has enabled data saving on their device and adjust the website accordingly. You might, for example, choose to avoid loading large images.

@media (prefers-reduced-data: reduce) {
  .image {
    background-image: url("images/placeholder.jpg");
  }
}

The prefers-reduced-data media feature works in the same way as some of the user-preference media features already implemented from the Level 5 Media Queries specification. For example, the prefers-reduced-motion and prefers-color-scheme media features allow you to test whether the visitor has requested reduced motion or a dark color scheme in their operating system, and tailor your CSS to suit.
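For instance, a dark mode override might look like this minimal sketch:

@media (prefers-color-scheme: dark) {
  body {
    background-color: #1a1a1a;
    color: #efefef;
  }
}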

::marker

The ::marker pseudo-element allows us to target the list marker. At a very straightforward level, this means that we can target the list bullet and change its color or size. (This was previously impossible due to the fact that you could only target the entire list item — text and marker.)
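For example, a sketch:

li::marker {
  color: hotpink;
  font-size: 1.2em;
}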

See the Pen ::marker by Rachel Andrew (@rachelandrew) on CodePen.

Support for ::marker is already available in Firefox, and can now be found in Chrome Beta, too.

In addition to styling bullets on actual lists, you can use ::marker on other elements. In the example below, I have a heading which has been given display: list-item and therefore has a marker which I have replaced with an emoji.
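The technique looks roughly like this (a sketch; note that at the time of writing, support for the content property inside ::marker varies between browsers):

h2 {
  display: list-item;
}

h2::marker {
  content: "✨"; /* replace the default bullet with an emoji */
}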

See the Pen display: list-item and ::marker by Rachel Andrew (@rachelandrew) on CodePen.

Note: You can read more about ::marker and other list-related things in my article “CSS Lists, Markers and Counters” here on Smashing Magazine.

Please Test New Features

While it’s fun to have a little peek into what is coming up, I recommend testing out the implementations if you have a use case for any of these things. You may well find a bug or something that doesn’t work as you expect. Browser vendors and the CSS Working Group would love to know. If you think you have found a bug in a browser — perhaps you are testing ::marker in Chrome and find it displays differently to the implementation in Firefox — then raise an issue with the browser: “How To File A Good Bug” explains how. If you think that the specification could do something it doesn’t yet, then raise an issue against the specification over at the CSS Working Group GitHub repository.

10 Best Places to Sell Your Artwork Online

July 7th, 2020 No comments

Are you an artist wondering how to kick-start selling your artwork online? If so, this post is for you.

Artists and designers like you are always looking for ways to sell their work and earn a living, and nothing beats the online avenue, which lets you showcase your art to customers around the globe.

As a designer, you may be looking to establish your own graphic design business. But that is a time-consuming process, and you can’t achieve it overnight. Until then, you can make a good passive income by selling your artwork and designs online.

But which online platform is the best?

There are plenty of marketplaces out there that allow artists to open an account, list their products, and start selling. The overwhelming number of options can make it hard to decide which platform to go with. That’s why we’ve shortlisted a few marketplaces based on comprehensive research.

Top 10 best places to sell your artwork online

1. PrintShop By Designhill

PrintShop by Designhill bills itself as the world’s #1 creative marketplace, bringing artists and graphic designers together with design seekers. While artists can use the marketplace to sell their designs on over 50 unique products, business owners and individuals can explore a plethora of designs created by designers across the world. You get an opportunity to showcase your artwork to millions of visitors.

Just open an account with the platform and start selling your designs.

2. Etsy

Etsy is like a craft fair. It’s another globally recognized online platform where you can sell almost all sorts of designs, though the site is more focused on handmade products and vintage items. Since the platform has millions of registered shoppers, the odds of getting reasonable prices for your designs are high.

You can list your artwork at $0.20 per piece for four months. Along with millions of shoppers, the platform also has a massive number of artists, creating fierce competition. This means only high-quality designs get noticed and sold, so list your work only if you’re confident it is outstanding.

3. Redbubble

Redbubble is another reputable print-on-demand platform for independent artists. It allows you to set your own profit margin, which means you decide your earnings. The platform is also home to many artist groups that serve as a source of inspiration for designers. You can register to sell your artwork and earn passive income online.

Signing up for the platform is free. It gives you full control over which products your designs are sold on, and you set your price on top of the base price for each product.

4. Creative Market

Creative Market offers a robust marketplace on which to put your designs. As a graphic designer, you may be creating fonts, templates, themes, or clip art; this platform allows you to showcase them all to millions of visitors from different corners of the world.

The main advantage of selling your artwork here is that you won’t be bound by any lock-in period. You’re free to set prices for your designs on your own and can make up to 70% of the sale price.

Also, the site doesn’t ask you for any exclusivity, which means you’re free to sell your designs elsewhere as well.

5. DeviantArt

With over 47 million users, DeviantArt claims to be the most extensive online art gallery and community. The company is around two decades old and has built a good rapport by hosting a wide range of designs from graphic designers worldwide. The platform allows artists to host their artwork in their own personal gallery for free.

The website also helps designers monetize their designs through DeviantArt Prints, provided DeviantArt approves your art. You can offer your designs on a variety of products such as canvases, fridge magnets, calendars, postcards, and more.

As a free user, you can’t set the price on these items and can expect to earn around 16% of the product’s selling price. But if you become a Core member by paying $15 quarterly or $50 annually, you can set prices on your own, which means you can maximize your commission.

6. Threadless

Threadless is a free site. You can open an online store at no cost and sell your art to millions of potential customers across the world, and you’ll find a community of like-minded artists there. You can also put your designs up for public review; if your art gets a favorable vote from the majority of the community, the platform will help you promote your work.

7. Zazzle

Zazzle is also a great online marketplace where you can sell your designs to a wide range of audiences.

The platform allows you to open an online store for free and access its top-notch tools to sell all sorts of designs on products such as business cards, stamps, tote bags, t-shirts, and many more.

8. Society6

Society6 has become one of the most prominent marketplaces for artists today and has nurtured thousands of global artists to date.

The platform allows you to open an account for free, and uploading your artwork in the correct resolution is seamless. It offers framed prints, fine art prints, and stretched canvases in traditional art formats.

Society6 also provides plenty of resources to help you price and market your work competitively. You get complete freedom to set your margin per sale, and the site releases payments on the first of every month.

9. Shopify

Shopify is an e-commerce platform that you can integrate with your own website. If you own a website but are unable to sell products through it, integrating Shopify can boost sales. The platform boasts over half a million participating stores representing nearly $82 billion in sales over the last 14 years.

The site charges for using its platform, with pricing starting at $29/month. However, you can use its 14-day free trial before you become a regular member. Your membership covers your own store, a free SSL certificate, unlimited products, sales channels, and more.

And if your business grows, you can upgrade to the $79/month or $299/month plan for larger-scale operations.

10. Fiverr

Initially launched for freelancers to sell their services, Fiverr has also become a popular platform for selling graphic design ideas, skills, and expertise. The site allows you to engage directly with prospective buyers of design services and ideas, so you can get the prices you want for your artwork.

Final thought

Online stores are great places to sell artwork, and this list of platforms should help you start selling yours online. Since each site offers different profit margins, analyze them all and open a store with the one that provides the best margins and services for you.

The benefits of Messenger-Based Sales

July 7th, 2020 No comments

Although messengers can be considered newcomers to the business scene, they already offer new opportunities for brands. They have a broad user base and regularly roll out new features, and even dedicated apps that support doing business, such as WhatsApp Business. It’s no surprise that these newly available tools and platforms have been triggering new business approaches, and one of them is Messenger-Based Sales.

Let’s define Messenger-Based Sales: an approach that prioritizes messengers as the primary communication channel between a buyer and a business throughout the sales process. Simply put, Messenger-Based Sales aims to produce a convenient and seamless buying journey via chat apps, the communication platforms that customers use most.

It’s no surprise: it’s much easier to open an app that is already installed on your phone to send a message than to write an email or make a phone call. We’re also becoming more and more casual, favoring short bursts of messages over formal emails or sometimes even face-to-face conversations.

Let’s dive deeper into the benefits of Messenger-Based Sales and understand what makes it convenient for customers and businesses.

Why Messenger-Based Sales?

Asynchronous communication

One of the reasons messaging apps are convenient is asynchronous communication. Unlike synchronous methods such as phone calls or live chat, messengers allow us to respond in our own time, removing the pressure and the expectation of having to answer immediately.

Emoji, stickers & GIFs

Messengers can also be more interactive than other platforms. For example, adding an emoji to a chat can convey emotion and an animated GIF can make the conversation more entertaining.

No wonder Facebook Messenger has even introduced larger-sized emoji: people love visual messages so much that they send over 900 million emoji-only messages on Facebook Messenger alone every day.

Automated responses

To save time and keep your audience engaged, messaging apps offer automated responses, also known as auto replies or instant replies. This means that when someone starts chatting with a company on a messenger, the company can serve customized automatic answers to provide or gather information, such as the company’s location or contact details.

Chat history

Another advantage of communicating with your potential and existing customers via messengers is that the chat history is always available. It’s easy to miss something when you’re talking on the phone, and you may lose the details of the conversation; live chats don’t always save your conversation history either. By using messengers, on the other hand, you know who you’re talking to, you can access and review conversations with your clients, and you can retrieve files and documents that you’ve shared.

Suitable for B2C & B2B buyers

Not only can messaging apps increase sales for B2C companies by offering advice, answering efficiently, and offering options to submit inquiries, they’re also proving to be useful for B2B businesses.

Think about it: if people like chatting with each other, wouldn’t they like to connect with a B2B company just as easily, especially in a purchase process as lengthy as B2B?

Moreover, B2B buyers demand that their buying journey be intuitive, secure, and accessible. Research shows that 70% of customers say that convenience is essential.

High open rate & engagement

Although email does better on open rate and engagement than a phone call, statistics show that mailing-list open rates are still low, with just 2.4% of the audience following the links inside the emails. In addition, many emails end up in spam folders and never get read.

It’s no surprise that today people are far more likely to view and click a message in a messaging app. Firstly, the texts are usually short and thus much faster to read. Secondly, we get notified about them right on our screen. Last but not least, conversations via chat apps tend to be lighter and less formal.

Convenient for marketing, sales & customer support

Nowadays, customers expect immediate responses — they want companies to get back to them as quickly as possible. And although messengers haven’t yet been adopted by the majority of businesses across the globe, they have already started to gain traction in various departments, including marketing, sales, and customer support.

Marketing team

Whether it’s sending out polls, delivering the latest blog posts or running ads, messengers have been used as marketing automation tools. Many companies already employ messaging apps for marketing purposes.

Sales team

With messengers, the sales team can carry the conversation throughout their entire pipeline. Payment can be made directly from the chat.

Support team

With messaging apps, the support team can do a much better job. Messengers are much quicker than email, involve fewer formalities, and offer asynchronous communication and multimedia exchange. Additionally, by creating a chatbot that answers frequently asked questions and replies to many users simultaneously, 24/7, sales and support teams can save time and focus on more important work.

Bootstrap 5

July 6th, 2020 No comments

It’s always notable when the world’s biggest CSS framework goes up a major version (it’s in alpha now).

It has dropped jQuery and IE support, started using some CSS custom properties, shipped fully custom form elements, started to embrace utility classes, and includes a massive icon set you can use via SVG sprite. Sweet.

Direct Link to ArticlePermalink

The post Bootstrap 5 appeared first on CSS-Tricks.

Building Serverless GraphQL API in Node with Express and Netlify

July 6th, 2020 No comments

I’ve always wanted to build an API, but was scared away by just how complicated things looked. I’d read a lot of tutorials that start with “first, install this library and this library and this library” without explaining why that was important. I’m kind of a Luddite when it comes to these things.

Well, I recently rolled up my sleeves and got my hands dirty. I wanted to build and deploy a simple read-only API, and goshdarnit, I wasn’t going to let some scary dependency lists and fancy cutting-edge services stop me¹.

What I discovered is that underneath many of the tutorials and projects out there is a small, easy-to-understand set of tools and techniques. In less than an hour and with only 30 lines of code, I believe anyone can write and deploy their very own read-only API. You don’t have to be a senior full-stack engineer — a basic grasp of JavaScript and some experience with npm is all you need.

At the end of this article you’ll be able to deploy your very own API without the headache of managing a server. I’ll list out each dependency and explain why we’re incorporating it. I’ll also give you an intro to some of the newer concepts involved, and provide links to resources to go deeper.

Let’s get started!

A rundown of the API concepts

There are a couple of common ways to work with APIs. But let’s begin by (super briefly) explaining what an API is all about: reading and updating data.

Over the past 20 years, some standard ways to build APIs have emerged. REST (short for REpresentational State Transfer) is one of the most common. To use a REST API, you make a call to a server through a URL — say api.example.com/rest/books — and expect to get a list of books back in a format like JSON or XML. To get a single book, we’d go back to the server at a URL — like api.example.com/rest/books/123 — and expect the data for book #123. Adding a new book or updating a specific book’s data means more trips to the server at similar, purpose-defined URLs.

That’s the basic idea of two concepts we’ll be looking at here: GraphQL and Serverless.

Concept 1: GraphQL

Applications that do a lot of getting and updating of data make a lot of API calls. Complicated software, like Twitter, might make hundreds of calls to get the data for a single page. Collecting the right data from a handful of URLs and formatting it can be a real headache. In 2012, Facebook developers started looking for new ways to get and update data more efficiently.

Their key insight was that for the most part, data in complicated applications has relationships to other data. A user has followers, who are each users themselves, who each have their own followers, and those followers have tweets, which have replies from other users. Drawing the relationships between data results in a graph and that graph can help a server do a lot of clever work formatting and sending (or updating) data, and saving front-end developers time and frustration. Graph Query Language, aka GraphQL, was born.

GraphQL is different from the REST API approach in its use of URLs and queries. To get a list of books from our API using GraphQL, we don’t need to go to a specific URL (like our api.example.com/rest/books example). Instead, we call up the API at the top level — which would be api.example.com/graphql in our example — and tell it what kind of information we want back with a query that looks a bit like a JSON object:

{
  books {
    id
    title
    author
  }
}

The server sees that request, formats our data, and sends it back in another JSON object:

{
  "books" : [
    {
      "id" : 123,
      "title" : "The Greatest CSS Tricks Vol. I",
      "author" : "Chris Coyier"
    }, {
      // ...
    }
  ]
}

Sebastian Scholl compares GraphQL to REST using a fictional cocktail party that makes the distinction super clear. The bottom line: GraphQL allows us to request the exact data we want while REST gives us a dump of everything at the URL.

Concept 2: Serverless

Whenever I see the word “serverless,” I think of Chris Watterston’s famous sticker.

Similarly, there is no such thing as a truly “serverless” application. Chris Coyier nicely sums it up in his “Serverless” post:

What serverless is trying to mean, it seems to me, is a new way to manage and pay for servers. You don’t buy individual servers. You don’t manage them. You don’t scale them. You don’t balance them. You aren’t really responsible for them. You just pay for what you use.

The serverless approach makes it easier to build and deploy back-end applications. It’s especially easy for folks like me who don’t have a background in back-end development. Rather than spend my time learning how to provision and maintain a server, I often hand the hard work off to someone (or even perhaps something) else.

It’s worth checking out the CSS-Tricks guide to all things serverless. On the Ideas page, there’s even a link to a tutorial on building a serverless API!

Picking our tools

If you browse through that serverless guide you’ll see there’s no shortage of tools and resources to help us on our way to building an API. But exactly which ones we use requires some initial thought and planning. I’m going to cover two specific tools that we’ll use for our read-only API.

Tool 1: NodeJS and Express

Again, I don’t have much experience with back-end web development. But one of the few things I have encountered is Node.js. Many of you are probably aware of it and what it does, but it’s essentially JavaScript that runs on a server instead of a web browser. Node.js is perfect for someone coming from the front-end development side of things because we can work directly in JavaScript — warts and all — without having to reach for some back-end language.

Express is one of the most popular frameworks for Node.js. Back before React was king (How Do You Do, Fellow Kids?), Express was the go-to for building web applications. It does all sorts of handy things like routing, templating, and error handling.

I’ll be honest: frameworks like Express intimidate me. But for a simple API, Express is extremely easy to use and understand. There’s an official GraphQL helper for Express, and a plug-and-play library for making a serverless application called serverless-http. Neat, right?!

Tool 2: Netlify functions

The idea of running an application without maintaining a server sounds too good to be true. But check this out: not only can you accomplish this feat of modern sorcery, you can do it for free. Mind blowing.

Netlify offers a free plan with serverless functions that will give you up to 125,000 API calls in a month. Amazon offers a similar service called Lambda. We’ll stick with Netlify for this tutorial.

Netlify includes Netlify Dev, a CLI for Netlify’s platform. Essentially, it lets us run a simulation of our project in a fully-featured production environment, all within the safety of our local machine. We can use it to build and test our serverless functions without needing to deploy them.

At this point, I think it’s worth noting that not everyone agrees that running Express in a serverless function is a good idea. As Paul Johnston explains, if you’re building your functions for scale, it’s best to break each piece of functionality out into its own single-purpose function. Using Express the way I have means that every time a request goes to the API, the whole Express server has to be booted up from scratch — not very efficient. Deploy to production at your own risk.
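For contrast, a single-purpose Netlify function skips Express entirely and just exports a handler. A minimal sketch (the file name is hypothetical):

// functions/hello.js: a hypothetical stand-alone function
exports.handler = async (event, context) => {
  return {
    statusCode: 200,
    body: JSON.stringify({ message: "Hello World" }),
  };
};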

Let’s get building!

Now that we have our tools in place, we can kick off the project. Let’s start by creating a new folder, navigating to it in the terminal, and running npm init there. Once npm creates a package.json file, we can install the dependencies we need. Those dependencies are:

  1. Express
  2. GraphQL and express-graphql. These allow us to receive and respond to GraphQL requests.
  3. Bodyparser. This is a small layer that translates the requests we get to and from JSON, which is what GraphQL expects.
  4. Serverless-http. This serves as a wrapper for Express that makes sure our application can be used on a serverless platform, like Netlify.

That’s it! We can install them all in a single command:

npm i express express-graphql graphql body-parser serverless-http

We also need to install the Netlify CLI globally, which gives us Netlify Dev:

npm i -g netlify-cli

File structure

There’s a few files that are required for our API to work correctly. The first is netlify.toml which should be created at the project’s root directory. This is a configuration file to tell Netlify how to handle our project. Here’s what we need in the file to define our startup command, our build command and where our serverless functions are located:

[build]


  # This command builds the site
  command = "npm run build"


  # This is the directory that will be deployed
  publish = "build"


  # This is where our functions are located
  functions = "functions"

That functions line is super important; it tells Netlify where we’ll be putting our API code.

Next, let’s create that /functions folder at the project’s root, and create a new file inside it called api.js. Open it up and add the following lines to the top so our dependencies are available to use and are included in the build:

const express = require("express");
const bodyParser = require("body-parser");
const expressGraphQL = require("express-graphql");
const serverless = require("serverless-http");

Setting up Express takes only a few lines of code. First, we’ll initialize Express and wrap it in the serverless-http serverless function:

const app = express();
module.exports.handler = serverless(app);

These lines initialize Express, and wrap it in the serverless-http function. module.exports.handler lets Netlify know that our serverless function is the Express function.

Now let’s configure Express itself:

app.use(bodyParser.json());
app.use(
  "/",
  expressGraphQL({
    graphiql: true
  })
);

These two declarations tell Express what middleware we’re running. Middleware is what we want to happen between the request and response. In our case, we want to parse JSON using bodyparser, and handle it with express-graphql. The graphiql:true configuration for express-graphql will give us a nice user interface and playground for testing.

Defining the GraphQL schema

In order to understand requests and format responses, GraphQL needs to know what our data looks like. If you’ve worked with databases then you know that this kind of data blueprint is called a schema. GraphQL combines this well-defined schema with types — that is, definitions of different kinds of data — to work its magic.

The very first thing our schema needs is called a root query. This will handle any data requests coming in to our API. It’s called a “root” query because it’s accessed at the root of our API — say, api.example.com/graphql.

For this demonstration, we’ll build a hello world example; the root query should result in a response of “Hello world.”

So, our GraphQL API will need a schema (composed of types) for the root query. GraphQL provides some ready-built types, including a schema, a generic object², and a string.

Let’s get those by adding this below the imports:

const {
  GraphQLSchema,
  GraphQLObjectType,
  GraphQLString
} = require("graphql");

Then we’ll define our schema like this:

const schema = new GraphQLSchema({
  query: new GraphQLObjectType({
    name: 'HelloWorld',
    fields: () => ({ /* we'll put our response here */ })
  })
})

The first element in the object, with the key query, tells GraphQL how to handle a root query. Its value is a GraphQL object with the following configuration:

  • name – A reference used for documentation purposes
  • fields – Defines the data that our server will respond with. It might seem strange to have a function that just returns an object here, but this allows us to use variables and functions defined elsewhere in our file without needing to define them first³.
Putting that together, our schema with its message field looks like this:

const schema = new GraphQLSchema({
  query: new GraphQLObjectType({
    name: "HelloWorld",
    fields: () => ({
      message: {
        type: GraphQLString,
        resolve: () => "Hello World",
      },
    }),
  }),
});

The fields function returns an object and our schema only has a single message field so far. The message we want to respond with is a string, so we specify its type as a GraphQLString. The resolve function is run by our server to generate the response we want. In this case, we’re only returning “Hello World” but in a more complicated application, we’d probably use this function to go to our database and retrieve some data.
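To sketch what that might look like, here is the same schema with a stand-in fakeDb object in place of a real database; resolve can return a Promise, and GraphQL will wait for it:

// Hypothetical async data source standing in for a real database
const fakeDb = {
  getMessage: async () => "Hello from the database",
};

const schema = new GraphQLSchema({
  query: new GraphQLObjectType({
    name: "HelloWorld",
    fields: () => ({
      message: {
        type: GraphQLString,
        // resolve may return a Promise; GraphQL awaits it
        resolve: () => fakeDb.getMessage(),
      },
    }),
  }),
});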

That’s our schema! We need to tell our Express server about it, so let’s open up api.js and make sure the Express configuration is updated to this:

app.use(
  "/",
  expressGraphQL({
    schema: schema,
    graphiql: true
  })
);

Running the server locally

Believe it or not, we’re ready to start the server! Run netlify dev in Terminal from the project’s root folder. Netlify Dev will read the netlify.toml configuration, bundle up your api.js function, and make it available locally from there. If everything goes according to plan, you’ll see a message like “Server now ready on http://localhost:8888.”

If you go to localhost:8888 like I did the first time, you might be a little disappointed to get a 404 error.

But fear not! Netlify is running the function, only in a different directory than you might expect, which is /.netlify/functions. So, if you go to localhost:8888/.netlify/functions/api, you should see the GraphiQL interface as expected. Success!

Now, that’s more like it!

The screen we get is the GraphiQL playground and we can use it to test out the API. First, clear out the comments in the left pane and replace them with the following:

{
  message
}

This might seem a little… naked… but you just wrote a GraphQL query! What we’re saying is that we’d like to see the message field we defined in api.js. Click the “Run” button, and on the right, you’ll see the following:

{
  "data": {
    "message": "Hello World"
  }
}

I don’t know about you, but I did a little fist pump when I did this the first time. We built an API!

Bonus: Redirecting requests

One of my hang-ups while learning about Netlify’s serverless functions is that they run on the /.netlify/functions path. It isn’t ideal to type or remember, and I nearly bailed for another solution. But it turns out you can easily redirect requests when running and deploying on Netlify. All it takes is creating a file in the project’s root directory called _redirects (no extension necessary) with the following line in it:

/api /.netlify/functions/api 200!

This tells Netlify that any traffic that goes to yoursite.com/api should be sent to /.netlify/functions/api. The 200! bit instructs the server to send back a status code of 200 (meaning everything’s OK).

Deploying the API

To deploy the project, we need to connect the source code to Netlify. I host mine in a GitHub repo, which allows for continuous deployment.

After connecting the repository to Netlify, the rest is automatic: the code is processed and deployed as a serverless function! You can log into the Netlify dashboard to see the logs from any function.

Conclusion

Just like that, we are able to create a serverless API using GraphQL with a few lines of JavaScript and some light configuration. And hey, we can even deploy — for free.

The possibilities are endless. Maybe you want to create your own personal knowledge base, or a tool to serve up design tokens. Maybe you want to try your hand at making your own PokéAPI. Or maybe you’re interested in working more with GraphQL.

Regardless of what you make, it’s these sorts of technologies that are getting more and more accessible every day. It’s exciting to be able to work with some of the most modern tools and techniques without needing a deep technical back-end knowledge.

If you’d like to see the complete source code for this project, it’s available on GitHub.

Some of the code in this tutorial was adapted from Web Dev Simplified’s “Learn GraphQL in 40 minutes” article. It’s a great resource for going one step deeper into GraphQL. However, it’s also focused on a more traditional, server-full Express setup.


  1. If you’d like to see the full result of my explorations, I’ve written a companion piece called “A design API in practice” on my website.
  2. The reasons you need a special GraphQL object, instead of a regular ol’ vanilla JavaScript object in curly braces, is a little beyond the scope of this tutorial. Just keep in mind that GraphQL is a finely-tuned machine that uses these specialized types to be fast and resilient.
  3. Scope and hoisting are some of the more confusing topics in JavaScript. MDN has a good primer that’s worth checking out.

The post Building Serverless GraphQL API in Node with Express and Netlify appeared first on CSS-Tricks.

WordPress Contributors Seek Sponsorship for Improving Gutenberg Developer Docs

July 6th, 2020 No comments

A couple of WordPress contributors are currently looking for folks to sponsor them to work on the documentation for the WordPress block editor (often referred to as “Gutenberg”) and this is your chance to support them.

If you’ve developed blocks for the WordPress block editor — or at least have tried to — then you have likely struggled to find any meaningful documentation. Heck, just look at two recent posts here at CSS-Tricks where Dmitry Mayorov explains block variations and Leonardo Losoviz adds a welcome guide to the block editor. They both lament the lack of documentation and describe how they had to work around it. Chris has even experimented with different build approaches, from create-gluten-block to wp-cli to ACF Blocks. It’s sortuva Wild West out there and documented standards with examples would be a huge win, both for WordPress and for developers.

Now, I don’t think it’s worth getting into a debate about why the documentation doesn’t already exist a year after the block editor was released. There are lots of reasons for that, and none of them help move things forward.

We flipped the switch to enable the WordPress block editor here at CSS-Tricks earlier this year and haven’t looked back. Where several of us on the team would write drafts in Dropbox Paper, Google Docs, or even a code editor, I think we’ve all started writing directly in WordPress because, well, it’s just so gosh-darned nice. I know I’m looking forward to contributing whatever I can to help this make this tool more accessible to developers — that’s the best way to spark new ideas and innovations for the future of blocks.

Direct Link to ArticlePermalink

The post WordPress Contributors Seek Sponsorship for Improving Gutenberg Developer Docs appeared first on CSS-Tricks.

Understanding Plugin Development In Gatsby

July 6th, 2020 No comments

Aleem Isiaka

Gatsby is a React-based static-site generator that has overhauled how websites and blogs are created. It supports the use of plugins to create custom functionality that is not available in the standard installation.

In this post, I will introduce Gatsby plugins, discuss the types of Gatsby plugins that exist, differentiate between the forms of Gatsby plugins, and, finally, create a comment plugin that can be used on any Gatsby website, one of which we will install by the end of the tutorial.

What Is A Gatsby Plugin?

Gatsby, as a static-site generator, has limits on what it can do. Plugins are means to extend Gatsby with any feature not provided out of the box. We can achieve tasks like creating a manifest.json file for a progressive web app (PWA), embedding tweets on a page, logging page views, and much more on a Gatsby website using plugins.

Types Of Gatsby Plugins

There are two types of Gatsby plugins, local and external. Local plugins are developed in a Gatsby project directory, under the /plugins directory. External plugins are those available through npm or Yarn. Also, they may be on the same computer but linked using the yarn link or npm link command in a Gatsby website project.
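For example, linking a local plugin into a site during development might look like this (hypothetical folder names):

cd gatsby-source-comment-server
npm link

cd ../my-gatsby-site
npm link gatsby-source-comment-server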

Forms Of Gatsby Plugins

Plugins also exist in three primary forms, defined by their use cases:

  • Generic plugins
    Use Gatsby’s APIs to add functionality that isn’t tied to sourcing or transforming data.
  • Source plugins
    Pull data from a local or remote source into Gatsby’s data layer (conventionally prefixed with gatsby-source-).
  • Transformer plugins
    Convert data from one format into another that Gatsby can work with (conventionally prefixed with gatsby-transformer-).

Components Of A Gatsby Plugin

To create a Gatsby plugin, we have to define some files:

  • gatsby-node.js
    Makes it possible to listen to the build processes of Gatsby.
  • gatsby-config.js
    Mainly used for configuration and setup.
  • gatsby-browser.js
    Allows plugins to run code during one of the Gatsby’s processes in the browser.
  • gatsby-ssr.js
    Customizes and adds functionality to the server-side rendering (SSR) process.

These files are referred to as API files in Gatsby’s documentation and should live in the root of a plugin’s directory, either local or external.

Not all of these files are required to create a Gatsby plugin. In our case, we will be implementing only the gatsby-node.js and gatsby-config.js API files.

Building A Comment Plugin For Gatsby

To learn how to develop a Gatsby plugin, we will create a comment plugin that is installable on any blog that runs on Gatsby. The full code for the plugin is on GitHub.

Serving and Loading Comments

To serve comments on a website, we have to provide a server that allows for the saving and loading of comments. We will use an already available comment server at gatsbyjs-comment-server.herokuapp.com for this purpose.

The server supports a GET /comments request for loading comments and a POST /comments request for saving them. A POST /comments request accepts the following fields in its body (see the example after the list):

  • content: [string]
    The comment itself,
  • author: [string]
    The name of the comment’s author,
  • website
    The website that the comment is being posted from,
  • slug
    The slug for the page that the comment is meant for.
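For illustration, a POST /comments body might look like this (made-up values):

{
  "content": "Great article!",
  "author": "Jane Doe",
  "website": "https://example.com",
  "slug": "my-first-post"
}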

Integrating the Server With Gatsby Using API Files

Much like we do when creating a Gatsby blog, to create an external plugin, we should start with plugin boilerplate.

Initializing the folder

In the command-line interface (CLI), from any directory convenient for you, let’s run the following command:

gatsby new gatsby-source-comment-server https://github.com/Gatsbyjs/gatsby-starter-plugin

Then, change into the plugin directory, and open it in a code editor.

Installing axios for Network Requests

To begin, we will install the axios package to make web requests to the comments server:

npm install axios --save
// or
yarn add axios

Adding a New Node Type

Before pulling comments from the comments server, we need to define a new node type that the comments will be stored as. For this, in the plugin folder, our gatsby-node.js file should contain the code below:

exports.sourceNodes = async ({ actions }) => {
  const { createTypes } = actions;
  const typeDefs = `
    type CommentServer implements Node {
      _id: String
      author: String
      string: String
      content: String
      website: String
      slug: String
      createdAt: Date
      updatedAt: Date
    }
  `;
  createTypes(typeDefs);
};

First, we pulled actions from the APIs provided by Gatsby. Then, we pulled out the createTypes action, after which we defined a CommentServer type that implements the Gatsby Node interface. Finally, we called createTypes with the new node type we had defined.

Fetching Comments From the Comments Server

Now, we can use axios to pull comments and then store them in the data-access layer as the CommentServer type. This action is called “node sourcing” in Gatsby.

To source for new nodes, we have to implement the sourceNodes API in gatsby-node.js. In our case, we would use axios to make network requests, then parse the data from the API to match a GraphQL type that we would define, and then create a node in the GraphQL layer of Gatsby using the createNode action.

We can add the code below to the plugin’s gatsby-node.js API file, creating the functionality we’ve described:

const axios = require("axios");

exports.sourceNodes = async (
  { actions, createNodeId, createContentDigest },
  pluginOptions
) => {
  const { createTypes } = actions;
  const typeDefs = `
    type CommentServer implements Node {
      _id: String
      author: String
      string: String
      website: String
      content: String
      slug: String
      createdAt: Date
      updatedAt: Date
    }
  `;
  createTypes(typeDefs);

  const { createNode } = actions;
  const { limit, website } = pluginOptions;
  const _limit = parseInt(limit || 10000); // FETCH ALL COMMENTS
  const _website = website || "";

  const result = await axios({
    url: `https://Gatsbyjs-comment-server.herokuapp.com/comments?limit=${_limit}&website=${_website}`,
  });

  const comments = result.data;

  function convertCommentToNode(comment, { createContentDigest, createNode }) {
    const nodeContent = JSON.stringify(comment);

    const nodeMeta = {
      id: createNodeId(`comments-${comment._id}`),
      parent: null,
      children: [],
      internal: {
        type: `CommentServer`,
        mediaType: `text/html`,
        content: nodeContent,
        contentDigest: createContentDigest(comment),
      },
    };

    const node = Object.assign({}, comment, nodeMeta);
    createNode(node);
  }

  for (let i = 0; i < comments.data.length; i++) {
    const comment = comments.data[i];
    convertCommentToNode(comment, { createNode, createContentDigest });
  }
};

Here, we imported the axios package, set defaults in case the plugin’s options are not provided, and made a request to the endpoint that serves our comments.

We then defined a function that converts a comment into a Gatsby node, using the action helpers provided by Gatsby. Finally, we iterated over the fetched comments and called convertCommentToNode on each one.
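
Once the plugin is installed in a website (as we do later in this tutorial), we could verify that the nodes were created by running a query like the following sketch in the GraphiQL explorer at http://localhost:8000/___graphql; the fields follow the type definition above:

{
  allCommentServer {
    edges {
      node {
        _id
        content
        slug
        createdAt
      }
    }
  }
}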

Transforming Data (Comments)

Next, we need to resolve the comments to the posts they belong to. Gatsby has an API for that, called createResolvers. We can make this possible by appending the code below to the gatsby-node.js file of the plugin:

exports.createResolvers = ({ createResolvers }) => {
  const resolvers = {
    MarkdownRemark: {
      comments: {
        type: ["CommentServer"],
        resolve(source, args, context, info) {
          return context.nodeModel.runQuery({
            query: {
              filter: {
                slug: { eq: source.fields.slug },
              },
            },
            type: "CommentServer",
            firstOnly: false,
          });
        },
      },
    },
  };
  createResolvers(resolvers);
};

Here, we are extending MarkdownRemark to include a comments field. The newly added comments field will resolve to the CommentServer type, based on the slug that the comment was saved with and the slug of the post.
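
With the resolver in place, and once the plugin is installed in a website with Markdown posts, a page query could pull a post’s comments alongside its content. A sketch, where the slug value is illustrative:

{
  markdownRemark(fields: { slug: { eq: "/new-beginnings/" } }) {
    fields {
      slug
    }
    comments {
      content
      createdAt
    }
  }
}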

Final Code for Comment Sourcing and Transforming

The final code for the gatsby-node.js file of our comments plugin should look like this:

const axios = require("axios");

exports.sourceNodes = async (
  { actions, createNodeId, createContentDigest },
  pluginOptions
) => {
  const { createTypes } = actions;
  const typeDefs = `
    type CommentServer implements Node {
      _id: String
      author: String
      string: String
      website: String
      content: String
      slug: String
      createdAt: Date
      updatedAt: Date
    }
  `;
  createTypes(typeDefs);

  const { createNode } = actions;
  const { limit, website } = pluginOptions;
  const _limit = parseInt(limit || 10000); // FETCH ALL COMMENTS
  const _website = website || "";

  const result = await axios({
    url: `https://Gatsbyjs-comment-server.herokuapp.com/comments?limit=${_limit}&website=${_website}`,
  });

  const comments = result.data;

  function convertCommentToNode(comment, { createContentDigest, createNode }) {
    const nodeContent = JSON.stringify(comment);

    const nodeMeta = {
      id: createNodeId(`comments-${comment._id}`),
      parent: null,
      children: [],
      internal: {
        type: `CommentServer`,
        mediaType: `text/html`,
        content: nodeContent,
        contentDigest: createContentDigest(comment),
      },
    };

    const node = Object.assign({}, comment, nodeMeta);
    createNode(node);
  }

  for (let i = 0; i < comments.data.length; i++) {
    const comment = comments.data[i];
    convertCommentToNode(comment, { createNode, createContentDigest });
  }
};

exports.createResolvers = ({ createResolvers }) => {
  const resolvers = {
    MarkdownRemark: {
      comments: {
        type: ["CommentServer"],
        resolve(source, args, context, info) {
          return context.nodeModel.runQuery({
            query: {
              filter: {
                slug: { eq: source.fields.slug },
              },
            },
            type: "CommentServer",
            firstOnly: false,
          });
        },
      },
    },
  };
  createResolvers(resolvers);
};
Saving Comments as JSON Files

We need to save the comments for page slugs in their respective JSON files. This makes it possible to fetch the comments on demand over HTTP without having to use a GraphQL query.

To do this, we will implement the createPagesStatefully API in the gatsby-node.js API file of the plugin. We will use the fs module to check whether the path exists before creating a file in it. The code below shows how we can implement this:

const fs = require("fs")
const { resolve: pathResolve } = require("path")
exports.createPagesStatefully = async ({ graphql }) => {
  const comments = await graphql(
    `
      {
        allCommentServer(limit: 1000) {
          edges {
            node {
              name
              slug
              _id
              createdAt
              content
            }
          }
        }
      }
    `
  )

  if (comments.errors) {
    throw comments.errors
  }

  const markdownPosts = await graphql(
    `
      {
        allMarkdownRemark(
          sort: { fields: [frontmatter___date], order: DESC }
          limit: 1000
        ) {
          edges {
            node {
              fields {
                slug
              }
            }
          }
        }
      }
    `
  )

  const posts = markdownPosts.data.allMarkdownRemark.edges
  const _comments = comments.data.allCommentServer.edges

  const commentsPublicPath = pathResolve(process.cwd(), "public/comments")

  // Create the destination directory if it doesn't exist
  const exists = fs.existsSync(commentsPublicPath)

  if (!exists) {
    fs.mkdirSync(commentsPublicPath)
  }

  posts.forEach((post, index) => {
    const path = post.node.fields.slug
    const commentsForPost = _comments
      .filter(comment => {
        return comment.node.slug === path
      })
      .map(comment => comment.node)

    const strippedPath = path
      .split("/")
      .filter(s => s)
      .join("/")
    const _commentPath = pathResolve(
      process.cwd(),
      "public/comments",
      `${strippedPath}.json`
    )
    fs.writeFileSync(_commentPath, JSON.stringify(commentsForPost))
  })
}

First, we require the fs module and the resolve function of the path module. We then use the GraphQL helper to pull the comments that we stored earlier, avoiding extra HTTP requests, and we also pull the Markdown posts that we created. Then we check whether the comments directory exists in the public path, creating it before proceeding if it does not.

Finally, we loop through all of the nodes of the Markdown type. We pull out the comments for the current post and store them in the public/comments directory, with the post’s slug as the name of the file.

The .gitignore at the root of a Gatsby website excludes the public path from being committed, so saving files in this directory is safe.

During each rebuild, Gatsby will call this API in our plugin to fetch the comments and save them locally in JSON files.
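
For a post with the slug /new-beginnings/, the plugin would write public/comments/new-beginnings.json with content along the lines of this sketch; the fields match the GraphQL query above, and the values are illustrative:

[
  {
    "_id": "abc123",
    "name": "Jane Doe",
    "content": "Great article!",
    "slug": "/new-beginnings/",
    "createdAt": "2020-07-06T14:00:00.000Z"
  }
]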

Rendering Comments

To render comments in the browser, we have to use the gatsby-browser.js API file.

Define the Root Container for HTML

In order for the plugin to identify an insertion point in a page, we have to set an HTML element as the container for rendering and listing the plugin’s components. We expect every page that requires comments to have an HTML element with its ID set to commentContainer.
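
For example, the container could be as simple as the following sketch; any block element with this ID will do:

<div id="commentContainer"></div>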

Implement the Route Update API in the gatsby-browser.js File

The best time to do the file fetching and component insertion is when a page has just been visited. The onRouteUpdate API provides this functionality and passes the apiHelpers and pluginOptions as arguments to the callback function.

exports.onRouteUpdate = async (apiHelpers, pluginOptions) => {
  const { location, prevLocation } = apiHelpers
}
Create Helper That Creates HTML Elements

To make our code cleaner, we have to define a function that can create an HTML element, set its className, and add content. At the top of the gatsby-browser.js file, we can add the code below:

// Creates an element, sets its class and innerHTML, then returns it
function createEl(name, className, html = null) {
  const el = document.createElement(name)
  el.className = className
  el.innerHTML = html
  return el
}
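
For instance, the comments header that we create in the next section could also be written in a single call. A usage sketch, where comments-header is a hypothetical class name (the original code leaves the header’s class unset):

const header = createEl("h2", "comments-header", "Comments")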
Create Header of Comments Section

At this point, we can add a header at the insertion point of the comments components, in the onRouteUpdate browser API. First, we ensure that the element exists on the page, then create an element using the createEl helper, and then append it to the insertion point.

// ...

exports.onRouteUpdate = async ({ location, prevLocation }, pluginOptions) => {
  const commentContainer = document.getElementById("commentContainer")
  if (commentContainer && location.pathname !== "/") {
    const header = createEl("h2")
    header.innerHTML = "Comments"
    commentContainer.appendChild(header)
  }
}
Listing Comments

To list comments, we would append a ul element to the component insertion point. We will use the createEl helper to achieve this, and set its className to comment-list:

exports.onRouteUpdate = async ({ location, prevLocation }, pluginOptions) => {
  const commentContainer = document.getElementById("commentContainer")
  if (commentContainer && location.pathname !== "/") {
    const header = createEl("h2")
    header.innerHTML = "Comments"
    commentContainer.appendChild(header)
    const commentListUl = createEl("ul")
    commentListUl.className = "comment-list"
    commentContainer.appendChild(commentListUl)
  }
}

Next, we need to render the comments that we have saved in the public directory to a ul element, inside of li elements. For this, we define a helper that fetches the comments for a page using the path name.

// Other helpers
const getCommentsForPage = async slug => {
  const path = slug
    .split("/")
    .filter(s => s)
    .join("/")
  const data = await fetch(`/comments/${path}.json`)
  return data.json()
}
// ... onRouteUpdate implementation below

We have defined a helper, named getCommentsForPage, that accepts a page’s path, uses fetch to load the corresponding JSON file from the public/comments directory, parses the response as JSON, and returns the result to the caller.
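
Note that fetch only rejects on network failure; if a page has no comments file, the request will resolve with a 404 response. A slightly hardened version of the helper could guard against that. A sketch, not part of the original plugin:

const getCommentsForPage = async slug => {
  const path = slug
    .split("/")
    .filter(s => s)
    .join("/")
  const res = await fetch(`/comments/${path}.json`)
  // Treat a missing comments file as "no comments yet"
  if (!res.ok) return []
  return res.json()
}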

Now, in our onRouteUpdate callback, we will load the comments:

// ... helpers
exports.onRouteUpdate = async ({ location, prevLocation }, pluginOptions) => {
  const commentContainer = document.getElementById("commentContainer")
  if (commentContainer && location.pathname !== "/") {
    //... inserts header
    const commentListUl = createEl("ul")
    commentListUl.className = "comment-list"
    commentContainer.appendChild(commentListUl)
    const comments = await getCommentsForPage(location.pathname)
  }
}

Next, let’s define a helper to create the list items:

// .... other helpers

const getCommentListItem = comment => {
  const li = createEl("li")
  li.className = "comment-list-item"

  const nameCont = createEl("div")
  const name = createEl("strong", "comment-author", comment.name)
  const date = createEl(
    "span",
    "comment-date",
    new Date(comment.createdAt).toLocaleDateString()
  )
  nameCont.append(name)
  nameCont.append(date)

  const commentCont = createEl("div", "comment-cont", comment.content)

  li.append(nameCont)
  li.append(commentCont)
  return li
}

// ... onRouteUpdate implementation

In the snippet above, we created an li element with a className of comment-list-item, and a div for the comment’s author and time. We then created another div for the comment’s text, with a className of comment-cont.

To render the list items of comments, we iterate over the comments fetched using the getCommentsForPage helper, and then call the getCommentListItem helper to create a list item for each one. Finally, we append each item to the ul element:

// ... helpers
exports.onRouteUpdate = async ({ location, prevLocation }, pluginOptions) => {
  const commentContainer = document.getElementById("commentContainer")
  if (commentContainer && location.pathname !== "/") {
    //... inserts header
    const commentListUl = createEl("ul")
    commentListUl.className = "comment-list"
    commentContainer.appendChild(commentListUl)
    const comments = await getCommentsForPage(location.pathname)
    if (comments && comments.length) {
      comments.map(comment => {
        const html = getCommentListItem(comment)
        commentListUl.append(html)
        return comment
      })
    }
  }
}

Posting a Comment

Post Comment Form Helper

To enable users to post a comment, we have to make a POST request to the /comments endpoint of the API, and we need a form to collect the user’s input. Let’s create a form helper that returns an HTML form element.

// ... other helpers
const createCommentForm = () => {
  const form = createEl("form")
  form.className = "comment-form"
  const nameInput = createEl("input", "name-input", null)
  nameInput.type = "text"
  nameInput.placeholder = "Your Name"
  form.appendChild(nameInput)
  const commentInput = createEl("textarea", "comment-input", null)
  commentInput.placeholder = "Comment"
  form.appendChild(commentInput)
  const feedback = createEl("span", "feedback")
  form.appendChild(feedback)
  const button = createEl("button", "comment-btn", "Submit")
  button.type = "submit"
  form.appendChild(button)
  return form
}

The helper creates an input element with a className of name-input, a textarea with a className of comment-input, a span with a className of feedback, and a button with a className of comment-btn.

Append the Post Comment Form

We can now append the form to the insertion point, using the createCommentForm helper:

// ... helpers
exports.onRouteUpdate = async ({ location, prevLocation }, pluginOptions) => {
  const commentContainer = document.getElementById("commentContainer")
  if (commentContainer && location.pathname !== "/") {
    // insert header
    // insert comment list
    commentContainer.appendChild(createCommentForm())
  }
}

Post Comments to Server

To post a comment to the server, we have to tell the user what is happening, for example, that an input is required or that the API returned an error. The feedback span element that we added to the form is meant for this. To make it easier to update this element, we create a helper that sets the element’s text and applies a class based on the type of the feedback (whether error, info, or success).

// ... other helpers
// Sets the class and text of the form feedback
const updateFeedback = (str = "", className) => {
  const feedback = document.querySelector(".feedback")
  feedback.className = `feedback ${className ? className : ""}`.trim()
  feedback.innerHTML = str
  return feedback
}
// onRouteUpdate callback

We are using the querySelector API to get the element. Then we set the class by updating the className attribute of the element. Finally, we use innerHTML to update the contents of the element before returning it.

Submitting a Comment With the Comment Form

We will listen to the submit event of the comment form to determine when a user has decided to submit it. We don’t want empty data to be submitted, so we set a feedback message when a field is missing, and we disable the submit button while the comment is being saved:

exports.onRouteUpdate = async ({ location, prevLocation }, pluginOptions) => {
  // Appends header
  // Appends comment list
  // Appends comment form
  document
    .querySelector("body .comment-form")
    .addEventListener("submit", async function (event) {
      event.preventDefault()
      updateFeedback()
      const name = document.querySelector(".name-input").value
      const comment = document.querySelector(".comment-input").value
      if (!name) {
        return updateFeedback("Name is required")
      }
      if (!comment) {
        return updateFeedback("Comment is required")
      }
      updateFeedback("Saving comment", "info")
      const btn = document.querySelector(".comment-btn")
      btn.disabled = true
      const data = {
        name,
        content: comment,
        slug: location.pathname,
        website: pluginOptions.website,
      }

      fetch(
        "https://cors-anywhere.herokuapp.com/gatsbyjs-comment-server.herokuapp.com/comments",
        {
          body: JSON.stringify(data),
          method: "POST",
          headers: {
            Accept: "application/json",
            "Content-Type": "application/json",
          },
        }
      ).then(async function (result) {
        const json = await result.json()
        btn.disabled = false

        if (!result.ok) {
          updateFeedback(json.error.msg, "error")
        } else {
          document.querySelector(".name-input").value = ""
          document.querySelector(".comment-input").value = ""
          updateFeedback("Comment has been saved!", "success")
        }
      }).catch(err => {
        // fetch rejects with an Error on network failure
        btn.disabled = false
        updateFeedback(err.message, "error")
      })
    })
}

We use document.querySelector to get the form from the page, and we listen to its submit event. Then, we set the feedback to an empty string, from whatever it might have been before the user attempted to submit the form.

We also check whether the name or comment field is empty, setting an error message accordingly.

Next, we make a POST request to the comments server at the /comments endpoint, listening for the response. We use the feedback to tell the user whether there was an error when they created the comment, and we also use it to tell them whether the comment’s submission was successful.

Adding a Style Sheet

To add styles to the component, we have to create a new file, style.css, at the root of our plugin folder, with the following content:

#commentContainer {
}

.comment-form {
  display: grid;
}

At the top of gatsby-browser.js, import it like this:

import "./style.css"

The display: grid rule makes the form’s child elements each occupy 100% of the width of their container.
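
If you want space between the stacked form fields, the same rule could also set a gap. A sketch, with an assumed value:

.comment-form {
  display: grid;
  gap: 10px; /* assumed value: space between the name input, textarea, feedback, and button */
}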

Finally, all of the components for our comments plugin are complete. Time to install and test this fantastic plugin we have built.

Test the Plugin

Create a Gatsby Website

Run the following command from a directory one level above the plugin’s directory:

// PARENT
// ├── PLUGIN
// ├── Gatsby Website

gatsby new private-blog https://github.com/gatsbyjs/gatsby-starter-blog

Install the Plugin Locally and Add Options

Link With npm

Next, change to the blog directory, because we need to create a link for the new plugin:

cd /path/to/blog
npm link ../path/to/plugin/folder
Add to gatsby-config.js

In the gatsby-config.js file of the blog folder, we should add a new object whose resolve key holds the name of the plugin’s folder. In this case, the name is gatsby-comment-server-plugin:

module.exports = {
  // ...
  plugins: [
    // ...
    "gatsby-plugin-dom-injector",
    {
      resolve: "gatsby-comment-server-plugin",
      options: {website: "https://url-of-website.com"},
    },
  ],
}

Notice that the plugin accepts a website option to distinguish the source of the comments when fetching and saving comments.

Update the blog-post Component

For the insertion point, we will add an element with the ID commentContainer (for example, <div id="commentContainer"></div>) to the post template component at src/templates/blog-post.js of the blog project. This can be inserted at any suitable position; I have inserted mine after the last hr element and before the footer.

Start the Development Server

Finally, we can start the development server with gatsby develop, which will make our website available locally at http://localhost:8000. Navigating to any post page, such as http://localhost:8000/new-beginnings, will reveal the comments section at the insertion point that we specified above.

Create a Comment

We can create a comment using the comment form, and it will provide helpful feedback as we interact with it.

List Comments

To list newly posted comments, we have to restart the server, because our content is static.

Conclusion

In this tutorial, we have introduced Gatsby plugins and demonstrated how to create one.

Our plugin uses different APIs of Gatsby and its own API files to provide comments for our website, illustrating how we can use plugins to add significant functionality to a Gatsby website.

Although we are pulling from a live server, the plugin is saving the comments in JSON files. We could make the plugin load comments on demand from the API server, but that would defeat the notion that our blog is a static website that does not require dynamic content.

The plugin built in this post exists as an npm module, while the full code is on GitHub.

Resources:

• Gatsby’s blog starter, GitHub
  A private blog repository available for you to create a Gatsby website to consume the plugin.
• Gatsby Starter Blog, Netlify
  The blog website for this tutorial, deployed on Netlify for testing.

(yk)


Posters! (for CSS Flexbox and CSS Grid)

July 6th, 2020 No comments

Any time I chat with a fellow web person and CSS-Tricks comes up in conversation, there is a good chance they’ll say: oh yeah, that guide on CSS flexbox, I use that all the time!

Indeed that page, and its cousin the CSS grid guide, are among our top trafficked pages. I try to take extra care with them, making sure the information on them is current and useful, and that the page loads speedily and properly. A while back, in a round of updates I was doing on the guides, I reached out to Lynn Fisher, who always does incredible work on everything, to see if she’d be up for re-doing the illustrations on the guides. Miraculously, she agreed, and we have the much more charismatic illustrations that live on the guides today.

In a second miracle, I asked Lynn again if she’d be up for making physical paper poster designs of the guides, and she agreed again! And so they live!

Here they are:

You better believe I have it right next to me in my office:

They are $25 each, which includes shipping anywhere in the world.

The post Posters! (for CSS Flexbox and CSS Grid) appeared first on CSS-Tricks.


Make Your Sites Fast, Accessible And Secure With Help From Google

July 6th, 2020 No comments

Make Your Sites Fast, Accessible And Secure With Help From Google

Dion Almaer

2020-07-06T14:00:00+00:00
2020-07-06T16:34:30+00:00

Earlier this year, the Chrome team announced the Web Vitals initiative to provide unified guidance, metrics, and tools to help developers deliver great user experiences on the web. The Google Search team also recently announced that they will be evaluating page experience as a ranking criterion, and will include Core Web Vitals metrics as its foundation.

The three pillars of the 2020 Core Web Vitals are the loading, interactivity, and visual stability of page content, which are captured by the following metrics:

Core Web Vitals 2020 (Large preview)

• Largest Contentful Paint measures perceived load speed and marks the point in the page load timeline when the page’s main content has likely loaded.
• First Input Delay measures responsiveness and quantifies the experience users feel when trying to first interact with the page.
• Cumulative Layout Shift measures visual stability and quantifies the amount of unexpected movement of page content.

At web.dev LIVE, we shared best practices on how to optimize for Core Web Vitals and how to use Chrome DevTools to explore your site or app’s vitals values. We also shared plenty of other performance-related talks that you can find at web.dev/live in the Day 1 schedule.

tooling.report

The web is a complex platform, and developing for it can be challenging at the best of times. Build tools aim to make a web developer’s life easier, but as a result, build tools end up being quite complex themselves.

To help web developers and tooling authors conquer the complexity of the web, we built tooling.report. It’s a website that helps you choose the right build tool for your next project, decide if migrating from one tool to another is worth it, or figure out how to incorporate best practices into your tooling configuration and codebase. We aim to explain the tradeoffs involved when choosing a build tool and document how to follow best practices with any given build tool.

We designed a suite of tests for the report based on what we believe represents the best practices for web development. The tests allow us to determine which build tools let you follow which best practices, and we worked with the build tool authors to make sure we used their tools correctly and represented them fairly.

Comparison report of current set of libraries on tooling.report (Large preview)

The initial release of tooling.report covers webpack v4, Rollup v2, and Parcel v2, as well as Browserify+Gulp, which we believe are the most popular build tools right now. We built tooling.report with the flexibility to add more build tools and additional tests with help from the community.

So if you think a best practice should be tested or is missing, please propose it in a GitHub issue. And if you’re up for adding a new tool we did not include in the initial set, we welcome you to contribute!

Meanwhile, you can read more about our approach towards building tooling.report and watch our session from web.dev LIVE for more.

Latest In Chrome DevTools And Lighthouse 6.0

Most web developers spend a lot of their day in their developer tools, so we want to ensure that our tools enable greater productivity, whether it’s for debugging or for auditing and fixing issues to improve user experience.

Chrome DevTools: New Issues Tab, Color Deficiency Emulator And Web Vitals Support

One of the most powerful features of Chrome DevTools is its ability to spot issues on a webpage and bring them to the developer’s attention; this is most pertinent as we move into the next phase of a privacy-first web. To reduce notification fatigue and clutter in the Console, we’ve launched the Issues Tab, which focuses on three types of critical issues to start with: cookie problems, mixed content, and COEP issues. Watch our session on finding and fixing problems with the Issues Tab for more.

New Issues Tab in Chrome DevTools (Large preview)

Moreover, with Core Web Vitals becoming one of the most critical sets of metrics that we believe every developer must track and measure, we want to ensure developers are able to easily track how they perform against these thresholds. So we’ve added the three metrics to the Chrome DevTools timeline.

And finally, with an increasing number of developers focusing on accessibility, we also introduced a Color Vision Deficiency Emulator that allows developers to simulate vision deficiencies, including blurred vision and various types of color blindness. We’re excited to bring this feature to developers who are looking to make their websites more color-blind friendly, and you can see more about this and many other features in our session on what’s the latest in DevTools.

New Color Vision Deficiency Emulator in Chrome DevTools (Large preview)

Lighthouse 6.0: New Metrics, Core Web Vitals Lab Measurement, An Updated Performance Score, And Exciting New Audits

Lighthouse is an open-source automated tool that helps developers improve their site’s performance. In its latest version, we focused on providing insights based on metrics that give you a balanced view of the quality of your user experience along critical dimensions.

To ensure consistency, we’ve added support for the Core Web Vitals: LCP, TBT (the lab equivalent of FID, since Lighthouse is a lab tool), and CLS. We also removed three old metrics: First Meaningful Paint, First CPU Idle, and Max Potential FID. These removals are due to considerations like metric variability and newer metrics offering better reflections of the part of user experience that we’re trying to measure. Additionally, we made some adjustments to the metric weights based on user feedback.

We also added a super nifty scoring calculator to help you explore your performance scoring, by providing a comparison between v5 and v6 scores. When you run an audit with Lighthouse 6.0, the report comes with a link to the calculator with your results populated.

And finally, we added a bunch of useful new audits, with a focus on JavaScript analysis and accessibility.

All new audits in Lighthouse 6.0 (Large preview)

There are many other topics that we spoke about at web.dev LIVE; watch the session on what’s the latest in speed tooling and the latest in Puppeteer.

During web.dev LIVE, we shared more new features and updates that have come to the web over the past few months. Watch all the sessions to stay up to date, and subscribe to the web.dev newsletter if you’d like more such content straight to your inbox.

(ef, ra, il)
