Single-Page-Apps can be fantastic. Most teams will mess them up because most teams operate in dysfunctional organisations. Multi-Page-Apps can also be fantastic, both in highly functional organisations that can apply them when and where they are appropriate and in dysfunctional ones, as they enforce a limit to project scope.
Both approaches can be good and bad. Baldur makes the point that management plays the biggest role in ensuring either outcome.
Another truth: there are an awful lot of projects out there that aren’t all-in on either approach, but are a mixture. I feel that. I also feel like there is a strong desire to declare a winner or have a simple checklist to decide which approach to go for, and that just ain’t gonna happen because the landscape is too shifty and there are just too many factors. Technology: it’s complicated.
Doing a quick Google search on ‘UX design methods’ will yield hundreds of hits. If you’re new to UX design, or simply in a rush, wading through all that info is guaranteed to give you a headache.
How about this instead—before adding something, ask yourself “will it spark joy?”
Invented by Netflix star Marie Kondo, the KonMari method of decluttering is a full-on embrace of minimalism. Her approach removes the distractions from an environment until what remains is free to bring joy to the person(s) occupying the space.
So, how does KonMari relate to UX design, and how may creatives apply the process? Keep reading to find out!
Uncluttering From the Base: What’s the KonMari Organization Method?
The KonMari movement was founded by the tidying expert and TV personality Marie Kondo, who transforms cluttered homes into spaces of serenity and inspiration. She is the star of the Netflix show “Tidying Up With Marie Kondo” and has written four best-selling books on organizing and tidying the home.
The KonMari method doesn’t just involve discarding old items. The items removed are not necessarily useless; only what brings a smile to your face is to be kept.
Getting More Out of Minimalism
The Marie Kondo method is a part of minimalism — a movement that has guided several different disciplines for centuries.
There are traces of minimalism in 12th-century Japanese philosophy, including Ma and Wabi-Sabi. In the 20th century, the Netherlands produced the very restrictive De Stijl movement, and Germany the groundbreaking “form follows function” Bauhaus movement.
However, minimalism, as we now know it, was established in America in the 1960s – 70s.
The expression “keep it simple” has influenced design heavily, as can be seen in the interface designs of social media platforms like Instagram and WhatsApp, and email programs.
Here’s some common wisdom when it comes to ‘keeping it simple’.
Simplicity: The phrase ‘less is more’ cannot be more true than when it comes to design. Having too many elements can make something hard to visually parse. So, instead of trying to show off your design chops, you need to ask yourself “can this design make sense without this element?” If the answer is yes, consider leaving it out. Not only will this make for a better final product, but you’ll also save yourself some time!
In practice, a common example is designers using a restrained, limited color palette and at most one or two fonts. This allows the message to be conveyed through contrast rather than risking a messy design.
Structured Visual Hierarchy: How the elements on a webpage are laid out indicates to a reader their priority. A good designer arranges elements in a logical and strategic manner, so they are led to the areas of focus. Finding the appropriate place to look in a user interface (UI) design can be challenging if the piece has an ambiguous visual hierarchy.
For instance, a landing page that places “Review our Services” or “Policy” at the top and sends “Get a Free Quote” or “Register Now” to the bottom has a terrible visual hierarchy.
Proportion and Composition: Successful designers understand how individual elements come together to create a compelling whole. This is where grids come in. Grids provide organization to your work, showing how every part contributes to the final design.
The above image shows how grids can help a designer understand how much negative space an asset has and how the final product may look to a viewer.
Affordances of All Elements: Minimalism mandates that the elements in your UX design have recognizable affordances. The affordance of a design is the possibility of action that a user may take on it. For example, the affordance of a car is that it can be driven, and the affordance of a bell is that it can be rung.
When you introduce this concept to your design, it helps your audience know what your product can do at a glance. Say you’re designing an email product: users should immediately know what they can do as soon as they see your design. The point is that your system’s elements are arranged so that users can quickly and accurately identify the actions the system allows them to perform.
A designer may achieve this effect by first understanding the users’ perspective and specific cultural and social signifiers that they would be familiar with.
The Foxmail (a popular desktop email client) homepage is loaded with affordances – e.g., the shape makes the rose rectangle stand out as a button.
Presence of White Space: White space refers to the untouched space in a design, although this doesn’t necessarily describe a blank space with a white background. It may be of any color, texture, and pattern. Think of its purpose as something like the function of silence in music.
Having the right amount of white space is essential when trying to convey something important. Things like product images, calls-to-action, and steps in a guide need to be clear. Use white space to ensure a user is clear on what it is they should be focusing on.
Designers can use white space properly by spacing out their designs the way commas and periods space out prose. Consider the illustration above.
Use of the Golden Ratio: Although there are different views of what makes a good design, the golden ratio is something almost every designer agrees contributes to a strong design.
The Golden Ratio is obtained by splitting a line into two parts, a larger part (a) and a smaller part (b), such that a divided by b equals a + b divided by a; both ratios come out to approximately 1.618. The Divine Proportion can be used to find the perfect ratio for typography, layout and hierarchy definition, image cropping and resizing, and logo development.
Suppose your body copy is set at 11px. You can get the ideal size for your header text by multiplying 11 by 1.618, which gives 17.798. Rounding up, the ideal header text size would be 18px.
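To make the arithmetic concrete, here is a minimal sketch in Python (the 11px body size and the rounding behavior follow the example above; the function name is our own):

```python
# Scale a body text size up to a header size using the golden ratio.
PHI = 1.618

def header_size(body_size: float) -> int:
    """Multiply by phi and round to the nearest whole pixel."""
    return round(body_size * PHI)

print(header_size(11))  # 11 * 1.618 = 17.798, which rounds to 18
```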
Typography as a Crucial Design Variable: A few words on a page can be incredibly powerful, especially when they’re well placed.
The type of typeface that you employ can convey a message in itself. When deciding what to use, try placing the same text in multiple typefaces to see how the feel of the message changes.
How Your UX Design Can Benefit from the KonMari Method
The KonMari method embraces the idea of keeping it simple, but with the special twist of ‘sparking joy’. With it in mind, you can streamline your process and create some truly beautiful designs. And, in the process, shave off costly hours of work for yourself.
How “Sparking Joy” Helps Your Design Process
By focusing only on what sparks joy, a designer can better place their time on making the intention behind the design stand out.
Consider your local grocery store. Whenever they have a new product they want to promote, the approach they’d take is to place it in a highly visible area. Sure, some information about it would be somewhere nearby, but the main focus would be to have the product ‘speak for itself’ through clever lighting.
By the same token, your designs should present the product, CTA, etc. in the best possible light. The user’s eye should be led naturally to the action you want them to take. Everything else needs to be in service to that.
Less Chaos, More Organization
The problem of too much can start as early as the ideation stage. Creativity is great, but when you have loose ideas flying around it can be hard to work on a single thought. To better organize your thinking, you can use tools such as mind mapping, card sorting, or stakeholder mapping to stay on track.
The Kondo Approach is Life-changing
KonMari partly draws inspiration from Shintoism (and its kami). This ancient Japanese religion places great emphasis on rituals and spiritual practices with a clear objective: expelling negative energy to uplift the divine spirit. It concentrates on living the right way, and the first step is tidying up.
The same is true of designers. According to Warren Berger (founder of aMoreBeautifulQuestion), design may not change the world in the grand, dramatic way people expect.
However, design resolves individual problems worldwide. Thus, it changes lives one at a time.
Practical Tips to Apply KonMari to the UX Design Process
Applying KonMari to product design can be challenging for beginners. The following tips will help you to apply this organizational method.
Know the Procedure (and Trust It)
The UX design process doesn’t change regardless of the audience or the content type, whether an ecommerce business flyer or a job advert. Why? There’s a purpose for every part of the sequence. The designer has a reason for the low-to-high-fidelity progression during prototyping, a reason for concentrating on the customer’s challenges before fashioning solutions, and a reason for iterative tests.
KonMari has the same kind of protocol. You begin with clothing, then books, then papers, then miscellaneous items, and end with mementos.
It’s critical to know and understand the objectives of every phase in the design process. This will provide a foundation to build off of. This also makes selecting the appropriate tools easier.
Think Critically Before Deleting
Digital assets don’t physically exist, meaning deleting them is simple. This can have an effect on your thinking: with time, you might forget how much work went into a specific asset. Often, this split-second action is triggered by critique or feedback. Sometimes all that’s required is a small change, but whoops, you’ve gone and deleted the whole thing!
The KonMari method can help avoid this.
Instead of simply deleting the design outright, consider whether it could be recycled for a future project instead. Save iterations: it takes no time to duplicate your design after creating it. There are days you’ll be grateful you did!
Communicate What’s Possible
In KonMari, visibility enables communication. The process includes choosing a place for the objects you’re keeping so that they can be found easily. According to this method, invisibility equals absence.
This principle is sometimes called “discoverability and learnability”: design your product so that a user can easily tell whether they can achieve a particular objective with it. Communicate what’s possible.
For instance, placing placeholder text in a text box tells the user what to input in that box. Similarly, placeholder text in a search box gives the user a clear idea of what to search for. Doing this helps ensure the user fills a form correctly or gets accurate search results on their first attempt.
Final Thoughts
KonMari is an outstanding organizational strategy, thanks to its reliability. Cutting out the clutter in your home lets you find joy in what’s left. The same applies to your design. Removing the unnecessary, over-designed and overly complicated elements leaves behind a final design that really emphasizes what’s important.
If you run or have recently switched to a static site generator, you might find yourself writing a lot of Markdown. And the more you write it, the more you want the tooling experience to disappear so that the content takes focus.
I’m going to give you some options (including my favorite), but more importantly, I’ll walk through features that these apps offer that are especially relevant when choosing. Here are some key considerations for Markdown editing apps to help the words flow.
Consideration #1: Separate writing and reading modes
UX principles tell us that modes are problematic. But perhaps there is an exception for text editing software. From vi(m) to Google Docs, separate modes for writing and reading seem to appeal to writers. Similarly, many Markdown editors have separate modes or views for writing, editing and reading.
I happen to like Markdown editors that provide a side-by-side or paned design where I can see both at once. Writing Markdown is not the same as writing code. What it looks like matters, and having a preview can give you a feel for that. It’s kind of like static site generators that auto-refresh so that you can see your changes as you make them.
In contrast, I’m not a fan of the one-mode-to-rule-them-all design where Markdown formatting automatically converts to styled text, hiding the formatted code (implemented in some form by Dropbox Paper, Typora, Ulysses, and Bear). I can’t stand the work of futzing with the app to change a heading level, for example. Do I click it, double-click, triple-click? What if I’m just using the keyboard?
I want to see all the Markdown that I’ve written, even if the end user won’t. That’s one thing that I do want a Markdown editor to borrow from code editors.
Consideration #2: Good themes
Some Markdown editors allow full customization of editor themes, while others ship with nice ones out of the box. Regardless, I think a good editor should have just the right amount of styling to differentiate plain text from formatted text, but not so much that it distracts you from being able to read it and focus on the content. Even with the preview pane open, I typically spend most of my time looking at the editing view.
Different colors for each style
Since most of the text in the editor isn’t going to be rendered as it would in the browser, it’s nice to quickly see which text you’ve formatted using Markdown. This helps you determine, for example, whether a URL is actually written out in the text or is used inside a hyperlink. So, I like to have a different color for each Markdown style (headings, links, bold, italic, quotes, images, code, bullets, etc.)
Apply bold and italics styles too
I prefer to use asterisks for Markdown formatting everywhere I’m able to (e.g., bold, italics, and unordered lists), so I find it helpful to have extra styling beyond color to distinguish bold, italic, and bold+italic. When skimming plain Markdown it can be hard to tell **this is important** apart from *this is important*, whereas the rendered bold and italic versions are easier to separate. It also helps me see if I’ve accidentally mismatched asterisks (e.g., **is this important?*).
Different font sizes for each heading level
This might be a bit controversial. Code editors don’t show different font sizes within a file. Colors and styles, sure, but not sizes. But, for me, it helps.
When writing, hierarchy is the key to organization. With different font sizes for each heading, you can see the outline of whatever you’re writing just by skimming through it.
Shortcuts and smart keyboard behaviors
I expect all the standard shortcuts that work in a text editor to work. CTRL/CMD + B for bold, I for italic, etc., as well as some that are nice-to-have when writing articles, in particular CTRL/CMD + (number) for headings. CTRL/CMD + 1 for H1, etc.
But there are also some keyboard behaviors I like that are borrowed from code editors. For example, if I select some text and press [ or ( it won’t overwrite that text, but, instead, enclose it with the opening and closing character. Same for using text formatting characters like *, `, and _.
I also rely on keyboard shortcuts to create links and images. Even after more than five years of writing Markdown on a regular basis, I still sometimes forget whether the brackets or the parentheses come first. So, I really like having a handy shortcut to insert them correctly.
Even better, in some editors, if you have a URL in your clipboard and you select text then use a keyboard shortcut to make it into a link, it will insert the URL in the hyperlink field. This has really sped up my workflow.
Bonus feature: Copy to HTML
The editor that I use most often has a one-click “Copy HTML” feature (with keyboard shortcut) that takes all of the Markdown I’ve written and copies the HTML to the clipboard. This can be very convenient when using an online editor (e.g., WordPress) that has a code/source option.
Consideration #3: Stand-alone editor vs. CMS/IDE plugin
I know that a lot of people who work with static site generators love their IDEs and may even jump back and forth between code and Markdown in a single day. I often do. So I can see why using a familiar IDE would be more attractive than having a separate app for Markdown.
But when I’m sitting down to write a page in Markdown or an article, where I’m focusing on the text itself, I prefer a separate app.
I’m not fanatical about standalone Markdown editors over IDE editors or plugins; I still open an IDE occasionally for complex find-and-replace tasks and other edits. As long as your setup offers the benefits listed above, I wouldn’t try to talk you out of it.
Here are a few reasons why a standalone app might work better for writing:
Cleaner interface. I’m not someone who needs “Zen mode” in my writing app, but I do like to have as few panels open as possible when I’m writing, which typically requires turning a lot of things off in an IDE.
Performance. Most Markdown tools just feel lighter to me. They are certainly less complex and do less stuff, so they should be faster. I don’t ever want to feel like my writing app is exerting any effort. It should launch fast and respond instantly, always.
Availability. I just haven’t found a Markdown editor in an IDE that I really like. Perhaps there is one out there; I just don’t have time to try them all. But I like most standalone Markdown editors that I’ve used, and I can’t say the same for what I’ve tried in IDE-land.
Mental shift. When I open my IDE, I’m thinking about writing code, but when I open my Markdown editor, I’m thinking about writing words. I like that it gets me into the right mindset.
My favorite Markdown editors for writing
While these are my top picks, it doesn’t mean that if an app isn’t on this list that it’s bad. There are several good apps that I didn’t mention because they had too many features or were too expensive given the number of decent free or cheap options. And similar to IDE packages, there are a ton of Markdown apps out there and I haven’t tried them all (but I have tried a lot of them!).
A note about features that help you get “into the zone,” such as “typewriter” or “focus” modes, or soothing background music. They’ve never really worked for me and I eventually turn them off, so they aren’t a feature that I go looking for. (Although if you are into those, you can try Typora, which is free (during Beta) and runs on Mac, Windows, and Linux.)
For a more full-featured app, the editor interface is pretty good, and meets most of the criteria mentioned above. Zettlr offers similar features, but just feels more complicated, IMO.
Not my favorite app for writing and editing text, but it has the nice added ability to publish to various platforms (e.g., Medium, WordPress, Tumblr, Blogger, and Evernote).
A good choice if you use Markdown for more than just site content (personal notes, task management, etc.). Scores high in appearance and usability, too.
Summary
With Markdown syntax being supported in more and more places — including Slack, GitHub, WordPress, etc. — it is quickly becoming a lingua franca for richer communication in our increasingly text-based lives. It’s here to stay because it’s not only easy to learn and use, it’s intuitive. And luckily we’re currently spoiled for choice when it comes to quality Markdown writing apps.
Laravel was designed to meet a range of requirements, including event processing and authentication mechanisms for its MVC architecture. It also has a package manager that can handle configurable, extensible code, with strong support for database management.
With its concise, elegant features, Laravel has attracted substantial attention. Whether specialist or newbie, PHP developers tend to think of Laravel first when starting a new project.
Laravel does all it can to make things easy for you, which means a great deal of work is carried out in the background to keep you comfortable. The trade-off is that all of Laravel’s “magic” adds layers of code, which may need tuning when performance becomes a concern.
What made Laravel become the most widely used PHP framework?
Scalability and modularity
Modularity and code scalability are at the core of Laravel. In the Packalyst directory, which contains about 5,500 packages, you can find the packages you wish to add. The goal of Laravel is to let you add whatever functionality you need.
Program interfaces and microservices
Lumen is a Laravel micro-framework focused on streamlining. With it, you can easily and quickly develop microprojects. Lumen incorporates the most critical aspects of Laravel with minimal overhead, and by copying your Lumen code into a Laravel project, you can move to the full framework.
Authentication
Laravel comes with local user authentication out of the box, and you can keep users signed in with the “remember me” option. It also lets you check, for example, whether a user has an active session, along with other parameters.
Integration type
Laravel Cashier can meet all of your needs when developing a payment platform. In addition, it synchronizes and integrates with the user authentication system, so you don’t need to be concerned about how the billing system fits into the process.
Laravel Main Features
Increased productivity – Cache
A solid caching system can be created for your application. You can tune how the application loads data to provide the most satisfying experience for the user. By default, file-system-level caching is enabled. However, you can modify this behavior by using key-value stores such as Redis, Memcached, or APC. They store data as “key-value” pairs in the server’s RAM. Because of this, data access time is significantly reduced, and developers can cache almost any data. For the developer, the main art is to correctly invalidate the cache and remove obsolete data when it changes.
Open source and a large community
Because Laravel has so many supporters, a Laravel-based product is easy to maintain, and it’s easy to find developers for your project. There are platforms like Adeva where you can get in touch with the best developers out there. And since the framework is open source, anyone can improve it and build third-party applications on top of it.
MVC architecture
By following the MVC architecture, a clear separation between the three abstract layers of the application is achieved: model, view, and controller.
They become independent of each other and can be used separately. This helps avoid situations where fixing a bug in the logic breaks previously working code and leads to even more bugs in multiple places. It is difficult for anyone to keep track of all the connections and foresee where new code may have a negative effect, so the only proper solution is to get rid of these connections.
Eloquent ORM
The Laravel object-relational mapper (ORM) is known as Eloquent and is one of Laravel’s best characteristics because it allows seamless connections to the database and data model of choice.
With Eloquent, Laravel eliminates the obstacles of composing complex SQL queries to access the data in your database.
Artisan CLI
Another essential aspect of Laravel is the Artisan CLI or command line. It allows you to generate or modify any part of Laravel from the command line, eliminating the need to navigate through folders and files.
Without having a database client installed, you can even interact with your database directly from the command line via Laravel Tinker.
Automatic pagination
You can appreciate the value of having pagination handled by the framework if you have ever had a problem with paging in your applications. Laravel solves the pagination problem with automatic pagination that works straight out of the box. This is one of its best-known features and saves you the effort of solving the mystery of pagination yourself.
High security
There are three major security issues: SQL injection, cross-site request forgery (CSRF), and XSS.
What is a SQL injection attack?
SQL injection is a very old and very common vulnerability, yet even developers with little experience can avoid the risk. Let’s look at a classic case first: the query built from what users type in when logging in.
$sql = "select * from user where username = '" . $userName . "' and passwd = '" . $userPassword . "'";
Under normal circumstances, this produces: select * from user where username = 'admin' and passwd = 'mima'. Unfortunately, hackers can also write SQL. If a hacker enters the username admin' or 1=1 -- with an empty password, the spliced SQL becomes: select * from user where username = 'admin' or 1=1 -- ' and passwd = ''.
1=1 is always true, and everything after the comment marker (--) is ignored, so the attacker is logged in as the administrator. The principle is that simple.
How do you stop this injection attack once you understand the principle? Most often, keyword checks are carried out in the business-logic layer: if the input contains SQL keywords like select or delete, they are replaced. But the most effective way is to use parameterized queries with bound variables.
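As a sketch of the bound-variables approach, here is the same login query written with placeholders, using Python’s built-in sqlite3 module for illustration (the table and column names follow the example above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table user (username text, passwd text)")
conn.execute("insert into user values ('admin', 'mima')")

# The classic injection payload from the example above.
username = "admin' or 1=1 --"
password = ""

# With placeholders, the driver sends the values separately from the
# SQL text, so the quote and comment characters are treated as data.
rows = conn.execute(
    "select * from user where username = ? and passwd = ?",
    (username, password),
).fetchall()
print(rows)  # [] -- the payload matches no user instead of logging in
```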
The framework is guarded in this situation by ORM, which by definition eliminates the risk of “raw” SQL queries and restores all parameters during their renovation. In addition, anything that can harm the data is removed from them.
A simple way to find injection points
1. Find a web page whose URL contains a query string (e.g., look for URLs with “id=” in them).
2. Send a request to the page, appending an extra single quote mark to the id parameter (e.g., id=123').
3. Check the returned content and look for “sql”, “instruction”, and other such keywords (their presence also means that specific error information is being leaked, which is very bad in itself).
4. Does the error message indicate that a parameter sent to SQL Server was incorrectly encoded? If so, the website may be vulnerable to SQL injection.
What is CSRF?
An attacker carries out unauthorized operations (such as transfers or posts) via cross-site requests as if they were the legitimate user. The CSRF principle is to hijack a user’s identity through the browser cookie or server session.
This is solved by screening the forbidden HTML tags and outputting the screened string as plain text without executing it.
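That idea of screening markup and outputting it as plain text is output escaping; here is a minimal sketch using Python’s standard html module for illustration:

```python
import html

# User-supplied content that embeds a script tag.
comment = "<script>alert('xss')</script>"

# Escaping turns the markup into inert text before it is rendered,
# so the browser displays it instead of executing it.
print(html.escape(comment))
# &lt;script&gt;alert(&#x27;xss&#x27;)&lt;/script&gt;
```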
The primary means of preventing CSRF is to identify the identity of the requestor, primarily in the following ways:
Adding a token to the form
Verification code
Verify the Referer in the request header (anti-hotlinking works in the same way).
Tokens and verification codes are both single-use, so in principle they are the same, but the verification code hurts the user experience. When it isn’t necessary, don’t resort to a verification code lightly. The current practice on many sites is to require a verification code only after a form has been submitted unsuccessfully several times, which preserves a good user experience.
Nearly everyone is familiar with verification codes: they stop brute-force login attempts and also prevent CSRF attacks effectively. The verification code is the shortest and most effective counter to CSRF, but it isn’t feasible to require codes for every user operation; they may only be needed for a few essential functions. And as HTML5 develops, with just the canvas tag the front end too can implement verification-code features against CSRF.
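The “adding a token to the form” defense above can be sketched in a few lines. This is an illustration in Python, not any specific framework’s API; the function names and the session-binding scheme are our assumptions:

```python
import hashlib
import hmac
import secrets

SECRET_KEY = secrets.token_bytes(32)  # per-application secret (assumption)

def make_token(session_id: str) -> str:
    # Bind the token to the session so it can't be replayed elsewhere.
    return hmac.new(SECRET_KEY, session_id.encode(), hashlib.sha256).hexdigest()

def verify_token(session_id: str, token: str) -> bool:
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(make_token(session_id), token)

token = make_token("session-123")   # embedded in the form as a hidden field
print(verify_token("session-123", token))   # genuine submission: True
print(verify_token("session-456", token))   # token from another session: False
```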
What is XSS?
XSS is an attack method in which malicious scripts are injected into web pages so that they execute in the user’s browser when the user views the page.
There are two types of XSS attacks:
The first induces users to click a link with a malicious script embedded in it. For example, many attackers currently use Weibo and forums to post URLs that contain malicious scripts.
The other is when a malicious script is stored in the attacked website’s database; the script is then served from the database and executed on the page when a user browses it. Early versions of QQ Mail were repeatedly used as a platform for such cross-site scripting attacks.
To protect your website from this attack, you must update it frequently. Vulnerabilities that hackers use to inject malicious code are regularly found in the WordPress core, plugins, and themes, which is why it is essential to update all of these components; updates address the vulnerabilities discovered to date.
Scroll shadows are when you can see a little inset shadow on elements if (and only if) you can scroll in that direction. It’s just good UX. You can actually pull it off in CSS, which I think is amazing and one of the great CSS tricks. Except… it just doesn’t work on iOS Safari. It used to work, and then it broke in iOS 13, along with some other useful CSS things, with no explanation why and has never been fixed.
So, now, if you really want scroll shadows (I think they are extra useful on mobile browsers anyway), it’s probably best to reach for JavaScript.
I’m bringing this up now because I see Jonnie Hallman is blogging about it again. He mentioned it as an awesome little detail back in May. There are certain interfaces where scroll shadows really make sense.
Taking a step back, I thought about the solution that currently worked, using scroll events. If the scroll area has scrolled, show the top and left shadows. If the scroll area isn’t all the way scrolled, show the bottom and right shadows. With this in mind, I tried the simplest, most straightforward, and least clever approach by putting empty divs at the top, right, bottom, and left of the scroll areas. I called these “edges”, and I observed them using the Intersection Observer API. If any of the edges were not intersecting with the scroll area, I could assume that the edge in question had been scrolled, and I could show the shadow for that edge. Then, once the edge is intersecting, I could assume that the scroll area has reached the edge of the scroll, so I could hide that shadow.
Clever clever. No live demo, unfortunately, but read the post for a few extra details on the implementation.
Other JavaScript-powered examples
I do think if you’re going to do this you should go the IntersectionObserver route though. Would love to see someone port the best of these ideas all together (wink wink).
Ahmad Shadeed documents a bona fide CSS trick from the Facebook CSS codebase. The idea is that when an element is the full width of the viewport, it doesn’t have any border-radius; otherwise, it has 8px of border-radius. Here’s the code:
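Reconstructed from the description that follows (the 8px radius, the 100vw-vs-100% comparison, the 4px tolerance, and the ×9999 toggle), the rule looks something like this; the `.card` selector is just a placeholder:

```css
/* 0px radius when the element spans the full viewport, 8px otherwise. */
.card {
  border-radius: max(0px, min(8px, calc((100vw - 4px - 100%) * 9999)));
}
```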
One line! Super neat. The guts of it is the comparison between 100vw and 100%. Essentially, the border-radius comes out to 8px most of the time. But if the component becomes the same width as the viewport (within 4px, but I’d say that part is optional), then the border-radius becomes 0px, because the inner calculation yields a large negative number and the outer max() clamps it to 0.
The 9999 multiplication means that you’ll never get low-positive numbers. It’s a toggle. You’ll either get 8px or 0px and nothing in between. Try removing that part, resizing the screen, and seeing it sorta morph as the viewport becomes close to the component size:
(CodePen demo embedded in the original post.)
Why do it like this rather than at a @media query? Frank, the developer behind the Facebook choice, says:
It’s not a media query, which compares the viewport to a fixed width. It’s effectively a kind of container query that compares the viewport to its own variable width.
PageSpeed Insights is a free performance measurement tool provided by Google. It analyzes the contents of a web page for desktop and mobile devices. It provides a single number score (from 0 to 100) that summarizes several underlying metrics that measure performance. If you have not run PageSpeed Insights on your website, then you should stop and do it now. It’s an important indicator of how Google scores and ranks your site.
If your PageSpeed Insights score is below 80, don’t panic. You are not alone. Many websites are not optimized for performance. The good news is that you can take steps that should immediately improve your score.
You will notice that PageSpeed Insights highlights issues that cause slow page loading. However, you might need more guidance to resolve these issues. Below, we walk you through how to resolve four common issues related to images. We also show you how ImageEngine, an image CDN, can simplify, automate, and deliver the best image optimization solution possible.
Performance Drives Google SEO Rankings
Why do the PageSpeed Insights score and performance matter? Isn’t SEO ranking all about content relevance, backlinks, and domain authority? Yes, but now performance matters more than it did a year ago. Starting in 2021, Google added performance metrics to the factors that impact search engine rankings. In a market where websites are constantly jockeying to match their competition’s pages (for content relevance, keywords, and other SEO issues), performance is making a difference in keyword search engine rankings.
What Are Core Web Vitals Metrics?
PageSpeed Insights relies on a set of performance metrics called Core Web Vitals. These metrics are:
Largest Contentful Paint (LCP): Measures the render time (in seconds) of the largest image or text block visible within the viewport, relative to when the page first started loading. Typically, the largest image is the hero image at the top of pages.
First Input Delay (FID): Measures the time from when a user first interacts with a page (i.e. when they click a link, tap on a button, or use a custom JavaScript-powered control) to the time when the browser is actually able to begin processing event handlers in response to that interaction.
Cumulative Layout Shift (CLS): Measures the layout shift that occurs any time a visible element changes its position from one rendered frame to the next.
Images and JavaScript are the Main Culprits
PageSpeed Insights breaks down problems into categories based upon how they impact these Core Web Vitals metrics. The top two reasons why you might have a low score are driven by JavaScript and images.
JavaScript issues are usually related to code that either blocks or delays page loading. For example, lazy-loading images might involve JavaScript that blocks loading. As a rule of thumb, do not use a third-party JavaScript library to manage image loading. These libraries frequently break the browser’s built-in image loading features. Lazy-loading may make above-the-fold images load slower (longer LCP) because the browser starts the download later and because the browser first has to execute the JavaScript.
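For reference, the browser-native approach needs nothing but markup; a minimal sketch (filenames are illustrative):

```html
<!-- Above the fold: let the browser load the hero immediately (eager is the default) -->
<img src="hero.jpg" alt="Product hero" width="1200" height="600">

<!-- Below the fold: the native loading attribute defers the download, no library needed -->
<img src="gallery-photo.jpg" alt="Gallery photo" width="600" height="400" loading="lazy">
```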
Another JavaScript issue involves code that is large or unnecessary for the page. In other words, code bloat. There are good resources for resolving these issues on the web. However, in this blog, we will focus on image problems.
Images are a major contributor to poor performance. The average website payload is 2MB in 2021, and 50% of that is images. Frequently, images are larger than they need to be and can be optimized for size with no impact on quality…if you do it right.
Four Image Issues Highlighted by PageSpeed Insights
Largest Contentful Paint is the primary metric impacted by images. PageSpeed Insights frequently recommends the following four pieces of advice:
Serve images in next-gen formats.
Efficiently encode images.
Properly size images.
Avoid enormous network payloads.
That advice seems straightforward. Google provides some great advice on how to deal with images in its dev community. It can be summarized in the following steps:
Select the appropriate file format.
Apply the appropriate image compression.
Apply the right display size.
Render the image.
Write responsive image code to select the right variant of the image.
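Put together, those steps typically produce responsive image markup along these lines (filenames, widths, and breakpoints are illustrative):

```html
<picture>
  <!-- Next-gen format for browsers that support it -->
  <source type="image/webp"
          srcset="hero-400.webp 400w, hero-800.webp 800w, hero-1600.webp 1600w"
          sizes="(max-width: 600px) 100vw, 800px">
  <!-- JPEG fallback, also in several pre-generated sizes -->
  <img src="hero-800.jpg"
       srcset="hero-400.jpg 400w, hero-800.jpg 800w, hero-1600.jpg 1600w"
       sizes="(max-width: 600px) 100vw, 800px"
       width="800" height="450" alt="Hero image">
</picture>
```

Note how a single image already requires six pre-generated files here; multiply that across a catalog and the drawbacks below add up quickly.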
We call Google’s process the “Build-Time Responsive Syntax” approach. If you have a relatively static website where you don’t generate new pages or switch out images frequently, then you can probably live with this approach. However, if you have a large and dynamic site with many images, then you will quickly feel the pain of this approach. Google itself stresses that developers should seek to automate this image process. Why? Because the process has some serious workflow drawbacks:
Adds storage requirements due to a large increase in image variants.
Increases code bloat and introduces more code complexity.
Requires developers’ time and effort to create variants and implement responsiveness.
Doesn’t adapt to different contexts. It relies on best-guess (breakpoints) of what device visits the web page.
Needs a separate CDN to further increase delivery speeds.
Requires ongoing maintenance to adapt to new devices, breakpoints, image formats, markets, and practices.
Key Steps to Achieving High-Performance Images
Instead of using the Build-Time Responsive Syntax approach, an automated image CDN solution can address all of the image issues raised by PageSpeed Insights. The key steps of an image CDN that you should look for are:
Detect Mobile Devices: Detection of a website visitor’s device model and its technical capabilities. These include: OS version, browser version, screen pixel density, screen resolution width and height, support for next-gen image and video formats. This is where ImageEngine is unique in the market. ImageEngine uses true mobile device detection to further improve image optimization. It has a huge impact on the effectiveness of the image optimization process.
Optimize Images: An image CDN will leverage the device’s parameters to automatically resize, compress and convert large original images into optimized images with next-generation file formats, like WebP and AVIF. Frequently, an image CDN like ImageEngine will reduce the image payload by up to 80%.
Deliver by CDN: Image CDNs like ImageEngine have edge servers strategically positioned around the globe. By pushing optimized images closer to requesting customers and delivering them immediately from the cache, an image CDN often provides a 50% faster web page download time than a traditional CDN.
You can automate the addition of the delivery address to the img src attribute by using plug-ins for WordPress and Magento. Developers can also use ImageEngine’s components for JavaScript frameworks like React, Vue, or Angular to simplify the process.
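Hypothetically, the swap looks like this (the delivery address hostname is made up for illustration):

```html
<!-- Before: the original image served from your own origin -->
<img src="https://shop.example.com/images/product.jpg" alt="Product">

<!-- After: the same path behind a hypothetical ImageEngine delivery address,
     which detects the device and returns an optimized variant automatically -->
<img src="https://myshop.imgeng.in/images/product.jpg" alt="Product">
```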
Additionally, there are many ways to simplify implementation via adjustments to templates for many CMS and eCommerce platforms.
Results: Improved Performance, Better SEO
Most ImageEngine users see a huge improvement in LCP metrics, and consequently, a big improvement in the overall PageSpeed Insights score. ImageEngine provides a free demo analysis of your images before and after image optimization. In many cases, developers see improvements of many seconds on their LCP and Speed Index.
In summary, performance drives higher search rankings, and better UX, and increases website conversions for eCommerce. The steps you take to improve your image performance will pay for themselves in more sales and conversions, streamlined workflow, and lower CDN delivery costs.
Almost all version control systems (VCS) have some kind of support for branching. In a nutshell, branching means that you leave the main development line by creating a new, separate container for your work and continue to work there. This way you can experiment and try out new things without messing up the production code base. Git users know that Git’s branching model is special and incredibly powerful; it’s one of the coolest features of this VCS. It’s fast and lightweight, and switching back and forth between the branches is just as fast as creating or deleting them. You could say that Git encourages workflows that use a lot of branching and merging.
Git totally leaves it up to you how many branches you create and how often you merge them. Now, if you’re coding on your own, you can choose when to create a new branch and how many branches you want to keep. This changes when you’re working in a team, though. Git provides the tool, but you and your team are responsible for using it in the optimal way!
In this article, I’m going to talk about branching strategies and different types of Git branches. I’m also going to introduce you to two common branching workflows: Git Flow and GitHub Flow.
Teamwork: Write down a convention
Before we explore different ways of structuring releases and integrating changes, let’s talk about conventions. If you work in a team, you need to agree on a common workflow and a branching strategy for your projects. It’s a good idea to put this down in writing to make it accessible to all team members.
Admittedly, not everyone likes writing documentation or guidelines, but putting best practices on record not only avoids mistakes and collisions, it also helps when onboarding new team members. A document explaining your branching strategies will help them to understand how you work and how your team handles software releases.
Here are a couple of examples from our own documentation:
master represents the current public release branch
next represents the next public release branch (this way we can commit hotfixes on master without pulling in unwanted changes)
feature branches are grouped under feature/
WIP branches are grouped under wip/ (these can be used to create “backups” of your personal WIP)
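In day-to-day work, following such a convention is just a matter of naming (run inside a repository; the branch names are examples):

```shell
# Start a new feature branch under the feature/ group
git checkout -b feature/login-form

# Park unfinished work as a personal WIP "backup"
git checkout -b wip/login-form-experiments

# Return to the current public release branch
git checkout master
```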
A different team might have a different opinion on these things (for example on “wip” or “feature” groups), which will certainly be reflected in their own documentation.
Integrating changes and structuring releases
When you think about how to work with branches in your Git repositories, you should probably start with thinking about how to integrate changes and how to structure releases. All those topics are tightly connected. To help you better understand your options, let’s look at two different strategies. The examples are meant to illustrate the extreme ends of the spectrum, which is why they should give you some ideas of how you can design your own branching workflow:
Mainline Development
State, Release, and Feature Branches
The first option could be described as “always be integrating” which basically comes down to: always integrate your own work with the work of the team. In the second strategy you gather your work and release a collection of it, i.e. multiple different types of branches enter the stage. Both approaches have their pros and cons, and both strategies can work for some teams, but not for others. Most development teams work somewhere in between those extremes.
Let’s start with the mainline development and explain how this strategy works.
Mainline Development
I said it earlier, but the motto of this approach is “always be integrating.” You have one single branch, and everyone contributes by committing to the mainline:
Remember that we’re simplifying for this example. I doubt that any team in the real world works with such a simple branching structure. However, it does help to understand the advantages and disadvantages of this model.
Firstly, you only have one branch which makes it easy to keep track of the changes in your project. Secondly, commits must be relatively small: you can’t risk big, bloated commits in an environment where things are constantly integrated into production code. As a result, your team’s testing and QA standards must be top notch! If you don’t have a high-quality testing environment, the mainline development approach won’t work for you.
State, Release and Feature branches
Let’s look at the opposite now and how to work with multiple different types of branches. They all have a different job: new features and experimental code are kept in their own branches, releases can be planned and managed in their own branches, and even various states in your development flow can be represented by branches:
Remember that this all depends on your team’s needs and your project’s requirements. While this approach may look complicated at first, it’s all a matter of practice and getting used to it.
Now, let’s explore two main types of branches in more detail: long-running branches and short-lived branches.
Long-running branches
Every Git repository contains at least one long-running branch which is typically called master or main. Of course, your team may have decided to have other long-running branches in a project, for example something like develop, production or staging. All of those branches have one thing in common: they exist during the entire lifetime of a project.
A mainline branch like master or main is one example of a long-running branch. Additionally, there are so-called integration branches, like develop or staging. These branches usually represent states in a project’s release or deployment process. If your code moves through different states in its development life cycle — e.g. from developing to staging to production — it makes sense to mirror this structure in your branches, too.
One last thing about long-running branches: most teams have a rule like “don’t commit directly to a long-running branch.” Instead, commits are usually integrated through a merge or rebase. There are two main reasons for such a convention:
Quality: No untested or unreviewed code should be added to a production environment.
Release bundling and scheduling: You might want to release new code in batches and even schedule the releases in advance.
Next up: short-lived branches, which are usually created for certain purposes and then deleted after the code has been integrated.
Short-lived branches
In contrast to long-running branches, short-lived branches are created for temporary purposes. Once they’ve fulfilled their duty and the code has been integrated into the mainline (or another long-lived branch), they are deleted. There are many different reasons for creating a short-lived branch, e.g. starting to work on a new and experimental feature, fixing a bug, refactoring your code, etc.
Typically, a short-lived branch is based on a long-running branch. Let’s say you start working on a new feature of your software. You might base the new feature on your long-running main branch. After several commits and some tests you decide the work is finished. The new feature can be integrated into the main branch, and after it has been merged or rebased, the feature branch can be deleted.
Two popular branching strategies
In the last section of this article, let’s look at two popular branching strategies: Git Flow and GitHub Flow. While you and your team may decide on something completely different, you can take them as inspiration for your own branching strategy.
Git Flow
One well-known branching strategy is called Git Flow. The main branch always reflects the current production state. There is a second long-running branch, typically called develop. All feature branches start from here and will be merged into develop. Also, it’s the starting point for new releases: developers open a new release branch, work on that, test it, and commit their bug fixes on such a release branch. Once everything works and you’re confident that it’s ready for production, you merge it back into main. As the last step, you add a tag for the release commit on main and delete the release branch.
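With plain Git commands, one Git Flow release cycle might look like this (version numbers and branch names are examples; the git-flow project also ships a helper CLI that wraps these steps):

```shell
# Start a release branch off develop
git checkout -b release/1.2.0 develop

# ...test and commit bug fixes on release/1.2.0...

# Merge into main, tag the release commit, and clean up
git checkout main
git merge --no-ff release/1.2.0
git tag -a 1.2.0 -m "Release 1.2.0"
git branch -d release/1.2.0

# Git Flow also carries the release fixes back into develop
git checkout develop
git merge --no-ff main
```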
Git Flow works pretty well for packaged software like (desktop) applications or libraries, but it seems like overkill for website projects. Here, the difference between the main branch and the release branch is often not big enough to benefit from the distinction.
If you’re using a Git desktop GUI like Tower, you’ll find the possible actions in the interface and won’t have to memorize any new commands:
GitHub Flow
If you and your team follow the continuous delivery approach with short production cycles and frequent releases, I would suggest looking at GitHub Flow.
It’s extremely lean and simple: there is one long-running branch, the default main branch. Anything you’re actively working on has its own separate branch. It doesn’t matter if that’s a feature, a bug fix, or a refactoring.
What’s the “best” Git branching strategy?
If you ask 10 different teams how they’re using Git branches, you’ll probably get 10 different answers. There is no such thing as the “best” branching strategy and no perfect workflow that everyone should adopt. In order to find the best model for you and your team, you should sit down together, analyze your project, talk about your release strategy, and then decide on a branching workflow that supports you in the best possible way.
If you want to dive deeper into advanced Git tools, feel free to check out my (free!) “Advanced Git Kit”: it’s a collection of short videos about topics like branching strategies, Interactive Rebase, Reflog, Submodules and much more.
This is some bonafide CSS trickery from Harry that gives you some generic performance advice based on what it sees in your <head> element.
First, it’s possible to make the <head> visible like any other element by changing the display away from the default of none. It’s a nice little trick. You can even do that for things inside the <head>, for example…
head,
head style,
head script {
display: block;
}
From there, Harry gets very clever with selectors, determining problematic situations from the usage and placement of certain tags. For example, say there is a <script> that comes after some styles…
Well, that’s bad, because the script is blocked by CSS likely unnecessarily. Perhaps some sophisticated performance tooling software could tell you that. But you know what else can? A CSS selector!
head [rel="stylesheet"]:not([media="print"]):not(.ct) ~ script,
head style:not(:empty) ~ script {
}
That’s kinda like saying head link ~ script, but a little fancier in that it only selects actual stylesheets or style blocks that are truly blocking (and not itself). Harry then applies styling and pseudo-content to the blocks so you can use the stylesheet as a visual performance debugging tool.
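To turn that selector into a visible warning, you hang debugging styles off it; something like this (the styling and message here are illustrative, not Harry’s exact ct.css):

```css
/* Flag scripts that sit behind potentially blocking CSS */
head [rel="stylesheet"]:not([media="print"]):not(.ct) ~ script,
head style:not(:empty) ~ script {
  display: block;
  outline: 4px solid red;
}

head [rel="stylesheet"]:not([media="print"]):not(.ct) ~ script::after,
head style:not(:empty) ~ script::after {
  content: "A script after blocking CSS; consider reordering or making it async.";
}
```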
That’s just darn clever, that. The stylesheet has loads of little things you can test for, like attributes you don’t need, blocking resources, and elements that are out of order.
Let’s assume that you are running a national food chain business. You just launched a new line of organic foods and you want to update your eCommerce marketing strategy accordingly.
When you think of “Organic food” you must have a specific audience in mind. You must have made assumptions about their age, ethnicity, and location. For the most part, these assumptions might be true.
But, truth be told, if you want to build a better eCommerce strategy, decisions should not be made based on assumptions.
With such fierce competition, extensive market research is indispensable. For businesses of all sizes, market analysis and research is the key to success. Unfortunately, most of the tools available for that purpose are quite expensive, complicated, and time-consuming. To figure out who is looking for your product and what regions to tackle, the best tool available is Google Trends.
What is Google Trends Anyway?
Google Trends is a powerful tool developed by Google. It provides insight into which search queries people enter and how they use search engines around the world. When you know what people are searching for, you can understand how and what they are thinking.
Google Trends allows users to track the popularity of search terms over time, look for related phrases, and compare how popular a search term is in different parts of the world. Google Trends can also be used to spot spikes in search volumes of certain keywords, which are often caused by real-world events.
In this article, we will discuss in detail how Google Trends can help you improve your eCommerce marketing. So let’s deep dive into the details of the topic.
1. Utilizing Google Trends to find Hot Topics in your Industry
The world now moves at such a pace that each day there is something new to talk about. Trends change within a matter of seconds, let alone days or months. Therefore, it is important that you are well aware of every ongoing trend in your industry. This is where Google Trends plays its part. You can easily find popular topics through Google Trends. Here is how:
First, go to Google Trends.
Step 1: Click on Explore from the Menu
Step 2: Select the Location you want to analyze. You can search for a whole country, state, or city.
Step 3: Choose the time interval you want. Depending on your requirements you can analyze topics that are currently trending or complete data of previous months or even a year.
Step 4: Select your category or industry. For example, if you work in marketing, select marketing; if you work in food, select food; and so on.
Step 5: Explore and analyze different search topics and queries related to your niche. You will find all sorts of data. You will have to do some work to filter out the data that is useful for your eCommerce business.
2. Set Realistic Traffic and Revenue Goals
Many organizations set their quarterly sales goals for the coming year during annual planning. Time after time, companies badly miss their targets in one quarter, then knock the ball out of the park in the next. This happens when goals are either evenly averaged over four quarters or set before you have a good understanding of seasonal spending patterns.
The key is to set realistic targets that are achievable rather than aiming for something that is not feasible. Prior to setting targets, use Google Trends to tabulate seasonal data of your top-selling products. This method is better suited to eCommerce businesses that are more focused on bootstrapping their top revenue-generating product without having to include a large number of search phrases. (However, the more search phrases you include, the better you’ll be able to confirm what the data is telling you.)
For stores that sell a large variety of products throughout the year, it will be hard to use Google Trends to get a clear picture of seasonal trends at a higher level. The hack here is to use keywords that only an active shopper at your store would use. This way you can get enough data to analyze the seasonal performance.
3. Sync your Calendar with Google Trends
An editorial calendar is as important as any other aspect of eCommerce strategy, if not more. An editorial calendar lets you constructively manage your content by managing the timeline.
The seasonality of product searches emphasizes something that should be obvious: people have varied demands at different times of the year. They research different pain points, evaluate different solutions, try different how-tos, and plan their approach to different occasions and holidays. They also have diverse emotions and motivations.
Search Google Trends for the themes you frequently cover to see when you should focus your content on them for the best SEO results. One technique you might try is to create evergreen pages but refresh the material prior to each new seasonal spike; this ensures that your page maintains its SEO equity while also providing up-to-date, pertinent content to site visitors. You can also look for patterns relating to upcoming events and holidays to get ideas for new material. To come up with topic ideas, try different queries and keywords in connection with the holiday.
4. Understanding Your Audience
As the most popular search engine, Google has evolved into more of an institution than a simple tool. As a result, its search data is extremely reflective of public sentiment and interests. You may take advantage of this by learning about public opinion in your sector.
Use Google Trends to see how your industry’s perception has changed over time and where it stands currently.
Get started with Google Trends by learning the basics. In the search box above, type some key phrases that you believe indicate a difference in opinion and understanding of your sector. As shown in the graph below.
This graph will show you the search trends over a certain time interval for specific phrases compared to each other for a particular industry.
5. Using Google Trends to find Paying Niches
Google Trends is an excellent resource for locating a rapidly growing market. You should modify your range from “past 12 months” to “2021-present” whenever you’re looking for a new niche. This allows you to monitor whether the volume of searches is increasing or decreasing. However, it also allows you to see seasonal trends in a single, clear image.
Consider the example of a skyrocketing product discussed below: Posture Corrector
You can easily observe clear growth during the last several months. We noticed a sharp increase in January, followed by a minor decrease in February. However, this does not rule out the possibility of profiting from sales. As a result, this popular product will need to be watched for a little longer.
On the other hand, if you analyze a more stable niche, such as men’s fashion, you will observe that:
There are tiny dips in the graph, which can be seen clearly. The search traffic for this specialty, on the other hand, is quite consistent. It’s typical to witness some little dips or increases over the course of several years. However, Google Trends indicates that men’s fashion is a somewhat steady niche. You might be wondering what the peaks and valleys on the graph mean. These demonstrate the seasonal trends in search volume. There is an increase in searches from October through December, with a fall beginning in January. That isn’t to say you shouldn’t open a men’s fashion store in January; it just means you’ll get less website traffic at that time of year.
Now let’s consider a third example. What does a fading trend look like on Google Trends? Let’s consider the graph below for the key phrase fidget spinner.
Until February 2017, there were almost no searches for “fidget spinners.” The product peaked three months later, in May. It’s apparent that in the first few months, there was a significant surge in attention. However, the steep drop since the peak indicates that this is no longer a viable commercial venture to pursue.
6. Thorough keyword research with Google Trends
Assume you run a business that sells women’s blouses. According to Google Trends, searches for this are increasing, which is a good sign. However, you must now determine which keywords to target, how to name your product categories, and how to optimize a blog article about women’s blouses. Take a brief glance at “Related questions,” which is located on the right side of the “Related Topics” section we just discussed.
There is a constant callout to color across all the queries. There are two listings for the color black in the chart above. White, blue, pink, and green show up in other spots. For these cases, you would want to construct a color-based product category, such as “black blouses.” However, you may also use such keywords in the title of your product and on the product page. “Women’s shirts” or “women’s blouses” might potentially be added as a product category because they have a lot of searches and make sense for this type of clothing.
7. Google Trends and YouTube Trends
While Google Trends is most commonly used to improve the online standing of your eCommerce store, you can also utilize it to expand your social media reach, particularly on YouTube. After searching YouTube for “fashion” videos, we discovered that the most popular ones used the phrase “fashion trends 2019.” So, let’s see what we get if we plug that into Google Trends.
The popularity of this search keyword increased in January. Naturally, the addition of the year to the keyword will make it a hot search term at the start of the year. We observed something intriguing after returning to YouTube and searching for “fashion trends 2019” in the search bar. Take a look at the following:
Each of the top videos was released in the year 2019. What makes this intriguing? Because vloggers (and bloggers) frequently post content before the new year to gain a jump start on visitors. However, we see that when content is produced in 2019, it scores fairly well in the top results.
So, if you’re planning to make a video about fashion trends in 2019, your best bet is to release it in January to take advantage of the data from Google Trends.
Let’s take a step back for a moment. Because we won’t know what the data for the remainder of 2019 will be, let’s look at the data for “fashion trends 2018” to see what we may expect in the coming year.
As a result, we can see that at the end of 2017, YouTube users began searching for the term “fashion trends 2018.” Then there’s that January increase we noticed in our 2019 graph. However, there are higher rises in March and September, immediately before summer and right before winter, respectively.
How can you make the most of the traffic during those times? If you have an email list, you may send an email in March and September to re-invigorate your video’s popularity. When Google notices that you’re pushing older video content, it’ll likely reward you with a higher ranking for your video, allowing you to earn more views. This method can also be used if you see that views on popular evergreen videos are dwindling.
Final Thoughts on Using Google Trends
Experimenting with Google Trends is the best way to fully utilize its capabilities. Experiment with various keywords, themes, and comparisons. Use a variety of timelines, categories, and locations. Keep an eye on the most popular searches. Get curious. The more you look, the more vital keyword information you’ll find. Make use of this knowledge to ensure that your website responds to whatever your audience is thinking about.