Archive


How to Create Forms in WordPress 6.3 Using the Jotform Plugin

October 27th, 2023

WordPress and Jotform help simplify website form creation and management. This tutorial shows how to use the Jotform plugin to add Jotforms to WordPress.

Jotform, a popular online form builder, makes it easy to construct everything from contact forms to surveys and registrations. Jotform can improve user engagement, data collection, and user experience by integrating with WordPress.

Sign up for Jotform

To use Jotform on your WordPress website, you first need a Jotform account. Follow these steps to create one:

  • Visit Jotform’s website.
  • Click on the “Sign Up” button located in the top right corner.
  • Fill out the registration form with your name, email address, and password.
  • After completing the registration, click “Create My Account.”

Once you’ve signed up, you can use Jotform’s form-building platform to create and modify forms for your website.

Install the Jotform Plugin on Your Site

To integrate Jotform with your WordPress website, you need to install the Jotform Online Forms plugin. Here’s how:

  • Open your WordPress Dashboard.
  • Navigate to the “Plugins” section in the sidebar and click on “Add New.”
  • In the search field, type “Jotform Online Forms” and press Enter.
  • When the plugin appears in the search results, click the “Install Now” button.
  • After the installation is complete, click the “Activate” button to activate the Jotform plugin.

Now that the Jotform plugin is installed and activated, you can create and embed forms on your WordPress website.

Create a New Form

With Jotform linked to your WordPress website, you can begin building forms. To create a new form, follow these steps:

  • Using the login information you provided at registration, access your Jotform account.
  • Click the “Create Form” button in your Jotform dashboard, then choose “Use Template.”
  • Browse for a template that suits your form. In this example, we’ll use a “Contact Us” template.
  • To make sure the chosen template satisfies your needs, you can preview it.
  • Alternatively, you can begin with a blank template if you would rather start from scratch and design a form with unique fields and layout.

With Jotform’s intuitive drag-and-drop interface, you can quickly and simply adjust the fields and look of your form.

Embed the Form on a Page or Post

After creating your Jotform form, you can embed it in a WordPress page or post. Thanks to WordPress’s block-based editor, Jotform forms are easy to add to pages and posts.

WordPress 6.3 uses blocks for content and images. Blocks organize text, graphics, and forms, making content arrangement more natural and versatile.

Method 1: Include via Classic Editor Block

  • Open the page or post where you want to include Jotform.
  • In the content editor, type /classic where you want to add the form.
  • Select the “Classic” block from the available blocks.
  • Within the Classic block, you’ll find the Jotform icon; click on it.
  • You’ll be prompted to log in to your Jotform account. After logging in, select the form you created earlier.
  • Save the Classic block, and then preview the page. Your form should now be displayed on the page.

Method 2: Include via Shortcode Block

WordPress shortcodes are small snippets of code that let you embed features from plugins directly into your pages and posts. In this case, the Jotform shortcode will display your form.

  • On Jotform.com, open the form you want to embed.
  • Click on the “Publish” tab within the form builder and note your form’s ID (the long number in the form’s URL).
  • Go back to your WordPress page or post.
  • Create a new Shortcode block by typing /shortcode in the content editor.
  • Insert the following code into the Shortcode block, replacing YOUR_FORM_ID with the actual ID of your form:

[jotform id="YOUR_FORM_ID" title="Simple Contact Us Form"]


Using either the Classic block or the Shortcode block, you can quickly add Jotform forms to your WordPress content.

Choose a High-Quality WordPress Theme to Showcase Your Forms

The theme you choose plays a big role in your website’s usability and in how well your Jotform forms blend with the rest of your site. A well-designed theme improves the user experience and gives your forms a more polished appearance.

Consider features like style, responsiveness, customization options, and Jotform plugin compatibility when selecting a premium WordPress theme for your website.

You can browse a selection of premium themes at The Bootstrap Themes. Make sure the theme you select complements the design and objectives of your website.

Conclusion

By following this step-by-step tutorial, you now know how to use the Jotform plugin to add Jotform forms to your WordPress website. This combination makes it simple to create, customize, and embed forms, improving both the functionality and the user experience of your site. With these steps, you can gather data effectively, engage your audience, and streamline many of your website’s processes.

It’s also important to select a high-quality WordPress theme that complements your Jotform forms so that your website looks unified and professional. With these tools at your disposal, you can make the most of Jotform’s capabilities and improve your WordPress website. Start building and embedding forms today to enhance your site’s functionality.

Featured image by Jotform on Unsplash

The post How to Create Forms in WordPress 6.3 Using the Jotform Plugin appeared first on noupe.

Categories: Others

From Image Adjustments to AI: Photoshop Through the Years

October 27th, 2023

Remember when Merriam-Webster added Photoshop to the dictionary back in 2008? Want to learn how AI is changing design forever? Join us as we delve into the history of Photoshop, from its early beginnings right through to the dawn of artificial intelligence.

Categories: Designing, Others

Reeling Them Back: Retargeting Ads That Convert on Facebook

October 26th, 2023

Ever wondered how some ads seem to follow you around online? That’s Facebook retargeting at work! It’s a smart way to grab the attention of people who’ve already checked out your products. In the world of digital marketing, where standing out is a challenge, retargeting is like giving potential customers a friendly nudge, reminding them about your awesome products or services. We’ll dive into the secrets of making retargeting ads work like a charm on Facebook. From eye-catching pictures to words that make you want to click, we’ll explore how to get people excited about your brand again. Let’s roll up our sleeves and make those ads pop!

The Power of Facebook Retargeting

Imagine a digital strategy that consistently drives higher conversion rates, leading potential customers back to your offerings. That’s the essence of Facebook retargeting – a method that personalizes the customer journey and yields remarkable outcomes.

The data speaks for itself. When comparing retargeting to prospecting, the difference in conversion rates (CRs) is stark. Retargeting campaigns shine with a median CR of 3.8%, effortlessly outshining prospecting’s 1.5%. These data underscore the prowess of retargeting.

Diving deeper, a more detailed analysis highlights an intriguing discrepancy in retargeting CRs between the United States and other parts of the world. This nuance emphasizes the adaptability and potential of retargeting on a global scale.

Segmenting Audience for Precision and Clarity

A really important aspect of effective Facebook retargeting lies in audience segmentation. By distinctly separating your prospecting and retargeting audience, you gain a clearer understanding of performance metrics and pave the way for more efficient cost management.

Here’s the rationale: Retargeting and prospecting serve different purposes and inherently target distinct audiences. Retargeting focuses on individuals who’ve already engaged with your brand, nudging them along the path to conversion. Prospecting, on the other hand, casts a wider net, introducing your brand to potential customers who might not yet be familiar with it.

Retargeting vs. Prospecting metrics

Now let’s talk numbers. It’s a known fact that retargeting ads generally come with a higher CPM (cost per mille) compared to prospecting ads. The reason behind this is the audience size. Retargeting audiences are naturally smaller since they comprise individuals who’ve interacted with your brand before. This smaller pool leads to a higher CPM for retargeting ads.

When you combine these two audiences in your metrics, you’re essentially mixing different dynamics. This can lead to skewed insights and an inaccurate representation of your campaign’s true performance. If retargeting and prospecting metrics are combined, the overall CPM may appear inflated due to the presence of higher-cost retargeting ads, which can mask the cost-effectiveness of your prospecting efforts.
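To see why blended reporting misleads, here is a quick sketch of the arithmetic. The figures are invented purely for illustration; in practice you would plug in your own spend and impression numbers.

// A minimal sketch with made-up numbers, just to show the effect of blending.
const prospecting = { spend: 500, impressions: 100000 }; // large, cheap audience
const retargeting = { spend: 150, impressions: 10000 };  // small, pricier audience

const cpm = ({ spend, impressions }) => (spend / impressions) * 1000;

console.log(cpm(prospecting)); // 5  (dollars per 1,000 impressions)
console.log(cpm(retargeting)); // 15

// Blending both into a single metric hides how cheap prospecting really is:
const blended = cpm({
  spend: prospecting.spend + retargeting.spend,
  impressions: prospecting.impressions + retargeting.impressions,
});
console.log(blended.toFixed(2)); // "5.91", noticeably higher than prospecting alone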

Creating Compelling Ad Components

When it comes to creating retargeting ads on Facebook, the art lies in combining compelling elements that engage, entice, and resonate with your audience. Let’s dive deeper into the core components that can turn a casual viewer into a converted customer.

  1. Captivating Visuals

The role of a retargeting ad is to stop the scroll and make users pause for a second glance. This is where the power of eye-catching visuals comes into play.

Consider visuals that are not just aesthetically pleasing but also encapsulate your brand’s essence. Whether it’s vibrant product images or lifestyle shots that evoke emotion, visuals should tell a story that resonates with your audience. To stand out, aim for high-quality images or videos that are well-lit, well-composed, and aligned with your brand’s visual identity.

  2. Irresistible CTAs (Call to Action)

An effective retargeting ad relies on a well-defined Call to Action (CTA) that guides customers toward the desired action. The CTA serves as a clear direction, steering customers through their journey. It’s essential that the CTA is succinct, compelling, and in harmony with the customer’s path.

Effective CTAs create a sense of urgency or offer tangible value. Consider “Limited Time Offer – Shop Now!” or “Unlock 20% Off – Get Yours Today!” Always keep the customer’s benefit in mind when creating your CTA – it’s the final nudge that propels them toward conversion.

  3. Highlighting Value Propositions

Your retargeting ad is a chance to showcase what makes your brand or product unique. Highlight key benefits and value propositions that set you apart from the competition. Whether it’s quality, affordability, or a specific feature, make it crystal clear why choosing your brand is the right decision.

For instance, “Experience Unmatched Sound Quality” or “Transform Your Cooking with Chef-Grade Knives” communicates the value your product offers in a succinct manner.

Leveraging Pricing Details for Effective Retargeting

When we talk about retargeting ads, how you show prices can be a strong tactic. But just like any strategy, there are trade-offs to think about. Let’s look at using pricing info in retargeting ads – the good things it does and the possible not-so-good things.

The Pros and Cons of Pricing Details

Including pricing details in your retargeting ads can be a double-edged sword. On one hand, it offers transparency, setting clear expectations for potential customers. Seeing the price upfront eliminates ambiguity and ensures that those who engage further are genuinely interested.

However, there’s a potential downside. Displaying pricing information could lead some users to make swift judgments based solely on cost. If your product or service is positioned as a premium offering with a higher price point, those who focus solely on price might miss out on the value and benefits your brand provides.

Strategic Application of Pricing Information

So, when should you deploy pricing details to attract potential customers? Here’s where understanding your audience’s journey comes into play. If your data reveals that users who engaged with your brand are particularly price-sensitive, mentioning a discount or showcasing a competitive price could be a smart move.

Our data points to an interesting trend – the absence of a discount in retargeting ads can sometimes yield negative consequences. Users who have interacted with your brand previously might be expecting a little extra incentive, and the absence of one could lead to disengagement.

Getting the Timing Right: Ad Frequency and Engagement

Timing is everything, especially in the world of retargeting ads. Let’s break down the concept of ad frequency and how it can affect how people engage with your ads.

Understanding Ad Frequency

Ad frequency is how often someone sees your retargeting ad. It’s like how many times you hear your favorite song on the radio – too much, and you might get tired of it. The same goes for ads. If someone keeps seeing your ad again and again, it can start feeling a bit overwhelming.

Striking the Right Balance

Finding the sweet spot for ad frequency is key. You want to remind people about your brand without becoming a digital pest. The goal is to avoid something called “ad fatigue,” where users get so used to your ad that they start ignoring it – not what we want.

So, how do you strike that balance? Well, it depends on your audience and your goals. Generally, showing your retargeting ad a few times over a specific period can work well. It’s like saying, “Hey, we’re still here,” without saying it too many times.

Remember, timing matters too. Showing your ad at the right moments can have a bigger impact. For instance, if someone abandons their cart, showing them a reminder shortly after can be more effective than waiting too long.

Retargeting ads: A/B Testing and Optimization

Now, let’s delve into a powerful method to make your retargeting ads even better – A/B testing. It’s like trying out different options to see which one works best. A/B testing lets you experiment with various parts of your ads to find out what makes people more interested.

A/B testing is like running experiments to improve your ads. Instead of guessing, you’re using real tests to see what gets better results. It’s similar to trying different ways of doing something to find the most effective one.

What You Can Test

Let’s break down what you can test. First, visuals – the images or videos in your ads. Change them to see which ones catch more attention. Next, CTAs – the buttons that tell people what to do. Try different words to see which ones make more people click.

Messaging is another part – the words you use in your ad. Test different messages to see what resonates better with your audience. Lastly, pricing – experiment with different prices or discounts to see what encourages more people to make a purchase.

How to Test

Testing is simple. Create two versions of your ad: one with the change you want to test (Version A) and one without the change (Version B). Then, show these versions to different people and see which one gets a better response.

A/B testing helps you find the best formula for your ads. By trying out different approaches, you’ll discover what works best for your audience.
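If you want to compare the two versions with numbers rather than gut feeling, a tiny script can do the arithmetic. This is only a sketch; the variant data below is invented, and in practice you would pull it from your ad reports.

// A minimal sketch comparing two ad variants; all numbers are hypothetical.
const variants = {
  A: { impressions: 12000, clicks: 420, conversions: 57 },
  B: { impressions: 11800, clicks: 390, conversions: 71 },
};

for (const [name, v] of Object.entries(variants)) {
  const ctr = (v.clicks / v.impressions) * 100; // click-through rate
  const cr = (v.conversions / v.clicks) * 100;  // conversion rate
  console.log(`${name}: CTR ${ctr.toFixed(2)}%, CR ${cr.toFixed(2)}%`);
}
// A: CTR 3.50%, CR 13.57%
// B: CTR 3.31%, CR 18.21% (Version B converts better despite fewer clicks)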

Summing up

Facebook retargeting is your way of reconnecting with potential customers who’ve already shown interest in your brand. By creating compelling ads with eye-catching visuals, clear calls to action, personalized messages, and emphasizing value, you engage your audience on their terms. Tracking performance and employing A/B testing further enhance your strategy. Remember, understanding your audience, monitoring performance, and continual improvement are key to effective retargeting. By combining these elements, you can confidently guide your retargeting efforts, leading to more conversions and stronger customer relationships.

Featured image by Greg Bulla on Unsplash

The post Reeling Them Back: Retargeting Ads That Convert on Facebook appeared first on noupe.

Categories: Others

Identity Verification Unveiled: 6 Must-Know Trends In 2023

October 25th, 2023

Verifying your identity has never been more critical, whether you’re accessing your bank account, logging in to your email, or making an online purchase. Heading into 2023, identity verification continues to evolve, with new technologies and techniques moving to the foreground.

This article describes six trends expected to reshape identity verification in 2023, from the conveniences of digital life, such as biometric integration and identity fusion, to the growing significance of artificial intelligence.

These advancements will help protect identities online, provided companies keep pace with them so that people can browse safely. With that in mind, let’s explore the rapidly expanding and ever-changing field of identity verification.

1. Decentralized Identity and Self-Sovereign Identity (SSI)


In 2023, self-sovereign identity (SSI), also called decentralized identity, gained real momentum. It gives people more control over how their data is shared and used. Here’s what you need to know:

Blockchain as a Trust Anchor

Blockchain technology and decentralized identifiers provide an immutable record system for tracking and verifying identities. Because there is no centralized authority or arbiter, identity verifications become transparent and tamper-evident.

User-Centric Identity

By giving users control, SSI flips the script on conventional identity verification. With SSI, people may save and selectively share their identity data on their devices, lowering the risk of data breaches and identity theft. This pattern coincides with rising worries about data privacy and the need for more control over individual information.

2. Two-Factor Authentication


The ongoing fight against identity theft relies on tools such as two-factor (2FA) or multi-factor authentication (MFA). The customer enters a code emailed or sent to their mobile phone. Customers recognize this verification method easily and understand how to use it.

With 2FA or MFA, you can verify a customer’s email address and phone number in minutes, a vital check that the data your customers entered is correct.

When employing two-factor or multi-factor authentication, users must provide an additional proof of identity beyond the standard username and password. This token requirement is a strong fraud deterrent because the user must physically possess or know the token, such as a code received from the authentication service provider.
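As a rough illustration of how such a code might be issued and checked server-side, here is a minimal sketch. The in-memory storage, the five-minute expiry, and the delivery step are assumptions for illustration, not a production recipe.

// A minimal sketch of a one-time-code flow on a Node.js backend.
const { randomInt } = require("node:crypto");

const pendingCodes = new Map(); // userId -> { code, expiresAt }

function issueCode(userId) {
  const code = String(randomInt(0, 1000000)).padStart(6, "0"); // e.g. "048291"
  pendingCodes.set(userId, { code, expiresAt: Date.now() + 5 * 60 * 1000 });
  return code; // hand this off to your email/SMS provider (not shown here)
}

function verifyCode(userId, submitted) {
  const entry = pendingCodes.get(userId);
  if (!entry || Date.now() > entry.expiresAt) return false; // expired or never issued
  const matches = entry.code === submitted;
  if (matches) pendingCodes.delete(userId); // codes are single-use
  return matches;
}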

3. Knowledge-Based Authentication

Knowledge-based authentication (KBA) confirms a user’s identity with security questions built on personal history. These questions are usually easy for the legitimate user to answer yet difficult for anyone else, for example, “Who was your favorite teacher?” or “How many pets do you have?”

Some implementations also require answers within a set time limit. KBA is one of the most practical forms of verification, but it has a drawback: answers can often be found quickly on social media, and social engineering can uncover them through more indirect means.

4. AI and Machine Learning for Enhanced Verification

AI and machine learning have made identity verification more targeted and efficient. Here’s how these technologies are shaping the landscape:

Enhanced Document Verification

AI-driven document-checking tools can detect at once whether a document, such as a passport, license, or utility bill, has been forged. Using these tools reduces the risk of fraud stemming from false documents.

Advanced Fraud Detection

AI-driven fraud detection systems continually learn new fraud patterns. Anomalies are uncovered, flagged, and stopped in real time as they occur.

Improved User Experience

The user experience is also being streamlined using AI and ML. They can determine a user’s legitimacy based on their actions and historical data, eliminating the need for onerous verification procedures.

5. Database Methods

Database ID methods use data from various sources to verify a person’s identity. They are frequently used to assess a user’s risk level because they significantly reduce the need for manual reviews.

6. Regulatory Compliance and KYC (Know Your Customer) Evolution


Regulatory compliance is still driving identity verification trends. To keep up with technological improvements, KYC standards are changing:

Digital Identity Ecosystems

Digital identity ecosystems are developing quickly. These are networks built to guarantee privacy, safety, and continuity in proving one’s identity online, and they include biometrics, digital ID cards, electronic identity proofing, and blockchain-based solutions.

Global Regulatory Harmonization

As cross-border transactions increase, so does the need for globally harmonized KYC standards. Organizations are therefore adopting standardized procedures to comply with multiple jurisdictions.

Bottom Line

As the digital landscape keeps changing through 2023, identity verification remains one of the most essential elements of online security and a good user experience. The key forces shaping it are biometric authentication, decentralized identity, innovations in AI and ML, regulatory compliance, zero-trust security models, and multi-factor authentication.

Businesses and individuals alike will need to keep pace with these innovations to keep their online interactions smooth and safe. Together, these advances promise a safer and more trustworthy digital environment that benefits us all.

Featured image by Towfiqu barbhuiya on Unsplash

The post Identity Verification Unveiled: 6 Must-Know Trends In 2023 appeared first on noupe.

Categories: Others

Best AI Tools That Help You in Making Your Content More Unique

October 25th, 2023

In times like these, standing out from the crowd and grabbing your audience’s attention through unique content is essential. 

Fortunately, the introduction of Artificial Intelligence (AI) has completely transformed the content creation field.

This article explores five AI tools that have revolutionized the way unique content is created. These tools empower writers, marketers, business owners, and students to add their own personal touch and genuine feel to their work. The tools listed include content creation platforms, paraphrasing and editing assistants, and SEO helpers.

From AI-powered content generation to using advanced paraphrasing techniques to make your content unique, AI tools provide many possibilities for those who want their content to be impressive.

So, start reading the article to explore the world of AI-driven creativity. 

Scalenut.com

Scalenut is your one-stop solution for all your content needs, from generating ideas to optimizing for SEO.

You can use Scalenut to create high-quality content for various formats, from blog posts and articles to social media posts and product descriptions. 

Using it for content creation can save you time and effort. You can focus on other business areas while your content is being created.

By creating detailed content briefs, Scalenut will help you create well-structured and informative content. It will also suggest ways to improve your content’s structure and the readability of your writing.

Using Scalenut to generate detailed content briefs, you can get feedback on how your writing is coming out, as well as suggestions on improving the grammar, style, and clarity of your writing.

So, it is a great tool for crafting supreme content. Whether you are a small startup or a large enterprise, Scalenut caters to businesses of all sizes. 

Rephraser.co

Rephraser.co is an absolute game-changer for content writers. This AI rephrasing tool has the incredible ability to generate a different version of your content, all while preserving the original meaning. 

This means you can effortlessly create different versions of your articles or blog posts for various platforms or audiences.

But that’s not all. This rephraser online tool also plays a crucial role in helping you avoid plagiarism and make your content unique.

It uses state-of-the-art algorithms to swap in different words and sentence structures, making sure your content isn’t a copy of any existing content.

While ensuring your text is unique, rephraser.co tries to keep the key concepts and ideas inside your content so your message remains consistent and cohesive.

Additionally, this rephrasing tool makes your text easier to read, whether you are writing for a wide audience or for someone who does not speak English fluently. 

To meet your diverse needs, it provides six distinct rephrasing modes. These modes, namely Creative, Anti-Plagiarism, Fluency, Academic, Blog, and Formal, offer you the flexibility to choose the most suitable approach. 

By utilizing any of these modes, you can rephrase text to make it both plagiarism-free and captivating. 

Most importantly, it creates content resembling human writing without requiring manual composition. With this remarkable tool at your disposal, you can effortlessly produce authentic, original content free from any traces of plagiarism.

Hemingway Editor

The Hemingway Editor is a useful AI tool for enhancing the uniqueness, readability, and accessibility of your content. It effectively highlights adverbs, passive voice, and complex sentences, allowing you to identify and remove these elements for a more concise and readable writing style.

By eliminating adverbs and passive voice with the help of the Hemingway Editor, your writing becomes more engaging and distinctive. 

Moreover, the tool assists in identifying and replacing complex words and phrases with simpler alternatives, enhancing accessibility and uniqueness.

It also analyzes the readability of your content and provides a score, helping you identify and address any areas where your writing may be difficult to comprehend.

The Hemingway Editor’s readability score is valuable for pinpointing areas in your content that may require improvement. 

Therefore, it is an invaluable resource for anyone seeking to enhance the uniqueness, readability, and accessibility of their content. 

Grammarly

Are you tired of submitting content that is riddled with grammar, spelling, and punctuation errors? 

Do you want to make your writing more engaging and professional? Look no further than Grammarly!

Grammarly is a powerful AI tool that can help you identify and correct errors in your writing. Not only that, but it can also suggest improvements to your writing style, making it more concise and effective. 

With Grammarly, you can ensure that your content is original and unique, avoiding any accusations of plagiarism.

It is easy to use, making it accessible to anyone who wants to improve their writing. 

Grammarly provides synonyms and alternative words to enhance the language choices in your content, making it more distinctive and captivating.

The tool suggests adjustments to match the tone and style of your content with your target audience, enabling you to personalize your writing with a distinct and individual voice.

By promoting clear and concise writing, the tool assists in effectively conveying ideas, setting your content apart for its straightforwardness.

Yoast SEO

Last but not least, Yoast SEO is a WordPress plugin that can greatly enhance your content’s search engine optimization (SEO) and make it unique.

One of the key benefits of Yoast SEO is its ability to assist you in creating compelling title tags and meta descriptions for your pages and posts. 

These title tags catch the eye in search results, while meta descriptions provide a concise summary below them. 

Yoast SEO ensures that your title tags and meta descriptions are the appropriate length and contain the most relevant keywords.

In addition, Yoast SEO aids you in effectively incorporating keywords into your content. It offers suggestions for relevant keywords and phrases to include in your title tags, meta descriptions, and content. 

By doing so, it helps you avoid the detrimental practice of keyword stuffing, which can negatively impact your website’s search result rankings.

It assists you in structuring your content in a search engine-friendly manner.

Up to You

Now you have a better idea of how to use the tools mentioned above to craft unique content. We hope this article has provided you with valuable insights.

So why delay any further? 

Incorporate these AI tools into your arsenal and produce exceptional content that captivates your intended audience.

The post Best AI Tools That Help You in Making Your Content More Unique appeared first on noupe.

Categories: Others

The Fight For The Main Thread

October 24th, 2023

This article is sponsored by SpeedCurve

Performance work is one of those things, as they say, that ought to happen in development. You know, have a plan for it and write code that’s mindful about adding extra weight to the page.

But not everything about performance happens directly at the code level, right? I’d say many — if not most — sites and apps rely on some number of third-party scripts where we might not have any influence over the code. Analytics is a good example. Writing a hand-spun analytics tracking dashboard isn’t what my clients really want to pay me for, so I’ll drop in the ol’ Google Analytics script and maybe never think of it again.

That’s one example and a common one at that. But what’s also common is managing multiple third-party scripts on a single page. One of my clients is big into user tracking, so in addition to a script for analytics, they’re also running third-party scripts for heatmaps, cart abandonments, and personalized recommendations — typical e-commerce stuff. All of that is dumped on any given page in one fell swoop courtesy of Google Tag Manager (GTM), which allows us to deploy and run scripts without having to go through the pain of re-deploying the entire site.

As a result, adding and executing scripts is a fairly trivial task. It is so effortless, in fact, that even non-developers on the team have contributed their own fair share of scripts, many of which I have no clue what they do. The boss wants something, and it’s going to happen one way or another, and GTM facilitates that work without friction between teams.

All of this adds up to what I often hear described as a “fight for the main thread.” That’s when I started hearing more performance-related jargon, like web workers, Core Web Vitals, deferring scripts, and using pre-connect, among others. But what I’ve started learning is that these technical terms for performance make up an arsenal of tools to combat performance bottlenecks.

The real fight, it seems, is evaluating our needs as developers and stakeholders against a user’s needs, namely, the need for a fast and frictionless page load.

Fighting For The Main Thread

We’re talking about performance in the context of JavaScript, but there are lots of things that happen during a page load. The HTML is parsed. Same deal with CSS. Elements are rendered. JavaScript is loaded, and scripts are executed.

All of this happens on the main thread. I’ve heard the main thread described as a highway that gets cars from Point A to Point B; the more cars that are added to the road, the more crowded it gets and the more time it takes for cars to complete their trip. That’s accurate, I think, but we can take it a little further because this particular highway has just one lane, and it only goes in one direction. My mind thinks of San Francisco’s Lombard Street, a twisty one-way path of a tourist trap on a steep decline.

The main thread may not be that curvy, but you get the point: there’s only one way to go, and everything that enters it must go through it.

JavaScript operates in much the same way. It’s “single-threaded,” which is how we get the one-way street comparison. I like how Brian Barbour explains it:

“This means it has one call stack and one memory heap. As expected, it executes code in order and must finish executing a piece of code before moving on to the next. It’s synchronous, but at times that can be harmful. For example, if a function takes a while to execute or has to wait on something, it freezes everything up in the meantime.”

— Brian Barbour

So, there we have it: a fight for the main thread. Each resource on a page is a contender vying for a spot on the thread and wants to run first. If one contender takes its sweet time doing its job, then the contenders behind it in line just have to wait.

Monitoring The Main Thread

If you’re like me, I immediately reach for DevTools and open the Lighthouse tab when I need to look into a site’s performance. It covers a lot of ground, like reporting stats about a page’s load time that include Time to First Byte (TTFB), First Contentful Paint (FCP), Largest Contentful Paint (LCP), Cumulative Layout Shift (CLS), and so on.

I love this stuff! But I also am scared to death of it. I mean, this is stuff for back-end engineers, right? A measly front-end designer like me can be blissfully ignorant of all this mumbo-jumbo.

Meh, untrue. Like accessibility, performance is everyone’s job because everyone’s work contributes to it. Even the choice to use a particular CSS framework influences performance.

Total Blocking Time

One thing I know would be more helpful than a set of Core Web Vitals scores from Lighthouse is knowing how much of the time between the First Contentful Paint (FCP) and the Time to Interactive (TTI) the main thread spends blocked, a metric known as the Total Blocking Time (TBT). You can see that Lighthouse does indeed provide that metric. Let’s look at it for a site that’s much “heavier” than Smashing Magazine.

There we go. The problem with the Lighthouse report, though, is that I have no idea what is causing that TBT. We can get a better view if we run the same test in another service, like SpeedCurve, which digs deeper into the metric. We can expand the metric to glean insights into what exactly is causing traffic on the main thread.

That’s a nice big view and is a good illustration of TBT’s impact on page speed. The user is forced to wait a whopping 4.1 seconds between the time the first significant piece of content loads and the time the page becomes interactive. That’s a lifetime in web seconds, particularly considering that this test is based on a desktop experience on a high-speed connection.
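To make the metric a little more concrete, here is a sketch of how TBT accumulates from individual long tasks. The task durations are made up, but the 50ms threshold is how the metric is defined.

// Hypothetical long tasks (in ms) between FCP and TTI. TBT counts only the
// portion of each task beyond the 50ms threshold.
const longTasks = [250, 90, 35, 120];

const tbt = longTasks.reduce((total, duration) => total + Math.max(0, duration - 50), 0);

console.log(tbt); // 200 + 40 + 0 + 70 = 310ms of blocking time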

One of my favorite charts in SpeedCurve is this one showing the distribution of Core Web Vitals metrics during render. You can see the delta between contentful paints and interaction!

Spotting Long Tasks

What I really want to see is JavaScript, which takes more than 50ms to run. These are called long tasks, and they contribute the most strain on the main thread. If I scroll down further into the report, all of the long tasks are highlighted in red.
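If you would rather catch long tasks in the browser yourself instead of reading them off a report, the Long Tasks API can surface them. This is a minimal sketch, and it only works in browsers that support the API (Chromium-based ones at the time of writing).

// Log any main-thread task longer than 50ms as it happens.
const longTaskObserver = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.log(`Long task: ${Math.round(entry.duration)}ms`, entry.attribution);
  }
});

longTaskObserver.observe({ entryTypes: ["longtask"] });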

Another way I can evaluate scripts is by opening up the Waterfall View. The default view is helpful to see where a particular event happens in the timeline.

But wait! This report can be expanded to see not only what is loaded at the various points in time but whether they are blocking the thread and by how much. Most important are the assets that come before the FCP.

First & Third Party Scripts

I can see right off the bat that Optimizely is serving a render-blocking script. SpeedCurve can go even deeper by distinguishing between first- and third-party scripts.

That way, I can see more detail about what’s happening on the Optimizely side of things.

Monitoring Blocking Scripts

With that in place, SpeedCurve actually lets me track all the resources from a specific third-party source in a custom graph that offers me many more data points to evaluate. For example, I can dive into scripts that come from Optimizely with a set of custom filters to compare them with overall requests and sizes.

This provides a nice way to compare the impact of different third-party scripts that represent blocking and long tasks, like how much time those long tasks represent.

Or perhaps which of these sources are actually render-blocking:

These are the kinds of tools that allow us to identify bottlenecks and make a case for optimizing them or removing them altogether. SpeedCurve allows me to monitor this over time, giving me better insight into the performance of those assets.

Monitoring Interaction to Next Paint

There’s going to be a new way to gain insights into main thread traffic when Interaction to Next Paint (INP) is released as a new core vital metric in March 2024. It replaces the First Input Delay (FID) metric.

What’s so important about that? Well, FID has been used to measure load responsiveness, which is a fancy way of saying it looks at how fast the browser loads the first user interaction on the page. And by interaction, we mean some action the user takes that triggers an event, such as a click, mousedown, keydown, or pointerdown event. FID looks at the time the user sparks an interaction and how long the browser processes — or responds to — that input.

FID might easily be overlooked when trying to diagnose long tasks on the main thread because it looks at the amount of time a user spends waiting after interacting with the page rather than the time it takes to render the page itself. It can’t be replicated with lab data because it’s based on a real user interaction. That said, FID is correlated to TBT in that the higher the FID, the higher the TBT, and vice versa. So, TBT is often the go-to metric for identifying long tasks because it can be measured with lab data as well as real-user monitoring (RUM).

But FID is wrought with limitations, the most significant perhaps being that it’s only a measure of the first interaction. That’s where INP comes into play. Instead of measuring the first interaction and only the first interaction, it measures all interactions on a page. Jeremy Wagner has a more articulate explanation:

“The goal of INP is to ensure the time from when a user initiates an interaction until the next frame is painted is as short as possible for all or most interactions the user makes.”
— Jeremy Wagner

Some interactions are naturally going to take longer to respond than others. So, we might think of FID as merely a first impression of responsiveness, whereas INP is a more complete picture. And like FID, the INP score is closely correlated with TBT but even more so, as Annie Sullivan has reported.

Thankfully, performance tools are already beginning to bake INP into their reports. SpeedCurve is indeed one of them, and its report shows how its RUM capabilities can be used to illustrate the correlation between INP and long tasks on the main thread. This correlation chart illustrates how INP gets worse as the total long tasks’ time increases.

What’s cool about this report is that it is always collecting data, providing a way to monitor INP and its relationship to long tasks over time.
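If you want to start collecting INP from your own real users (outside of SpeedCurve’s RUM, which already reports it as described above), the open-source web-vitals library offers a small hook. This is a sketch; the /rum endpoint is a placeholder for whatever collection endpoint you already have.

import { onINP } from "web-vitals";

onINP((metric) => {
  const body = JSON.stringify({ name: metric.name, value: metric.value });
  navigator.sendBeacon("/rum", body); // queue the report without blocking the page
});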

Not All Scripts Are Created Equal

There is such a thing as a “good” script. It’s not like I’m some anti-JavaScript bloke intent on getting scripts off the web. But what constitutes a “good” one is nuanced.

Who’s It Serving?

Some scripts benefit the organization, and others benefit the user (or both). The challenge is balancing business needs with user needs.

I think web fonts are a good example that serves both needs. A font is a branding consideration as well as a design asset that can enhance the legibility of a site’s content. Something like that might make loading a font script or file worth its cost to page performance. That’s a tough one. So, rather than fully eliminating a font, maybe it can be optimized instead, perhaps by self-hosting the files rather than connecting to a third-party domain or only loading a subset of characters.

Analytics is another difficult choice. I removed analytics from my personal site long ago because I rarely, if ever, looked at them. And even if I did, the stats were more of an ego booster than insightful details that helped me improve the user experience. It’s an easy decision for me, but not so easy for a site that lives and dies by reports that are used to identify and scope improvements.

If the script is really being used to benefit the user at the end of the day, then yeah, it’s worth keeping around.

When Is It Served?

A script may very well serve a valid purpose and benefit both the organization and the end user. But does it need to load first before anything else? That’s the sort of question to ask when a script might be useful, but can certainly jump out of line to let others run first.

I think of chat widgets for customer support. Yes, having a persistent and convenient way for customers to get in touch with support is going to be important, particularly for e-commerce and SaaS-based services. But does it need to be available immediately? Probably not. You’ll probably have a greater case for getting the site to a state that the user can interact with compared to getting a third-party widget up front and center. There’s little point in rendering the widget if the rest of the site is inaccessible anyway. It is better to get things moving first by prioritizing some scripts ahead of others.

Where Is It Served From?

Just because a script comes from a third party doesn’t mean it has to be hosted by a third party. The web fonts example from earlier applies. Can the font files be self-hosted instead rather than needing to establish another outside connection? It’s worth asking. There are self-hosted alternatives to Google Analytics, after all. And even GTM can be self-hosted! That’s why grouping first and third-party scripts in SpeedCurve’s reporting is so useful: spot what is being served and where it is coming from and identify possible opportunities.

What Is It Serving?

Loading one script can bring unexpected visitors along for the ride. I think the classic case is a third-party script that loads its own assets, like a stylesheet. Even if you think you’re only loading one stylesheet (your own), it’s very possible that a script loads additional external stylesheets, all of which need to be downloaded and rendered.

Getting JavaScript Off The Main Thread

That’s the goal! We want fewer cars on the road to alleviate traffic on the main thread. There are a bunch of technical ways to go about it. I’m not here to write up a definitive guide of technical approaches for optimizing the main thread, but there is a wealth of material on the topic.

I’ll break down several different approaches and fill them in with resources that do a great job explaining them in full.

Use Web Workers

A web worker, at its most basic, allows us to establish separate threads that handle tasks off the main thread. Web workers run parallel to the main thread. There are limitations to them, of course, most notably not having direct access to the DOM and being unable to share variables with other threads. But using them can be an effective way to re-route traffic from the main thread to other streets, so to speak.
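Here is a minimal sketch of the hand-off, assuming a hypothetical worker.js file sitting next to the main script.

// main.js
const worker = new Worker("worker.js");

worker.postMessage({ numbers: [1, 2, 3, 4, 5] }); // hand the heavy work off

worker.onmessage = (event) => {
  console.log("Sum computed off the main thread:", event.data);
};

// worker.js
self.onmessage = (event) => {
  const sum = event.data.numbers.reduce((total, n) => total + n, 0);
  self.postMessage(sum); // no DOM access in here, just computation
};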

Split JavaScript Bundles Into Individual Pieces

The basic idea is to avoid bundling JavaScript as a monolithic concatenated file in favor of “code splitting” or splitting the bundle up into separate, smaller payloads to send only the code that’s needed. This reduces the amount of JavaScript that needs to be parsed, which improves traffic along the main thread.
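For example, checkout logic doesn’t need to ship with the initial bundle at all; it can be requested the moment it’s needed. A minimal sketch, assuming a bundler such as webpack or Vite that creates a separate chunk for dynamic import() calls, with a hypothetical module name:

document.querySelector("#checkout-button")?.addEventListener("click", async () => {
  const { startCheckout } = await import("./checkout.js"); // fetched on demand
  startCheckout();
});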

Async or Defer Scripts

Both are ways to load JavaScript without blocking the DOM. But they are different! Adding the async attribute to a script tag will load the script asynchronously, executing it as soon as it’s downloaded. That’s different from the defer attribute, which is also asynchronous but waits until the document has been fully parsed before it executes.
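As a concrete sketch, here is the same idea expressed from JavaScript by injecting a script element; in markup, the equivalent is simply an async or defer attribute on the script tag. The URL below is a placeholder, not a real endpoint.

const script = document.createElement("script");
script.src = "https://example.com/analytics.js";
script.async = true; // execute as soon as it arrives, without blocking parsing
document.head.appendChild(script);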

Preconnect Network Connections

I guess I could have filed this with async and defer. That’s because preconnect is a value on the rel attribute that’s used on a link tag. It gives the browser a hint that you plan to connect to another domain. It establishes the connection as soon as possible prior to actually downloading the resource. The connection is done in advance, allowing the full script to download later.
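For instance, hinting a connection to a font host might look like the following sketch; the host is only an example, and in markup this is simply a link tag with rel="preconnect".

const hint = document.createElement("link");
hint.rel = "preconnect";
hint.href = "https://fonts.gstatic.com";
hint.crossOrigin = "anonymous"; // needed when the eventual request uses CORS, as fonts do
document.head.appendChild(hint);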

While it sounds excellent — and it is — pre-connecting comes with an unfortunate downside in that it exposes a user’s IP address to third-party resources used on the page, which is a breach of GDPR compliance. There was a little uproar over that when it was found out that using a Google Fonts script is prone to that as well.

Non-Technical Approaches

I often think of a Yiddish proverb I first saw in Malcolm Gladwell’s Outliers, however many years ago it came out:

To a worm in horseradish, the whole world is horseradish.

It’s a more pleasing and articulate version of the saying that goes, “To a carpenter, every problem looks like a nail.” So, too, it is for developers working on performance. To us, every problem is code that needs a technical solution. But there are indeed ways to reduce the amount of work happening on the main thread without having to touch code directly.

We discussed earlier that performance is not only a developer’s job; it’s everyone’s responsibility. So, think of these as strategies that encourage a “culture” of good performance in an organization.

Nuke Scripts That Lack Purpose

As I said at the start of this article, there are some scripts on the projects I work on that I have no idea what they do. It’s not because I don’t care. It’s because GTM makes it ridiculously easy to inject scripts on a page, and more than one person can access it across multiple teams.

So, maybe compile a list of all the third-party and render-blocking scripts and figure out who owns them. Is it Dave in DevOps? Marcia in Marketing? Is it someone else entirely? You gotta make friends with them. That way, there can be an honest evaluation of which scripts are actually helping and are critical to balance.

Bend Google Tag Manager To Your Will

Or any tag manager, for that matter. Tag managers have a pretty bad reputation for adding bloat to a page. It’s true; they can definitely make the page size balloon as more and more scripts are injected.

But that reputation is not totally warranted because, like most tools, you have to use them responsibly. Sure, the beauty of something like GTM is how easy it makes adding scripts to a page. That’s the “Tag” in Google Tag Manager. But the real beauty is that convenience, plus the features it provides to manage the scripts. You know, the “Manage” in Google Tag Manager. It’s spelled out right on the tin!

Wrapping Up

Phew! Performance is not exactly a straightforward science. There are objective ways to measure performance, of course, but if I’ve learned anything about it, it’s that subjectivity is a big part of the process. Different scripts are of different sizes and consist of different resources serving different needs that have different priorities for different organizations and their users.

Having access to a free reporting tool like Lighthouse in DevTools is a great start for diagnosing performance issues by identifying bottlenecks on the main thread. Even better are paid tools like SpeedCurve to dig deeper into the data for more targeted insights and to produce visual reports to help make a case for performance improvements for your team and other stakeholders.

While I wish there were some sort of silver bullet to guarantee good performance, I’ll gladly take these and similar tools as a starting point. Most important, though, is having a performance game plan that is served by the tools. And Vitaly’s front-end performance checklist is an excellent place to start.

Categories: Others

What Removing Object Properties Tells Us About JavaScript

October 23rd, 2023

A group of contestants are asked to complete the following task:

Make object1 similar to object2.

let object1 = {
  a: "hello",
  b: "world",
  c: "!!!",
};

let object2 = {
  a: "hello",
  b: "world",
};

Seems easy, right? Simply delete the c property to match object2. Surprisingly, each person described a different solution:

  • Contestant A: “I set c to undefined.”
  • Contestant B: “I used the delete operator.”
  • Contestant C: “I deleted the property through a Proxy object.”
  • Contestant D: “I avoided mutation by using object destructuring.”
  • Contestant E: “I used JSON.stringify and JSON.parse.”
  • Contestant F: “We rely on Lodash at my company.”

An awful lot of answers were given, and they all seem to be valid options. So, who is “right”? Let’s dissect each approach.

Contestant A: “I Set c To undefined.”

In JavaScript, accessing a non-existing property returns undefined.

const movie = {
  name: "Up",
};

console.log(movie.premiere); // undefined

It’s easy to think that setting a property to undefined removes it from the object. But if we try to do that, we will observe a small but important detail:

const movie = {
  name: "Up",
  premiere: 2009,
};

movie.premiere = undefined;

console.log(movie);

Here is the output we get back:

{name: 'Up', premiere: undefined}

As you can see, premiere still exists inside the object even when it is undefined. This approach doesn’t actually delete the property but rather changes its value. We can confirm that using the hasOwnProperty() method:

const propertyExists = movie.hasOwnProperty("premiere");

console.log(propertyExists); // true

But then why, in our first example, does accessing movie.premiere return undefined if the property doesn’t exist in the object? Shouldn’t it throw an error like when accessing a non-existing variable?

console.log(iDontExist);

// Uncaught ReferenceError: iDontExist is not defined

The answer lies in how ReferenceError behaves and what a reference is in the first place.

A reference is a resolved name binding that indicates where a value is stored. It consists of three components: a base value, the referenced name, and a strict reference flag.

For a user.name reference, the base value is the object, user, while the referenced name is the string, name, and the strict reference flag is false if the code isn’t in strict mode.

Variables behave differently. They don’t have a parent object, so their base value is an environment record, i.e., a unique base value assigned each time the code is executed.

If we try to access something that doesn’t have a base value, JavaScript will throw a ReferenceError. However, if a base value is found, but the referenced name doesn’t point to an existing value, JavaScript will simply assign the value undefined.

“The Undefined type has exactly one value, called undefined. Any variable that has not been assigned a value has the value undefined.”

ECMAScript Specification

We could spend an entire article just addressing undefined shenanigans!

Contestant B: “I Used The delete Operator.”

The delete operator’s sole purpose is to remove a property from an object, returning true if the element is successfully removed.

const dog = {
  breed: "bulldog",
  fur: "white",
};

delete dog.fur;

console.log(dog); // {breed: 'bulldog'}

Some caveats come with the delete operator that we have to take into consideration before using it. First, the delete operator can be used to remove an element from an array. However, it leaves an empty slot inside the array, which may cause unexpected behavior since properties like length aren’t updated and still count the open slot.

const movies = ["Interstellar", "Top Gun", "The Martian", "Speed"];

delete movies[2];

console.log(movies); // ['Interstellar', 'Top Gun', empty, 'Speed']

console.log(movies.length); // 4

Secondly, let’s imagine the following nested object:

const user = {
  name: "John",
  birthday: {day: 14, month: 2},
};

Trying to remove the birthday property using the delete operator will work just fine, but there is a common misconception that doing this frees up the memory allocated for the object.

In the example above, birthday is a property holding a nested object. Objects in JavaScript behave differently from primitive values (e.g., numbers, strings, and booleans) as far as how they are stored in memory. They are stored and copied “by reference,” while primitive values are copied independently as a whole value.

Take, for example, a primitive value such as a string:

let movie = "Home Alone";
let bestSeller = movie;

In this case, each variable has an independent space in memory. We can see this behavior if we try to reassign one of them:

movie = "Terminator";

console.log(movie); // "Terminator"

console.log(bestSeller); // "Home Alone"

In this case, reassigning movie doesn’t affect bestSeller since they are in two different spaces in memory. Properties or variables holding objects (e.g., regular objects, arrays, and functions) are references pointing to a single space in memory. If we try to copy an object, we are merely duplicating its reference.

let movie = {title: "Home Alone"};
let bestSeller = movie;

bestSeller.title = "Terminator";

console.log(movie); // {title: "Terminator"}

console.log(bestSeller); // {title: "Terminator"}

As you can see, they are now objects, and reassigning a bestSeller property also changes the movie result. Under the hood, JavaScript looks at the actual object in memory and performs the change, and both references point to the changed object.

Knowing how objects behave “by reference,” we can now understand how using the delete operator doesn’t free space in memory.

The process in which programming languages free memory is called garbage collection. In JavaScript, memory is freed for an object when there are no more references and it becomes unreachable. So, using the delete operator may make the property’s space eligible for collection, but there may be more references preventing it from being deleted from memory.

While we’re on the topic, it’s worth noting that there is a bit of a debate around the delete operator’s impact on performance. You can follow the rabbit trail from the link, but I’ll go ahead and spoil the ending for you: the difference in performance is so negligible that it wouldn’t pose a problem in the vast majority of use cases. Personally, I consider the operator’s idiomatic and straightforward approach a win over a minuscule hit to performance.

That said, an argument can be made against using delete since it mutates an object. In general, it’s a good practice to avoid mutations since they may lead to unexpected behavior where a variable doesn’t hold the value we assume it has.

Contestant C: “I Deleted The Property Through A Proxy Object.”

This contestant was definitely a show-off and used a proxy for their answer. A proxy is a way to insert some middle logic between an object’s common operations, like getting, setting, defining, and, yes, deleting properties. It works through the Proxy constructor that takes two parameters:

  • target: The object from where we want to create a proxy.
  • handler: An object containing the middle logic for the operations.

Inside the handler, we define methods for the different operations, called traps, because they intercept the original operation and perform a custom change. The constructor will return a Proxy object — an object identical to the target — but with the added middle logic.

const cat = {
  breed: "siamese",
  age: 3,
};

const handler = {
  get(target, property) {
    return `cat's ${property} is ${target[property]}`;
  },
};

const catProxy = new Proxy(cat, handler);

console.log(catProxy.breed); // cat's breed is siamese

console.log(catProxy.age); // cat's age is 3

Here, the handler modifies the getting operation to return a custom value.

Say we want to log the property we are deleting to the console each time we use the delete operator. We can add this custom logic through a proxy using the deleteProperty trap.

const product = {
  name: "vase",
  price: 10,
};

const handler = {
  deleteProperty(target, property) {
    console.log(`Deleting property: ${property}`);
  },
};

const productProxy = new Proxy(product, handler);

delete productProxy.name; // Deleting property: name

The property name is logged to the console, but the operation throws an error in the process:

Uncaught TypeError: 'deleteProperty' on proxy: trap returned falsish for property 'name'

The error is thrown because the deleteProperty trap didn’t return a value, so its result defaults to undefined. In strict mode, the trap has to return a truthy value to signal that the deletion succeeded; undefined, being falsy, makes the proxy report failure and throw.

If we try to return true to avoid the error, we will encounter a different sort of issue:

// ...

const handler = {
  deleteProperty(target, property) {
    console.log(`Deleting property: ${property}`);

    return true;
  },
};

const productProxy = new Proxy(product, handler);

delete productProxy.name; // Deleting property: name

console.log(productProxy); // {name: 'vase', price: 10}

The property isn’t deleted!

By defining the trap, we replaced the delete operator’s default behavior with our own code, so the property is never actually removed.

This is where Reflect comes into play.

Reflect is a built-in global object that exposes methods mirroring an object’s internal operations. Its methods can be used anywhere as normal operations, but they are especially handy inside a proxy.

For example, we can solve the issue in our code by returning Reflect.deleteProperty() (i.e., the Reflect version of the delete operator) inside of the handler.

const product = {
  name: "vase",
  price: 10,
};

const handler = {
  deleteProperty(target, property) {
    console.log(`Deleting property: ${property}`);

    return Reflect.deleteProperty(target, property);
  },
};

const productProxy = new Proxy(product, handler);

delete productProxy.name; // Deleting property: name

console.log(product); // {price: 10}

It is worth calling out that certain objects, like Math, Date, and JSON, have properties that cannot be deleted using the delete operator or any other method. These are “non-configurable” object properties, meaning that they cannot be reassigned or deleted. If we try to use the delete operator on a non-configurable property, it will fail silently and return false or throw an error if we are running our code in strict mode.

"use strict";

delete Math.PI;

Output:

Uncaught TypeError: Cannot delete property 'PI' of #<Object>

If we want to avoid errors with the delete operator and non-configurable properties, we can use the Reflect.deleteProperty() method since it doesn’t throw an error when trying to delete a non-configurable property — even in strict mode — because it fails silently.
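
We can see the difference directly in the console:

"use strict";

console.log(Reflect.deleteProperty(Math, "PI")); // false (no error thrown)

console.log(Math.PI); // 3.141592653589793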

I assume, however, that you would prefer knowing when you are trying to delete a non-configurable property rather than silently avoiding the error.

Contestant D: “I Avoided Mutation By Using Object Destructuring.”

Object destructuring is an assignment syntax that extracts an object’s properties into individual variables. It uses curly braces ({}) on the left side of an assignment to indicate which properties to extract.

const movie = {
  title: "Avatar",
  genre: "science fiction",
};

const {title, genre} = movie;

console.log(title); // Avatar

console.log(genre); // science fiction

It also works with arrays using square brackets ([]):

const animals = ["dog", "cat", "snake", "elephant"];

const [a, b] = animals;

console.log(a); // dog

console.log(b); // cat

The rest/spread syntax (...) is sort of like the opposite operation: inside an object or array literal it spreads a value’s properties or elements out, and in a destructuring pattern it gathers the remaining ones into a new object or array.

We can use object destructuring to unpack the values of our object and the spread syntax to keep only the ones we want:

const car = {
  type: "truck",
  color: "black",
  doors: 4
};

const {color, ...newCar} = car;

console.log(newCar); // {type: 'truck', doors: 4}

This way, we avoid having to mutate our objects and the potential side effects that come with it!

Here’s an edge case with this approach: deleting a property only when it’s undefined. Thanks to the flexibility of object destructuring, we can delete properties when they are undefined (or falsy, to be exact).

Imagine you run an online store with a vast database of products. You have a function to find them. Of course, it will need some parameters, perhaps the product name and category.

const find = (product, category) => {
  const options = {
    limit: 10,
    product,
    category,
  };

  console.log(options);

  // Find in database...
};

In this example, the product name has to be provided by the user to make the query, but the category is optional. So, we could call the function like this:

find("bedsheets");

And since a category is not specified, it comes through as undefined, resulting in the following output:

{limit: 10, product: 'bedsheets', category: undefined}

In this case, we shouldn’t use default parameters because we aren’t looking for one specific category.

Notice how the database could incorrectly assume that we are querying products in a category called undefined! That would lead to an empty result, which is an unintended side effect. Even though many databases will filter out the undefined property for us, it would be better to sanitize the options before making the query. A cool way to dynamically drop an undefined property is to combine the spread syntax with the AND operator (&&).

Instead of writing options like this:

const options = {
  limit: 10,
  product,
  category,
};

…we can do this instead:

const options = {
  limit: 10,
  product,
  ...(category && {category}),
};

It may seem like a complex expression, but after understanding each part, it becomes a straightforward one-liner. What we are doing is taking advantage of the && operator.

The AND operator is mostly used in conditional statements to say,

If A and B are true, then do this.

But at its core, it evaluates two expressions from left to right, returning the expression on the left if it is falsy and otherwise returning the expression on the right. So, in our prior example, the AND operator has two cases:

  1. category is undefined (or falsy);
  2. category is defined.

In the first case, where it is falsy, the operator returns the expression on the left, category. If we plug that value into the spread inside the object, it evaluates this way:

const options = {
  limit: 10,

  product,

  ...category,
};

And spreading a falsy value inside an object literal produces nothing:

const options = {
  limit: 10,
  product,
};

In the second case, since category is truthy, the operator returns the expression on the right, {category}. When plugged into the object, it evaluates this way:

const options = {
  limit: 10,
  product,
  ...{category},
};

And since category is defined, spreading {category} results in a normal property:

const options = {
  limit: 10,
  product,
  category,
};

Put it all together, and we get the following betterFind() function:

const betterFind = (product, category) => {
  const options = {
    limit: 10,
    product,
    ...(category && {category}),
  };

  console.log(options);

  // Find in a database...
};

betterFind("sofas");

And if we don’t specify any category, it simply does not appear in the final options object.

{limit: 10, product: 'sofas'}

Contestant E: “I Used JSON.stringify And JSON.parse.”

Surprisingly to me, there is a way to remove a property by reassigning it to undefined. The following code does exactly that:

let monitor = {
  size: 24,
  screen: "OLED",
};

monitor.screen = undefined;

monitor = JSON.parse(JSON.stringify(monitor));

console.log(monitor); // {size: 24}

I sort of lied to you since we are employing some JSON shenanigans to pull off this trick, but we can learn something useful and interesting from them.

Even though JSON takes direct inspiration from JavaScript, its syntax is far stricter. It doesn’t allow functions or undefined values, so JSON.stringify() will omit those non-valid values during conversion, resulting in JSON text without the undefined properties. From there, we can parse the JSON text back to a JavaScript object using the JSON.parse() method.

It’s important to know the limitations of this approach. For example, JSON.stringify() skips functions and throws an error if either a circular reference (i.e., a property is referencing its parent object) or a BigInt value is found.
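
Here is a quick sketch of those limitations, using made-up values:

const data = {
  id: 1,
  greet() {
    return "hi";
  },
  note: undefined,
};

// Functions and undefined values are simply omitted.
console.log(JSON.stringify(data)); // {"id":1}

// A circular reference throws a TypeError.
const node = {};
node.self = node;
// JSON.stringify(node); // Uncaught TypeError

// So does a BigInt value.
// JSON.stringify({big: 10n}); // Uncaught TypeError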

Contestant F: “We Rely On Lodash At My Company.”

It’s worth noting that utility libraries such as Lodash.js, Underscore.js, or Ramda also provide methods to delete — or pick() — properties from an object. We won’t go through different examples for each library since their documentation already does an excellent job of that.
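
Just as a rough illustration of the general shape (check each library’s documentation for specifics), Lodash’s omit() returns a new object without the listed properties, and pick() does the inverse:

import omit from "lodash/omit";
import pick from "lodash/pick";

const car = {
  type: "truck",
  color: "black",
  doors: 4,
};

console.log(omit(car, ["color"])); // {type: 'truck', doors: 4}

console.log(pick(car, ["type", "doors"])); // {type: 'truck', doors: 4}

console.log(car); // unchanged: {type: 'truck', color: 'black', doors: 4}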

Conclusion

Back to our initial scenario, which contestant is right?

The answer: All of them! Well, except for the first contestant. Setting a property to undefined just isn’t an approach we want to consider for removing a property from an object, given all of the other ways we have to go about it.

Like most things in development, the most “correct” approach depends on the situation. But what’s interesting is that behind each approach is a lesson about the very nature of JavaScript. Understanding all the ways to delete a property in JavaScript can teach us fundamental aspects of programming and JavaScript, such as memory management, garbage collection, proxies, JSON, and object mutation. That’s quite a bit of learning for something seemingly so boring and trivial!


Categories: Others Tags:

3 Essential Design Trends, November 2023

October 23rd, 2023 No comments

In the season of giving thanks, we often think of comfort and tradition. These are common themes with each of our three featured website design trends this month.

Categories: Designing, Others Tags:

A Simple Guide to Efficient Payroll System Management

October 23rd, 2023 No comments

No matter what industry you’re working in, you don’t want to get on the IRS’ bad side. Whether it’s misclassifying an employee by mistake, or not keeping up-to-date records, there are lots of little ways you can make mistakes with payroll.

Not to mention, with inflation on the rise, over half of all Americans are living paycheck to paycheck. That means all it takes is one small error and you’d have (deservedly!) unhappy staff on your hands. Luckily, many common payroll errors are entirely preventable – especially if you implement strong payroll system management.

What is payroll system management?

Payroll system management encompasses the payroll process that ensures employees receive their salaries from their employers. It also involves ensuring that both the business and the employees meet any required fiscal responsibilities to state or federal authorities. These can range from Social Security and Medicare taxes to FUTA (Federal Unemployment Tax Act) payments. 

There are a number of reasons why efficient payroll management is important. Not only does it keep the organization’s financial records up to date, but it also ensures that a business is complying with relevant regulations. It helps a business pay their employees on time and with the right amount, as well as covering the various tax requirements and maintaining their records. Without that efficiency, organizations can face salary claims and government penalties. 

Managing these payroll challenges is essential to ensure smooth operations and compliance.

Common payroll mistakes to avoid 

Mistakes happen, but payroll-related mistakes can be costly to both the business and the employee. Here are some of the most common errors – and how to avoid them.

  1. Employee classification

One easy-to-make mistake is misclassifying the employees you have. It’s important to understand the difference between an employee and an independent contractor, as the tax requirements differ between them. Generally speaking, an independent contractor submits their own invoices, pays their own tax, and isn’t under anyone’s immediate management. However, the definition can vary from state to state.

If your payroll department is unsure how to classify a worker, they can ask the IRS to help determine status (form SS-8). Alternatively, if you have already made errors in this area, you can seek help, again from the IRS, through their Voluntary Classification Settlement Program.

  2. Wrong data

Data and information mistakes are the most common type of payroll errors. One thing to note is that there are certain employee documents you are required by law to keep for four years or more:

  • Tax forms
  • Timesheets
  • Proof of payments
  • Any canceled checks

To ensure efficient payroll system management, you need the correct information. Your initial data collection happens when an employee completes their onboarding paperwork, such as a Form W-4. You need to be sure the following information is accurate: 

  • Full name. Check spelling and include any middle names. Also, update this if there is any change of name.
  • Social security number. Every American has a unique social security number which helps identify them. 
  • Address. This needs to be their current home address and should be updated in your records if they move. 
  • Date of birth
  • Employment dates. Your records should show when the person started working for you and, if applicable, the date on which employment ended. 
  • Employment details. This section should cover factors such as hourly rates, overtime rates, and any bonus details.

Additionally, make sure you have up-to-date bank details and payment methods, as missing this information can delay payments to your employees.

  3. Missed deadlines

It’s crucial that your business complies with any set deadlines. There are two kinds of deadlines to consider. The first is the monthly (or semiweekly) deposit deadline for your share of taxes and any withheld taxes; you may face penalties of up to 15% if you miss it. The second covers the quarterly and annual returns your business should file, along with your W-2s. 

  4. Withholding issues

Taxes can be complicated; there’s no denying that. With so many different rules and regulations to consider, it’s no surprise that there are regular issues with the withholding process. Some of the common mistakes you should be aware of include:

  • Erroneous calculations of deductions (both pre- and post-tax).
  • Leaving out taxable benefits such as bonuses or gift cards.
  • Failing to withhold taxes at the state or federal level. 
  • Taking the wrong deductions from exempt employees. 
  • Issuing the wrong W-2 forms. 
  5. Exempt vs. non-exempt

This can be an area that causes real confusion. Your non-exempt employees, who are usually hourly ones, are entitled to overtime payments, while your exempt ones are not. If a non-exempt employee works more than 40 hours in any given week, then you need to pay them time and a half. Trying to avoid any related obligations violates the FLSA (Fair Labor Standards Act), and you could face a lawsuit. 

There are three main conditions to meet to be classified as exempt: 

  • Have earnings that equal or exceed $684 per week or $35,568 annually (as of August 2023).
  • Be in a managerial or administrative role or have a professional degree (for example, an engineer).
  • Be paid on a salary basis (or on a consistent schedule) where their pay remains relatively unchanged from period to period. 

How your payroll management system should work

It is likely that every process within your organization has guidelines in place, and payroll management should be no different. By following these guidelines, you reduce the risk of errors and regulatory non-compliance. 

  1. Collect relevant data 

Gone are the days of payroll being a purely manual process. There are now even systems that can help employers manage the payroll process for one employee! So, whatever system or software you are using, the first step is to collect and enter all the data you need to both pay any taxes and provide accurate paychecks.


Your system should be able to calculate and pay all taxes due before it processes payroll for a particular time period. The data you enter into your system should include employee grades, state and federal deductions, any applicable allowances, and details of any loans or advances made. 

Additionally, consider leveraging modern technology to enhance your payroll management. Incorporating AI for accounting can streamline data entry, reduce errors, and improve overall efficiency in managing your financial and payroll records.

  2. Compile all necessary documents

Documenting each employee should start during the onboarding process – using an online form builder that links to your payment management system can ensure your payroll department has all the necessary information. This can include:

  • A list of Federal withholdings that apply to your employees. 
  • Eligibility forms that contain all the professional and personal information about each employee. 
  • Forms for state income tax and any other relevant withholding details. 
  3. Ensure you have all the required company information

You need to make sure that all the details of your company are contained in the system. If you operate in more than one state, you also need to include every state tax ID number that applies to the locations you operate in. You also need to set how frequently salaries are paid, and by which method (check, direct deposit, etc.). 

  4. Integrated time tracking

To pay the correct salaries, you need to know the hours worked by employees. Investing in time-tracking software that can be integrated with your payroll system makes this much easier. While it can be tempting to stick with manual time tracking, especially if you’re running payroll for a small business, it is very prone to human error.

  5. Set up compliance norms

You may operate separate company and payroll bank accounts. By linking these, and setting up compliance norms, your payroll system can reconcile all of your payroll details and can help ensure that salary payments are free of errors. It can also help you avoid potential penalties that would be applied if you failed to meet certain regulatory requirements. 

  6. Gross salary

With accurate data and a properly set up payroll management system, the first thing it will normally calculate is gross pay. It should take note of total hours worked including overtime, as well as any bonuses that may be due for that time period. This should mean that you now have a gross figure for each employee and your system can move to the next step. 

  7. Net salary

Once you have a gross figure, you can apply any deductions, such as pre-tax items and tax withholdings. Efficient payroll system management means these amounts are deducted from the employee’s gross salary to produce the net figure, which is what employees actually receive. 
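
As a purely illustrative sketch (the rates below are made up and are not real withholding rules), the gross-to-net step is simple arithmetic once the inputs are accurate:

// Hypothetical figures for one pay period; real rates and rules vary.
const hoursWorked = 80;
const hourlyRate = 25;
const overtimeHours = 5;

const grossPay = hoursWorked * hourlyRate + overtimeHours * hourlyRate * 1.5; // 2187.5

const deductions = {
  federalWithholding: grossPay * 0.12,
  socialSecurity: grossPay * 0.062,
  medicare: grossPay * 0.0145,
};

const totalDeductions = Object.values(deductions).reduce((sum, amount) => sum + amount, 0);

const netPay = grossPay - totalDeductions;

console.log({grossPay, totalDeductions, netPay}); // {grossPay: 2187.5, totalDeductions: 429.84..., netPay: 1757.65...}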

  8. Employer responsibilities

Your system should recognize the various employer responsibilities such as paying withheld tax, as well as employer contributions for things like Medicare and Social Security. 

  9. Reconcile and verify

Regularly check that your payroll software is working properly and that any updates have been installed. Reconcile all your figures and verify their accuracy. It can be helpful if your payroll system is integrated with any accounting software you use. 

  10. Generate reports and documents

The penultimate stage in the process is to generate reports and documents and send them to any relevant departments and managers such as your finance department. If you feel overburdened by documentation, you can utilize a document-understanding solution. You should also let employees know any necessary details and if there are any expected delays. 

  11. Paycheck details

Depending on the state you operate in, you should be issuing a statement to employees when they receive their salary. These statements should show all pertinent details such as hours worked, gross and net figures, and all deductions made. 

The Takeaway

It’s important to remember that efficient payroll system management is not just about complying with the various tax laws, it’s also about ensuring your employees are not inconvenienced. 

With so many software solutions available, the likelihood of human error is vastly reduced. However, the employer is still responsible for ensuring that tax deadlines are met and that accurate data goes into the system, so that salaries are paid on time and the figures are correct.

Featured image by Money Knack, www.moneyknack.com on Unsplash

The post A Simple Guide to Efficient Payroll System Management appeared first on noupe.

Categories: Others Tags:

A Roundup Of WCAG 2.2 Explainers

October 21st, 2023 No comments

WCAG 2.2 is officially the latest version of the Web Content Accessibility Guidelines now that it has become a “W3C Recommended” web standard as of October 5.

The changes between WCAG 2.1 and 2.2 are nicely summed up in “What’s New in WCAG 2.2”:

“WCAG 2.2 provides 9 additional success criteria since WCAG 2.1. […] The 2.0 and 2.1 success criteria are essentially the same in 2.2, with one exception: 4.1.1 Parsing is obsolete and removed from WCAG 2.2.”

This article is not a deep look at the changes, what they mean, and how to conform to them. Plenty of other people have done a wonderful job of that already. So, rather than add to the pile, let’s round up what has already been written and learn from those who keep a close pulse on the WCAG beat.

There are countless articles and posts about WCAG 2.2 written ahead of the formal W3C recommendation. The following links were selected because they were either published or updated after the announcement and reflect the most current information at the time of this writing. It’s also worth mentioning that we’re providing these links purely for reference — by no means are they sponsored, nor do they endorse a particular person, company, or product.

The best place for information on WCAG standards will always be the guidelines themselves, but we hope you enjoy what others are saying about them as well.

Hidde de Vries: What’s New In WCAG 2.2?

Hidde is a former W3C staffer, and he originally published this WCAG 2.2 overview last year when a draft of the guidelines was released, updating his post immediately when the guidelines became a recommendation.

Patrick Lauke: What’s New In WCAG 2.2

Patrick is a current WCAG member and contributor, also serving as Principal Accessibility Specialist at TetraLogical, which itself is also a W3C member.

This overview goes deeper than most, reporting not only what is new in WCAG 2.2 but how to conform to those standards, including specific examples with excellent visuals.

James Edwards: New Success Criteria In WCAG 2.2

James is a seasoned accessibility consultant with TPGi, a provider of end-to-end accessibility services and products.

Like Patrick, James gets into thorough and detailed information about WCAG 2.2 and how to meet the updated standards. Watch for little asides strewn throughout the post that provide even further context on why the changes were needed and how they were developed.

GOV.UK: Understanding WCAG 2.2

It’s always interesting to see how large organizations approach standards, and governments are no exception because they have a mandate to meet accessibility requirements. GOV.UK published an addendum on WCAG 2.2 updates to its Service Manual.

Notice how the emphasis is on the impact the new guidelines have on specific impairments, as well as ample examples of what it looks like to meet the standards. Equally impressive is the documented measured approach GOV.UK takes, including a goal to be fully compliant by October 2024 while maintaining WCAG 2.1 AA compliance in the meantime.

Deque Systems: Deque Systems Welcomes and Announces Support for WCAG 2.2

Despite being more of a press release, this brief overview has a nice clean table that outlines the new standards and how they align with those who stand to benefit most from them.

Kate Kalcevich: WCAG 2.2: What Changes for Websites and How Does It Impact Users?

Kate really digs into the benefits that users get with WCAG 2.2 compliance. Photos of Kate’s colleague, Samuel Proulx, don’t provide new context but are a nice touch for remembering that the updated guidelines are designed to help real people, a point that is emphasized in the conclusion:

“[W]hen thinking about accessibility beyond compliance, it becomes clear that the latest W3C guidelines are just variations on a theme. The theme is removing barriers and making access possible for everyone.”
— Kate Kalcevich

Level Access: WCAG 2.2 AA Summary and Checklist for Website Owners

Finally, we’ve reached the first checklist! That may be in name only, as this is less of a checklist of tasks than it is a high-level overview of the latest changes. There is, however, a link to download “the must-have WCAG checklist,” but you will need to hand over your name and email address in exchange.

Chris Pycroft: WCAG 2.2 Is Here

While this is more of an announcement than a guide, there is plenty of useful information in there. The reason I’m linking it up is the “WCAG 2.2 Map” PDF that Chris includes in it. It’d be great if there was a web version of it, but I’ll take it either way! The map neatly outlines the success criteria by branching them off the four core WCAG principles.

Shira Blank and Joshua Stein: After More Than a Year of Delays, It Is Time to Officially Welcome WCAG 2.2

This is a nice overview. Nothing more, nothing less. It does include a note that WCAG 2.2 is slated to be the last WCAG 2 update between now and WCAG 3, which apparently is codenamed “Silver”? Nice.

Nathan Schmidt: Demystifying WCAG 2.2

True to its title, this overview nicely explains WCAG 2.2 updates devoid of complex technical jargon. What makes it worth including in this collection, however, are the visuals that help drive home the points.

Craig Abbott: WCAG 2.2 And What It Means For You

Craig’s write-up is a lot like the others in that it’s a high-level overview of changes paired with advice for complying with them. But Craig has a knack for discussing the changes in a way that’s super approachable and even reads like a friendly conversation. There are personal anecdotes peppered throughout the post, including Craig’s own views of the standards themselves.

“I personally feel like the new criteria for Focus Appearance could have been braver and removed some of the ambiguity around what is already often an accessibility issue.”
— Craig Abbott

Dennis Lembrée: WCAG 2.2 Checklist With Filter And Links

Dennis published a quick post on his Web Axe blog reporting on WCAG 2.2, but it’s this CodePen demo he put together that’s the real gem.

See the Pen WCAG 2.2 Checklist with Filter and Links [forked] by Web Overhauls.

It’s a legit checklist of WCAG 2.0 requirements you can filter by release, including the new WCAG 2.2 changes and which chapter of the specifications they align to.

Jason Taylor: WCAG 2.2 Is Here! What It Means For Your Business

Yet another explainer, this time from Jason Taylor at UsableNet. You’ll find a lot of cross-over between this and the others in this roundup, but it’s always good to read about the changes with someone else’s words and perspectives.

Wrapping Up

There are many, many WCAG 2.2 explainers floating around — many more than what’s included in this little roundup. The number of changes introduced in the updated guidelines is surprisingly small, considering WCAG 2.1 was adopted in 2018, but that doesn’t make them any less impactful. So, yes, you’re going to see plenty of overlapping information between explainers. The nuances between them, though, are what makes them valuable, and each one has something worth taking with you.

And we’re likely to see even more explainers pop up! If you know of one that really should be included in this roundup, please do link it up in the comments to share with the rest of us.

Categories: Others Tags: