
Gatsby Headaches: Working With Media (Part 2)

October 16th, 2023

Gatsby is a true Jamstack framework. It works with React-powered components that consume APIs before optimizing and bundling everything to serve as static files with bits of reactivity. That includes media files, like images, video, and audio.

The problem is that there’s no “one” way to handle media in a Gatsby project. We have plugins for everything, from making queries off your local filesystem and compressing files to inlining SVGs and serving images in the responsive image format.

Which plugins should be used for certain types of media? How about certain use cases for certain types of media? That’s where you might encounter headaches because there are many plugins — some official and some not — that are capable of handling one or more use cases — some outdated and some not.

That is what this brief two-part series is about. In Part 1, we discussed various strategies and techniques for handling images, video, and audio in a Gatsby project.

This time, in Part 2, we are covering a different type of media we commonly encounter: documents. Specifically, we will tackle considerations for Gatsby projects that make use of Markdown and PDF files. And before wrapping up, we will also demonstrate an approach for using 3D models.

Solving Markdown Headaches In Gatsby

In Gatsby, Markdown files are commonly used to programmatically create pages, such as blog posts. You can write content in Markdown, parse it into your GraphQL data layer, source it into your components, and then bundle it as HTML static files during the build process.

Let’s learn how to load, query, and handle the Markdown for an existing page in Gatsby.

Loading And Querying Markdown From GraphQL

The first step is to load the project’s Markdown files into the GraphQL data layer. We can do this with the gatsby-source-filesystem plugin, the same one we used to query the local filesystem for image files in Part 1 of this series.

npm i gatsby-source-filesystem

In gatsby-config.js, we declare the folder where Markdown files will be saved in the project:

module.exports = {
  plugins: [
    {
      resolve: `gatsby-source-filesystem`,
      options: {
        name: `assets`,
        path: `${ __dirname }/src/assets`,
      },
    },
  ],
};

Let’s say that we have the following Markdown file located in the project’s ./src/assets directory:

---
title: sample-markdown-file
date: 2023-07-29
---

# Sample Markdown File

Lorem ipsum dolor sit amet, consectetur adipiscing elit. Sed consectetur imperdiet urna, vitae pellentesque mauris sollicitudin at. Sed id semper ex, ac vestibulum nunc. Etiam:

```bash
lorem ipsum dolor sit
```

![Desert](/desert.png)

## Subsection

Lorem ipsum dolor sit amet, consectetur adipiscing elit. Sed consectetur imperdiet urna, vitae pellentesque mauris sollicitudin at. Sed id semper ex, ac vestibulum nunc. Etiam efficitur, nunc nec placerat dignissim, ipsum ante ultrices ante, sed luctus nisl felis eget ligula. Proin sed quam auctor, posuere enim eu, vulputate felis. Sed egestas, tortor

This example consists of two main sections: the frontmatter and body. It is a common structure for Markdown files.

  • Frontmatter
    Enclosed in triple dashes (---), this is an optional section at the beginning of a Markdown file that contains metadata and configuration settings for the document. In our example, the frontmatter contains information about the page’s title and date, which Gatsby can use as GraphQL arguments.
  • Body
    This is the content that makes up the page’s main body content.
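To make the two sections concrete, here is a minimal, hypothetical sketch of how a file like this could be split apart. In a real Gatsby project, the transformer plugin covered next does this parsing for you; this is purely an illustration of the structure:

```javascript
// Illustrative only: split a Markdown string into frontmatter and body.
// Gatsby's transformer plugin handles this for you in practice.
function splitFrontmatter(markdown) {
  const match = markdown.match(/^---\n([\s\S]*?)\n---\n([\s\S]*)$/);
  if (!match) return { frontmatter: {}, body: markdown };

  // Parse simple "key: value" lines from the frontmatter block.
  const frontmatter = {};
  for (const line of match[1].split("\n")) {
    const [key, ...rest] = line.split(":");
    if (key && rest.length) frontmatter[key.trim()] = rest.join(":").trim();
  }
  return { frontmatter, body: match[2] };
}

const sample = "---\ntitle: sample-markdown-file\ndate: 2023-07-29\n---\n# Sample Markdown File";
const parsed = splitFrontmatter(sample);
// parsed.frontmatter.title is "sample-markdown-file"; parsed.body starts at the heading.
```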

We can use the gatsby-transformer-remark plugin to parse Markdown files into the GraphQL data layer. Once it is installed (npm i gatsby-transformer-remark), we will need to register it in the project’s gatsby-config.js file:

module.exports = {
  plugins: [
    {
      resolve: `gatsby-transformer-remark`,
      options: { },
    },
  ],
};

Restart the development server and navigate to http://localhost:8000/___graphql in the browser. Here, we can play around with Gatsby’s data layer and check our Markdown file above by making a query using the title property (sample-markdown-file) in the frontmatter:

query {
  markdownRemark(frontmatter: { title: { eq: "sample-markdown-file" } }) {
    html
  }
}

This should return the following result:

{
  "data": {
    "markdownRemark": {
      "html": "<h1>Sample Markdown File</h1>\n<p>Lorem ipsum dolor sit amet, consectetur adipiscing elit. Sed consectetur imperdiet urna, vitae pellentesque mauris sollicitudin at."
      // etc.
    }
  },
  "extensions": {}
}

Notice that the content in the response is formatted in HTML. We can also query the original body as rawMarkdownBody or any of the frontmatter attributes.
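The frontmatter attributes, for example, can be queried as subfields alongside the raw body (the field names here match the sample file’s frontmatter):

```graphql
query {
  markdownRemark(frontmatter: { title: { eq: "sample-markdown-file" } }) {
    rawMarkdownBody
    frontmatter {
      title
      date
    }
  }
}
```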

Next, let’s turn our attention to approaches for handling Markdown content once it has been queried.

Using DangerouslySetInnerHTML

dangerouslySetInnerHTML is a React feature that injects raw HTML content into a component’s rendered output by overriding the innerHTML property of the DOM node. It’s considered dangerous because it bypasses React’s built-in rendering and sanitizing mechanisms, opening up the possibility of cross-site scripting (XSS) attacks if the content is not handled with special attention.

That said, if you need to render HTML content dynamically but want to avoid the risks associated with dangerouslySetInnerHTML, consider using libraries that sanitize HTML input before rendering it, such as dompurify.
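To illustrate what sanitizing protects against, here is a deliberately naive, hand-rolled example. Regex-based sanitizers like this are easy to bypass, so treat it strictly as a demonstration and reach for a maintained library like dompurify in real code:

```javascript
// Deliberately naive sanitizer, shown only to illustrate the idea of
// stripping dangerous markup before rendering. Regex-based sanitizing is
// easy to bypass — use a maintained library like dompurify in real code.
function naiveSanitize(html) {
  return html
    .replace(/<script[\s\S]*?<\/script>/gi, "") // drop script elements
    .replace(/\son\w+="[^"]*"/gi, "");          // drop inline event handlers
}

const dirty = '<p onclick="steal()">Hello <script>alert(1)</script>world</p>';
const clean = naiveSanitize(dirty);
// clean is '<p>Hello world</p>'
```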

The dangerouslySetInnerHTML prop takes an __html object with a single key that should contain the raw HTML content. Here’s an example:

const DangerousComponent = () => {
  const rawHTML = "<p>This is <em>dangerous</em> content!</p>";

  return <div dangerouslySetInnerHTML={ { __html: rawHTML } } />;
};

To display Markdown using dangerouslySetInnerHTML in a Gatsby project, we first need to query the HTML string using Gatsby’s useStaticQuery hook:

import * as React from "react";
import { useStaticQuery, graphql } from "gatsby";

const DangerouslySetInnerHTML = () => {
  const data = useStaticQuery(graphql`
    query {
      markdownRemark(frontmatter: { title: { eq: "sample-markdown-file" } }) {
        html
      }
    }
  `);

  return <div></div>;
};

Now, the html property can be injected into the dangerouslySetInnerHTML prop.

import * as React from "react";
import { useStaticQuery, graphql } from "gatsby";

const DangerouslySetInnerHTML = () => {
  const data = useStaticQuery(graphql`
    query {
      markdownRemark(frontmatter: { title: { eq: "sample-markdown-file" } }) {
        html
      }
    }
  `);

  const markup = { __html: data.markdownRemark.html };

  return <div dangerouslySetInnerHTML={ markup }></div>;
};

This might look OK at first, but if we were to open the browser to view the content, we would notice that the image declared in the Markdown file is missing from the output. We never told Gatsby to parse it. We do have two options to include it in the query, each with pros and cons:

  1. Use a plugin to parse Markdown images.
    The gatsby-remark-images plugin is capable of processing Markdown images, making them available when querying the Markdown from the data layer. The main downside is the extra configuration it requires to set up and render the files. Besides, Markdown images parsed with this plugin will only be available as HTML, so we would need a package that can render HTML content into React components, such as rehype-react.
  2. Save images in the static folder.
    The /static folder at the root of a Gatsby project can store assets that won’t be parsed by webpack but will be available in the public directory. Knowing this, we can point Markdown images to the /static directory, and they will be available anywhere in the client. The disadvantage? We are unable to leverage Gatsby’s image optimization features to minimize the overall size of the bundled package in the build process.

The gatsby-remark-images approach is probably most suited for larger projects since it is more manageable than saving all Markdown images in the /static folder.

Let’s assume that we have decided to go with the second approach of saving images to the /static folder. To reference an image in the /static directory, we just point to the filename without any special argument on the path.

const StaticImage = () => {
  return <img src={ "/desert.png" } alt="Desert" />;
};

react-markdown

The react-markdown package provides a component that renders Markdown into React components, avoiding the risks of using dangerouslySetInnerHTML. The component uses a syntax tree to build the virtual DOM, which allows for updating only the parts of the DOM that change instead of completely overwriting it. And since it uses remark, we can combine react-markdown with remark’s vast plugin ecosystem.

Let’s install the package:

npm i react-markdown

Next, we replace our prior example with the ReactMarkdown component. However, instead of querying for the html property this time, we will query for rawMarkdownBody and then pass the result to ReactMarkdown to render it in the DOM.

import * as React from "react";
import ReactMarkdown from "react-markdown";
import { useStaticQuery, graphql } from "gatsby";

const MarkdownReact = () => {
  const data = useStaticQuery(graphql`
    query {
      markdownRemark(frontmatter: { title: { eq: "sample-markdown-file" } }) {
        rawMarkdownBody
      }
    }
  `);

  return <ReactMarkdown>{data.markdownRemark.rawMarkdownBody}</ReactMarkdown>;
};

markdown-to-jsx

markdown-to-jsx is the most popular Markdown component — and the lightest since it comes without any dependencies. It’s an excellent tool to consider when aiming for performance, and it does not require remark’s plugin ecosystem. The plugin works much the same as the react-markdown package, only this time, we import a Markdown component instead of ReactMarkdown.

npm i markdown-to-jsx

import * as React from "react";
import Markdown from "markdown-to-jsx";
import { useStaticQuery, graphql } from "gatsby";

const MarkdownToJSX = () => {
  const data = useStaticQuery(graphql`
    query {
      markdownRemark(frontmatter: { title: { eq: "sample-markdown-file" } }) {
        rawMarkdownBody
      }
    }
  `);

  return <Markdown>{ data.markdownRemark.rawMarkdownBody }</Markdown>;
};

We have taken raw Markdown and parsed it as JSX. But what if we don’t necessarily want to parse it at all? We will look at that use case next.

react-md-editor

Let’s assume for a moment that we are creating a lightweight CMS and want to give users the option to write posts in Markdown. In this case, instead of parsing the Markdown to HTML, we need to query it as-is.

Rather than creating a Markdown editor from scratch to solve this, we can reach for one of several packages capable of handling raw Markdown for us. My personal favorite is react-md-editor.

Let’s install the package:

npm i @uiw/react-md-editor

The MDEditor component can be imported and set up as a controlled component:

import * as React from "react";
import { useState } from "react";
import MDEditor from "@uiw/react-md-editor";

const ReactMDEditor = () => {
  const [value, setValue] = useState("**Hello world!!!**");

  return <MDEditor value={ value } onChange={ setValue } />;
};

The plugin also comes with a built-in MDEditor.Markdown component used to preview the rendered content:

import * as React from "react";
import { useState } from "react";
import MDEditor from "@uiw/react-md-editor";

const ReactMDEditor = () => {
  const [value, setValue] = useState("**Hello world!**");

  return (
    <>
      <MDEditor value={value} onChange={ setValue } />
      <MDEditor.Markdown source={ value } />
    </>
  );
};

That was a look at various headaches you might encounter when working with Markdown files in Gatsby. Next, we are turning our attention to another type of file, PDF.

Solving PDF Headaches In Gatsby

PDF files handle content with a completely different approach than Markdown files. With Markdown, we simplify the content to its most raw form so it can be easily handled across different front ends. PDFs, however, are the content presented to users on the front end. Rather than extracting the raw content from the file, we want the user to see it as-is, often by making it available for download or embedding it so that the user views the contents directly on the page, sort of like a video.

I want to show you four approaches to consider when embedding a PDF file on a page in a Gatsby project.

Using The iframe Element

The easiest way to embed a PDF into your Gatsby project is perhaps through an iframe element:

import * as React from "react";
import samplePDF from "./assets/lorem-ipsum.pdf";

const IframePDF = () => {
  return <iframe src={ samplePDF }></iframe>;
};

It’s worth calling out here that the iframe element supports lazy loading (loading="lazy") to boost performance in instances where it doesn’t need to load right away.

Embedding A Third-Party Viewer

There are situations where PDFs are more manageable when stored in a third-party service, such as Google Drive, which includes a PDF viewer that can be embedded directly on the page. In these cases, we can use the same iframe we used above, but with the source pointed at the service.

import * as React from "react";

const ThirdPartyIframePDF = () => {
  return (
    <iframe
      src="https://drive.google.com/file/d/1IiRZOGib_0cZQY9RWEDslMksRykEnrmC/preview"
      allowFullScreen
      title="PDF Sample in Drive"
    />
  );
};

It’s a good reminder that you need to trust any third-party content that’s served in an iframe. If you’re effectively loading a document from a source you do not control, your site could become prone to security vulnerabilities should that source become compromised.
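One mitigation worth knowing about is the iframe sandbox attribute, which starts from "deny everything" and re-enables only the capabilities you list. A hedged sketch; the exact tokens depend on what the embedded viewer needs in order to function (Drive’s viewer, for instance, requires scripts):

```html
<!-- Illustrative hardening of an embedded third-party viewer. -->
<iframe
  src="https://drive.google.com/file/d/1IiRZOGib_0cZQY9RWEDslMksRykEnrmC/preview"
  sandbox="allow-scripts allow-same-origin"
  referrerpolicy="no-referrer"
  title="PDF Sample in Drive"
></iframe>
```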

Using react-pdf

The react-pdf package provides an interface to render PDFs as React components. It is based on pdf.js, a JavaScript library that renders PDFs using HTML Canvas.

To display a PDF file on a page, the react-pdf library exposes the Document and Page components:

  • Document: Loads the PDF passed in its file prop.
  • Page: Displays the page passed in its pageNumber prop. It should be placed inside Document.

We can install it in our project:

npm i react-pdf

Before we put react-pdf to use, we will need to set up a service worker for pdf.js to process time-consuming tasks such as parsing and rendering a PDF document.

import * as React from "react";
import { pdfjs } from "react-pdf";

pdfjs.GlobalWorkerOptions.workerSrc = "https://unpkg.com/pdfjs-dist@3.6.172/build/pdf.worker.min.js";

const ReactPDF = () => {
  return <div></div>;
};

Now, we can import the Document and Page components, passing the PDF file to their props. We can also import the component’s necessary styles while we are at it.

import * as React from "react";
import { Document, Page } from "react-pdf";

import { pdfjs } from "react-pdf";
import "react-pdf/dist/esm/Page/AnnotationLayer.css";
import "react-pdf/dist/esm/Page/TextLayer.css";

import samplePDF from "./assets/lorem-ipsum.pdf";

pdfjs.GlobalWorkerOptions.workerSrc = "https://unpkg.com/pdfjs-dist@3.6.172/build/pdf.worker.min.js";

const ReactPDF = () => {
  return (
    <Document file={ samplePDF }>
      <Page pageNumber={ 1 } />
    </Document>
  );
};

Since navigating the PDF changes the current page, we can add state management by storing the current page in state and passing it to the Page component’s pageNumber prop:

import { useState } from "react";

// ...

const ReactPDF = () => {
  const [currentPage, setCurrentPage] = useState(1);

  return (
    <Document file={ samplePDF }>
      <Page pageNumber={ currentPage } />
    </Document>
  );
};

One issue is that we have pagination but don’t have a way to navigate between pages. We can change that by adding controls. First, we will need to know the number of pages in the document, which is accessed on the Document component’s onLoadSuccess event:

// ...

const ReactPDF = () => {
  const [totalPages, setTotalPages] = useState(null);
  const [currentPage, setCurrentPage] = useState(1);

  const handleLoadSuccess = ({ numPages }) => {
    setTotalPages(numPages);
  };

  return (
    <Document file={ samplePDF } onLoadSuccess={ handleLoadSuccess }>
      <Page pageNumber={ currentPage } />
    </Document>
  );
};

Next, we display the current page number and add “Next” and “Previous” buttons with their respective handlers to change the current page:

// ...

const ReactPDF = () => {
  const [currentPage, setCurrentPage] = useState(1);
  const [totalPages, setTotalPages] = useState(null);

  const handlePrevious = () => {
    // check that it isn't the first page
    if (currentPage > 1) {
      setCurrentPage(currentPage - 1);
    }
  };

  const handleNext = () => {
    // check that it isn't the last page
    if (currentPage < totalPages) {
      setCurrentPage(currentPage + 1);
    }
  };

  const handleLoadSuccess = ({ numPages }) => {
    setTotalPages(numPages);
  };

  return (
    <div>
      <Document file={ samplePDF } onLoadSuccess={ handleLoadSuccess }>
        <Page pageNumber={ currentPage } />
      </Document>
      <button onClick={ handlePrevious }>Previous</button>
      <p>{currentPage}</p>
      <button onClick={ handleNext }>Next</button>
    </div>
  );
};
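The boundary checks in handlePrevious and handleNext can also be thought of as clamping a requested page to the valid range. A small, hypothetical helper makes the rule explicit (numPages being the total reported by onLoadSuccess):

```javascript
// Clamp a requested page into the valid range [1, numPages].
// Mirrors the guards in the handlers above; returns 1 while the
// document has not reported its page count yet.
function clampPage(requested, numPages) {
  if (numPages == null) return 1; // document not loaded yet
  return Math.min(Math.max(requested, 1), numPages);
}

// e.g. setCurrentPage(clampPage(currentPage + 1, numPages));
```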

This provides us with everything we need to embed a PDF file on a page via an HTML canvas element using react-pdf and pdf.js.

There is another similar package capable of embedding a PDF file in a viewer, complete with pagination controls. We’ll look at that next.

Using react-pdf-viewer

Unlike react-pdf, the react-pdf-viewer package provides built-in customizable controls right out of the box, which makes embedding a multi-page PDF file a lot easier than having to import them separately.

Let’s install it:

npm i @react-pdf-viewer/core@3.12.0 @react-pdf-viewer/default-layout

Since react-pdf-viewer also relies on pdf.js, we will need to create a service worker as we did with react-pdf, but only if we are not using both packages at the same time. This time, we are using a Worker component with a workerUrl prop directed at the worker’s package.

import * as React from "react";
import { Worker } from "@react-pdf-viewer/core";

const ReactPDFViewer = () => {
  return (
    <>
      <Worker workerUrl="https://unpkg.com/pdfjs-dist@3.4.120/build/pdf.worker.min.js"></Worker>
    </>
  );
};

Note that a worker like this ought to be set just once at the layout level. This is especially true if you intend to use the PDF viewer across different pages.

Next, we import the Viewer component with its styles and point it at the PDF through its fileUrl prop.

import * as React from "react";
import { Viewer, Worker } from "@react-pdf-viewer/core";

import "@react-pdf-viewer/core/lib/styles/index.css";

import samplePDF from "./assets/lorem-ipsum.pdf";

const ReactPDFViewer = () => {
  return (
    <>
      <Viewer fileUrl={ samplePDF } />
      <Worker workerUrl="https://unpkg.com/pdfjs-dist@3.6.172/build/pdf.worker.min.js"></Worker>
    </>
  );
};

Once again, we need to add controls. We can do that by importing the defaultLayoutPlugin (including its corresponding styles), making an instance of it, and passing it in the Viewer component’s plugins prop.

import * as React from "react";
import { Viewer, Worker } from "@react-pdf-viewer/core";
import { defaultLayoutPlugin } from "@react-pdf-viewer/default-layout";

import "@react-pdf-viewer/core/lib/styles/index.css";
import "@react-pdf-viewer/default-layout/lib/styles/index.css";

import samplePDF from "./assets/lorem-ipsum.pdf";

const ReactPDFViewer = () => {
  const defaultLayoutPluginInstance = defaultLayoutPlugin();

  return (
    <>
      <Viewer fileUrl={ samplePDF } plugins={ [defaultLayoutPluginInstance] } />
      <Worker workerUrl="https://unpkg.com/pdfjs-dist@3.6.172/build/pdf.worker.min.js"></Worker>
    </>
  );
};

Again, react-pdf-viewer is an alternative to react-pdf that can be a little easier to implement if you don’t need full control over your PDF files, just the embedded viewer.

There is one more plugin that provides an embedded viewer for PDF files. We will look at it, but only briefly, because I personally do not recommend using it in favor of the other approaches we’ve covered.

Why You Shouldn’t Use react-file-viewer

The last plugin we will check out is react-file-viewer, a package that offers an embedded viewer with a simple interface and the capacity to handle a variety of media in addition to PDF files, including images, videos, documents, and spreadsheets.

import * as React from "react";
import FileViewer from "react-file-viewer";

const PDFReactFileViewer = () => {
  return <FileViewer fileType="pdf" filePath="/lorem-ipsum.pdf" />;
};

While react-file-viewer will get the job done, it is extremely outdated and could easily create more headaches than it solves with compatibility issues. I suggest avoiding it in favor of either an iframe, react-pdf, or react-pdf-viewer.

Solving 3D Model Headaches In Gatsby

I want to cap this brief two-part series with one more media type that might cause headaches in a Gatsby project: 3D models.

A 3D model file is a digital representation of a three-dimensional object that stores information about the object’s geometry, texture, shading, and other properties. On the web, 3D model files are used to enhance user experiences by bringing interactive and immersive content to websites. You are most likely to encounter them in product visualizations, architectural walkthroughs, or educational simulations.

There is a multitude of 3D model formats, including glTF, OBJ, FBX, STL, and so on. We will use glTF models for a demonstration of a headache-free 3D model implementation in Gatsby.

The GL Transmission Format (glTF) was designed specifically for the web and real-time applications, making it ideal for our example. Using glTF files does require a specific webpack loader, so for simplicity’s sake, we will save the glTF model in the /static folder at the root of our project as we look at two approaches to create the 3D visual with Three.js:

  1. Using a vanilla implementation of Three.js,
  2. Using a package that integrates Three.js as a React component.

Using Three.js

Three.js creates and loads interactive 3D graphics directly on the web with the help of WebGL, a JavaScript API for rendering 3D graphics in real time inside HTML canvas elements.

Three.js is not integrated with React or Gatsby out of the box, so we must modify our code to support it. A Three.js tutorial is out of scope for what we are discussing in this article, although excellent learning resources are available in the Three.js documentation.

We start by installing the three library to the Gatsby project:

npm i three

Next, we write a function to load the glTF model for Three.js to reference it. This means we need to import a GLTFLoader add-on to instantiate a new loader object.

import * as React from "react";
import * as THREE from "three";

import { GLTFLoader } from "three/addons/loaders/GLTFLoader.js";

const loadModel = async (scene) => {
  const loader = new GLTFLoader();
};

We use the scene object as a parameter in the loadModel function so we can attach our 3D model once loaded to the scene.

From here, we use loader.load() which takes four arguments:

  1. The glTF file location,
  2. A callback when the resource is loaded,
  3. A callback while loading is in progress,
  4. A callback for handling errors.

import * as React from "react";
import * as THREE from "three";

import { GLTFLoader } from "three/addons/loaders/GLTFLoader.js";

const loadModel = async (scene) => {
  const loader = new GLTFLoader();

  await loader.load(
    "/strawberry.gltf", // glTF file location
    function (gltf) {
      // called when the resource is loaded
      scene.add(gltf.scene);
    },
    undefined, // called while loading is in progress, but we are not using it
    function (error) {
      // called when loading returns errors
      console.error(error);
    }
  );
};

Let’s create a component to host the scene and load the 3D model. We need to know the element’s client width and height, which we can get using React’s useRef hook to access the element’s DOM properties.

import * as React from "react";
import * as THREE from "three";

import { useRef, useEffect } from "react";

// ...

const ThreeLoader = () => {
  const viewerRef = useRef(null);

  return <div style={ { height: 600, width: "100%" } } ref={ viewerRef }></div>; // Gives the element its dimensions
};

Since we are using the element’s clientWidth and clientHeight properties, we need to create the scene on the client side inside React’s useEffect hook where we configure the Three.js scene with its necessary complements, e.g., a camera, the WebGL renderer, and lights.

useEffect(() => {
  const { current: viewer } = viewerRef;

  const scene = new THREE.Scene();

  const camera = new THREE.PerspectiveCamera(75, viewer.clientWidth / viewer.clientHeight, 0.1, 1000);

  const renderer = new THREE.WebGLRenderer();

  renderer.setSize(viewer.clientWidth, viewer.clientHeight);

  const ambientLight = new THREE.AmbientLight(0xffffff, 0.4);
  scene.add(ambientLight);

  const directionalLight = new THREE.DirectionalLight(0xffffff);
  directionalLight.position.set(0, 0, 5);
  scene.add(directionalLight);

  viewer.appendChild(renderer.domElement);
  renderer.render(scene, camera);
}, []);

Now we can invoke the loadModel function, passing the scene to it as the only argument:

useEffect(() => {
  const { current: viewer } = viewerRef;

  const scene = new THREE.Scene();

  const camera = new THREE.PerspectiveCamera(75, viewer.clientWidth / viewer.clientHeight, 0.1, 1000);

  const renderer = new THREE.WebGLRenderer();

  renderer.setSize(viewer.clientWidth, viewer.clientHeight);

  const ambientLight = new THREE.AmbientLight(0xffffff, 0.4);
  scene.add(ambientLight);

  const directionalLight = new THREE.DirectionalLight(0xffffff);
  directionalLight.position.set(0, 0, 5);
  scene.add(directionalLight);

  loadModel(scene); // Here!

  viewer.appendChild(renderer.domElement);
  renderer.render(scene, camera);
}, []);

The last part of this vanilla Three.js implementation is to add OrbitControls that allow users to navigate the model. That might look something like this:

import * as React from "react";
import * as THREE from "three";

import { useRef, useEffect } from "react";
import { OrbitControls } from "three/examples/jsm/controls/OrbitControls";
import { GLTFLoader } from "three/addons/loaders/GLTFLoader.js";

const loadModel = async (scene) => {
  const loader = new GLTFLoader();

  await loader.load(
    "/strawberry.gltf", // glTF file location
    function (gltf) {
      // called when the resource is loaded
      scene.add(gltf.scene);
    },
    undefined, // called while loading is in progress, but it is not used
    function (error) {
      // called when loading has errors
      console.error(error);
    }
  );
};

const ThreeLoader = () => {
  const viewerRef = useRef(null);

  useEffect(() => {
    const { current: viewer } = viewerRef;

    const scene = new THREE.Scene();

    const camera = new THREE.PerspectiveCamera(75, viewer.clientWidth / viewer.clientHeight, 0.1, 1000);

    const renderer = new THREE.WebGLRenderer();

    renderer.setSize(viewer.clientWidth, viewer.clientHeight);

    const ambientLight = new THREE.AmbientLight(0xffffff, 0.4);
    scene.add(ambientLight);

    const directionalLight = new THREE.DirectionalLight(0xffffff);
    directionalLight.position.set(0, 0, 5);
    scene.add(directionalLight);

    loadModel(scene);

    const target = new THREE.Vector3(-0.5, 1.2, 0);
    const controls = new OrbitControls(camera, renderer.domElement);
    controls.target = target;

    viewer.appendChild(renderer.domElement);

    var animate = function () {
      requestAnimationFrame(animate);
      controls.update();
      renderer.render(scene, camera);
    };
    animate();
  }, []);

  return <div style={ { height: 600, width: "100%" } } ref={ viewerRef }></div>;
};

That is a straight Three.js implementation in a Gatsby project. Next is another approach using a library.

Using React Three Fiber

react-three-fiber is a library that integrates Three.js with React. One of its advantages over the vanilla Three.js approach is its ability to manage and update 3D scenes, making it easier to compose scenes without manually handling the intricate aspects of Three.js.

We begin by installing the library to the Gatsby project:

npm i @react-three/fiber @react-three/drei

Notice that the installation command includes the @react-three/drei package, which we will use to add controls to the 3D viewer.

I personally love react-three-fiber for being tremendously self-explanatory. For example, I had a relatively easy time migrating the extensive chunk of code from the vanilla approach to this much cleaner code:

import * as React from "react";
import { useLoader, Canvas } from "@react-three/fiber";
import { OrbitControls } from "@react-three/drei";
import { GLTFLoader } from "three/examples/jsm/loaders/GLTFLoader";

const ThreeFiberLoader = () => {
  const gltf = useLoader(GLTFLoader, "/strawberry.gltf");

  return (
    <Canvas camera={ { fov: 75, near: 0.1, far: 1000, position: [5, 5, 5] } } style={ { height: 600, width: "100%" } }>
      <ambientLight intensity={ 0.4 } />
      <directionalLight color="white" />
      <primitive object={ gltf.scene } />
      <OrbitControls makeDefault />
    </Canvas>
  );
};

Thanks to react-three-fiber, we get the same result as a vanilla Three.js implementation but with fewer steps, more efficient code, and a slew of abstractions for managing and updating Three.js scenes.

Two Final Tips

I want to leave you with two final considerations to take into account when working with media files in a Gatsby project.

Bundling Assets Via Webpack And The /static Folder

Importing an asset as a module so it can be bundled by webpack is a common strategy to add post-processing and minification, as well as hashing paths on the client. But there are two additional use cases where you might want to avoid it altogether and use the static folder in a Gatsby project:

  • Referencing a library outside the bundled code to prevent webpack compatibility issues or a lack of specific loaders.
  • Referencing assets with a specific name, for example, in a web manifest file.

You can find a detailed explanation of the /static folder and how to use it to your advantage in the Gatsby documentation.

Embedding Files From Third-Party Services

Secondly, you can never be too cautious when embedding third-party services on a website. Replaced content elements, like iframe, can introduce various security vulnerabilities, particularly when you do not have control of the source content. By integrating a third party’s scripts, widgets, or content, a website or app becomes prone to potential vulnerabilities, such as iframe injection or cross-frame scripting.

Moreover, if an integrated third-party service experiences downtime or performance issues, it can directly impact the user experience.

Conclusion

This article explored various approaches for working around common headaches you may encounter when working with Markdown, PDF, and 3D model files in a Gatsby project. In the process, we leveraged several React plugins and Gatsby features that handle how content is parsed, embed files on a page, and manage 3D scenes.

This is also the second article in a brief two-part series that addresses common headaches working with a variety of media types in Gatsby. The first part covers more common media files, including images, video, and audio.

If you’re looking for more cures to Gatsby headaches, please check out my other two-part series that investigates internationalization.


15 Best New Fonts, October 2023

October 16th, 2023

We’re entering the final quarter of 2023, and even in the pre-holiday lull, there are still plenty of fonts to get excited about. In this month’s edition of our roundup of the best new fonts for designers, there are lots of revivals, some excellent options for logo designers, and some creative twists on letterforms. Enjoy!

Categories: Designing, Others

How can Employee Attendance Tracking Improve Workflow Efficiency

October 16th, 2023

Attendance tracking is the methodology or strategy organizations use to monitor employee attendance, punctuality, and absence. It also keeps track of employees’ working and activity hours. Companies use various tools, programs, and applications to keep detailed and accurate records of their staff.

While traditional ways of recording attendance are still in use, attendance tracking methods have advanced alongside technology. Tracking employee attendance with well-built spreadsheets and with biometric technologies are two of the most popular and widely used approaches.

You can monitor these factors manually or with unwieldy timesheets, but since time is money, there is a better, faster way in the digital era. By using time and attendance systems, organizations can streamline their processes and maximize the effectiveness of their workforce. As you expand output, streamline processes, and raise revenues, the possibilities are limitless.

Let us highlight the points to find out how the time and attendance system can enhance workflow efficiency.

No Fake Attendance

Traditional, manual techniques for tracking attendance are prone to manipulation. An employee could take the day off by asking a coworker to clock in on their behalf or by giving their login credentials to someone else.

Integrating attendance tracking software is essential for accurate attendance information for employees. Forging a fingerprint or retina pattern is impossible. This procedure helps the HR department to ensure that no employee can violate company policies. It also reduces the workload of HR because it is no longer necessary to manually check the attendance of each employee.

Productivity is Improved 

Attendance policies like clock-in and clock-out times play a major role in avoiding errors in employees’ paychecks. The main benefit of putting a time and attendance system into action is that it saves time for every single employee in the organization.

It cuts out the lengthy, unwanted process in which employees have to manually report their schedules or get their attendance reports signed by their superiors or managers. Automatic attendance systems such as biometric fingerprint scanners make the entire procedure not only shorter but also error-free.

The result is higher productivity, as employees can focus on their goals and the organization’s processes rather than tracking down the time they worked and checking for errors. This way, business operations see definite growth, and the higher productivity improves financial performance.

Accountability of Employees

What makes an employee more responsible? Effective time and attendance tracking software. Employees are more likely to be consistent and punctual when they know their working hours are tracked accurately and without mistakes.

This, in turn, results in a more efficient business workflow and encourages an attitude of responsibility and accountability.

Labour Laws 

Beyond payroll and timesheets, every organization must also ensure that legal requirements and corporate policies are properly taken care of.

Tracking employees’ attendance will help you comply with the Fair Labor Standards Act and other employment-related rules by streamlining schedules, processing accruals, and implementing precise punch-in and punch-out procedures.

A firm or business that violates labor regulations faces financial penalties, the potential loss of time, effort, and staff members, as well as possible negative effects on their brand. That is why it is necessary to take these variables carefully. A customizable system enables you to create and implement corporate rules that are advantageous to both management and the entire workforce, creating a win-win situation.

Less Human Errors

Both employers and employees benefit from an automated attendance tracking system because the chance of human error is minimal. Typically, employees spend a lot of time manually calculating hourly reports and preparing payroll reports. This is a time-consuming and labor-intensive process, especially if there are mistakes that need to be corrected.

In addition, errors that slip through can lead to more serious compliance issues. These errors can be avoided with timekeeping systems that track, highlight, and effectively correct them. With such a system in place, there are no complications, and everyone, especially the employees, gets fair and just compensation.

Saves Money

These timekeeping systems not only prevent the time theft that some dishonest employees commit but also save you more than you can imagine. They automate accrual for paid leave, minimize overtime pay, provide a smoother way to calculate the salary budget, improve employee communication, and more.

Assists Advance Planning

To address difficulties in daily tasks, the employer must inform the employee more quickly about last-minute changes in the work schedule. Instead of relying on emails, text messages, and calls, the attendance management system can certainly help employees receive quick notifications of significant schedule changes. This reduces the possibility of messages being missed or mishandled, which can cause significant confusion.

Time Utilization

Labor costs make up most overhead expenses. We all know that time, in a very real sense, is money, and this applies to most corporate processes as a whole.

Some workers engage in dishonest practices like punching in for their coworkers. They can easily falsify their punch-in and punch-out times, which has a substantial negative impact on the budget. Employees can also get away with tardiness, distraction, and clocking off early.

Moreover, a few organizations still use manual methods, in which case employees can tamper with their recorded schedules.

The best part about a time and attendance system is that employees have to log their working hours using their IDs, biometrics, or other time input devices. These measures make it impossible for employees to cheat, leaving no scope for time theft.

Conclusion

The enormous advantages of employee attendance tracking can significantly enhance the workflow efficiency of any organization. It provides insights, reduces errors, streamlines communication, and boosts productivity. By ensuring transparency, businesses can save a lot of money and reduce potential time theft caused by employees who are not honest with the organization. Embracing modern attendance tracking systems can transform the way businesses operate and lead to a more efficient working environment.

Featured Image by rawpixel.com on Freepik

The post How can Employee Attendance Tracking Improve Workflow Efficiency appeared first on noupe.

Categories: Others

Progressive Web Apps (PWAs): Unlocking The Future of Mobile-First Web Development

October 13th, 2023

There are over 5.4 billion mobile users today, meaning that over 68% of the population tap into an online business via their smartphone devices.

Categories: Designing, Others

Quality Assurance in Software Testing: A Comprehensive Guide

October 12th, 2023

In today’s tech-driven world, software is the heartbeat of innovation. Software touches every aspect of our lives, from user-friendly apps to intricate business solutions. Yet, amidst this complexity, ensuring software quality is non-negotiable. This is where quality assurance in software testing steps in. QA is not a mere stage; it’s a mindset, a systematic approach ensuring the software meets high standards.

In this blog, we unravel the core of quality assurance in software testing. We’ll dive deep, from understanding its basics to exploring vital components and best practices. Whether you’re a seasoned professional or a curious beginner, we’ll demystify QA’s challenges, discussing innovative strategies. So let’s get started.

What is Quality Assurance in Software Testing?

Quality assurance is a systematic process. It is often referred to as quality management in software engineering and is employed to ensure that the software being developed meets customer requirements. Not only this! It involves a series of planned activities, processes, and methodologies. These are aimed at preventing defects and issues in the software development lifecycle. It holds utmost importance due to the following reasons:

  • Ensures reliability

QA methods rigorously test software, guaranteeing flawless performance under diverse conditions. Moreover, it fosters user reliance. It meticulously identifies and eliminates bugs. As a result, it ensures users experience consistent and dependable functionality.

  • Customer satisfaction

QA aligns software with customer expectations. As a result, it boosts satisfaction by delivering precisely what users anticipate. Furthermore, it ensures that user interfaces are intuitive and features are functional. Assurance of these elements leads to content and loyal users.

  • Cost-effectiveness

Early defect detection through QA reduces post-release costs. As a result, it ensures efficient use of resources during development. By identifying issues in the early stages, QA prevents costly fixes later. QA thereby optimizes the development process and budget allocation.

  • Brand reputation

Quality assurance in software testing services is essential to establishing a brand’s reputation. It also guarantees the excellent performance and dependability of the software. Delivering high-quality software on a continuous basis thereby improves the brand’s reputation. Moreover, it fosters user and stakeholder confidence. 

  • Compliance and security

Software compliance with industry rules and standards is ensured by QA. As a result, it strengthens security protocols and safeguards important information. QA guarantees that the program is resistant to cyber attacks by thoroughly testing security mechanisms. Additionally, it protects both user information and the credibility of the company.

  • Optimized performance

Performance bottlenecks are found and fixed through QA, ensuring the product runs effectively. This optimization guarantees a seamless user experience, even during high-traffic periods. 

Many people tend to confuse quality assurance in software testing with quality control. However, both terms show some differences. What are they, you ask? Let’s find out.

What Is the Difference Between Quality Assurance and Quality Control?

Quality assurance is a proactive and process-oriented approach that focuses on preventing defects before they occur. Therefore, it entails using systematic methods and activities across the SDLC. These rules guarantee that the item conforms to all relevant requirements and standards. QA strongly emphasizes process improvement, improving them over time and using the best practices. It’s all about developing methods to stop mistakes from occurring.

QC, by contrast, adopts a reactive, product-oriented approach to guarantee a high-quality outcome. It follows specific steps and techniques to identify and address problems with the completed product. QC employs processes including testing, inspections, and reviews to find and fix defects. The primary objective of quality control in software engineering is to identify flaws in the finished product and confirm that it adheres to the desired quality standards.

Let us now go deeper into quality assurance in software testing and understand its key components.

What are the Key Components of Quality Assurance?

Each of the vital components of QA contributes to the process of testing software in its own way. In this section, we will discuss five crucial components, emphasizing their significance and connection:

  • Test planning and strategy

Test planning is the first step in the quality assurance in software testing. It demands the creation of a test strategy that outlines the testing’s objectives, limitations, and other features. Moreover, this stage guides the subsequent testing procedures and creates the structure for the entire QA process.

  • Test design and execution

Once a plan is in place, QA teams create comprehensive test cases and scenarios that closely conform to the project’s requirements and user flows. These test cases outline the testing procedures. QA specialists may execute them manually or use automated testing tools. Regression testing is also applied to ensure that updated code does not adversely affect features that already exist.

  • Defect reporting and tracking

During testing, QA engineers identify, document, and report defects and issues. They meticulously record each defect with detailed information. As a result, it enables developers to understand the problem fully. Teams prioritize and track these defects throughout their lifecycle.

  • Performance testing and optimization

Performance testing is crucial to evaluate the software’s responsiveness, stability, etc. The testing tools simulate user loads, enabling the identification of bottlenecks and areas of improvement. Furthermore, once identified, teams optimize the software based on the testing results to address performance issues.

  • Continuous improvement

Post-testing, an analysis of testing outcomes and user feedback is conducted. This analysis helps identify areas for improvement within QA processes. Teams incorporate lessons learned from past projects and establish iterative feedback mechanisms, ensuring the software becomes increasingly efficient and effective as user needs change.
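
The defect reporting and tracking component described above revolves around structured defect records that move through a lifecycle. Here is a minimal sketch; the field names and lifecycle states are illustrative, not taken from any particular tracking tool.

```javascript
// Sketch: a structured defect record of the kind QA engineers file,
// plus a guard that only lets the record move forward through its lifecycle.
const defect = {
  id: "BUG-1042",
  title: "Checkout button unresponsive on mobile Safari",
  severity: "high",        // drives prioritization
  status: "open",          // open -> in-progress -> resolved -> verified
  steps: ["Open cart on iOS Safari", "Tap Checkout"],
  expected: "Payment page loads",
  actual: "Nothing happens",
};

function transition(record, next) {
  const order = ["open", "in-progress", "resolved", "verified"];
  // disallow moving backward or staying in place
  if (order.indexOf(next) <= order.indexOf(record.status)) {
    throw new Error(`cannot move from ${record.status} to ${next}`);
  }
  return { ...record, status: next };
}

console.log(transition(defect, "in-progress").status);
```

Capturing steps, expected behavior, and actual behavior in the record is what lets developers understand the problem fully without going back to the tester.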

Now, are there any practices you can maintain to boost the efficiency of quality assurance in software testing? Yes, there are! Let’s find out what they are.

Quality Assurance in Software Testing: Best Practices

Quality assurance in software testing is not just a phase but a mindset. It is a system of rules and procedures that ensures the delivery of superior software products. To maintain effectiveness and dependability throughout the development lifecycle, the following QA best practices should be implemented:

  • Clear and detailed test cases

Build comprehensive test cases that cover a variety of scenarios and edge cases. QA engineers are guided by clear, thorough test cases. It also makes it possible to test thoroughly and consistently. So, these situations have to be clear and well-documented.

  • Comprehensive test planning

Careful test planning is the foundation of effective quality assurance in software testing. Establish precise goals, boundaries, budgets, and deadlines. A well-structured test plan also offers QA activities a roadmap. As a result, it guarantees that every component of the software is carefully inspected.

  • Automation where appropriate

Automation accelerates repetitive and time-consuming testing tasks. As a result, it ensures rapid feedback during development. Implement test automation for regression tests, smoke tests, and repetitive scenarios. It allows QA engineers to focus on complex, exploratory testing and automate the quality assurance process.

  • Continuous integration and continuous testing

Use CI and CT to bring quality assurance into the software while it is still in the development phase. Automate testing as part of the CI/CD cycle to help developers spot problems early. This results in quicker problem fixes and better program stability.

  • Realistic test data management

Recreate real-world scenarios using a range of tests and realistic data. To ensure the software’s resilience, test data should span a range of inputs. Realistic test data helps spot potential problems in handling and validating data.

  • Rigorous defect reporting and tracking

Implement a robust defect reporting and tracking system. Clearly document defects, providing detailed information about each issue. Furthermore, prioritize defects based on severity and track their progress until resolution.
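
A "clear and detailed test case" from the list above can be as small as a named assertion with explicit inputs and one expected outcome. The sketch below uses a made-up `applyDiscount` function as the code under test; it stands in for whatever your project actually ships.

```javascript
// Code under test — a hypothetical stand-in for real application logic.
function applyDiscount(price, percent) {
  if (percent < 0 || percent > 100) throw new Error("invalid percent");
  // round to 2 decimal places to avoid floating-point drift in currency
  return Math.round(price * (1 - percent / 100) * 100) / 100;
}

// Test case: explicit name, arranged inputs, one clear assertion.
// A 25% discount on 80.00 should yield 60.00; edge cases (0%, 100%,
// out-of-range percentages) would get their own test cases.
console.assert(applyDiscount(80, 25) === 60, "25% off 80 should be 60");
console.log("applyDiscount test passed");
```

Tests written this way are also the natural candidates for the regression and smoke automation discussed above, since they run identically on every build.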

To Sum Up

Quality assurance in software testing sits at the core of flawless software. You need to be comprehensive in your approach to QA. Through our blog, we have tried to give you elements that can help you achieve that. So don’t wait any further and take your software testing to the next level!

Featured image by Rezvani on Unsplash

The post Quality Assurance in Software Testing: A Comprehensive Guide appeared first on noupe.

Categories: Others

The 12 Most Controversial Ad Campaigns of the 21st Century

October 11th, 2023

How far would an organization be willing to go for the chance to generate a little extra buzz? In this list, we’re going to find out.

Categories: Designing, Others

Why are Email Signatures Necessary in the World of Business?

October 11th, 2023

Every day, roughly 4 billion people communicate via email. This includes, of course, those who take to this medium to conduct business. If email is a strong part of your daily business interactions, then you’ll want to read this.

When crafting the perfect email that you want your recipient to read carefully and respond to, it’s not enough to come up with a catchy subject line and a juicy introduction. The way you sign off your message is just as important to keep that conversation going.

Specifically, we’re talking about your email signature. Not only are email signatures necessary if you want to provide some essential details about yourself, but they can also generate some impressive and perhaps unexpected benefits. Let’s find out more.

What is an email signature?

Not to be confused with a digital signature, an email signature refers to the block of text that email users place at the bottom of a message and where they include any relevant business details about themselves and their company. 

Additionally, many email signatures also feature some visual elements, such as a headshot of the sender or the company logo. It’s important to place your email signature at the very end of your email, preferably separated from the main body with a clear line or another visual cue.

While it is, of course, essential to ensure that your email signature is as comprehensive and informative as possible, you’ll also want to be wary of creating a signature that overwhelms the reader by providing too much information. 

Similarly, you’ll want to keep your email signature size fairly small, so that it doesn’t stand out abruptly and confuse the recipient while they’re reading your message. But let’s take a closer look at the best practices to craft an effective email signature.

The dos and don’ts of a professional email signature

Do: Include your full name

It may sound obvious, but the first element you’ll want to include in your email signature is your full name. It doesn’t matter if you are also used to signing off your email by using your first name. As your email signature is a separate, independent block of text, it should feature both your first and last name.

Don’t: Add any confidential information

Personal details such as your home address, private phone number, and any links to your personal social media accounts should be avoided.

Do: Incorporate visuals

While you may not necessarily need (or want) to include a photo of yourself in your email signature, it’s important to display a logo of the company you work for, to establish trust and help strengthen brand identity (more on that later).

Don’t: Overload it with information

As we mentioned earlier, you don’t need your email signature to be packed with information that might end up putting off the reader. Your full name, job title, business contact details, company name (and logo), and company website are often more than enough.

Image by Cytonn Photography on Unsplash

Do: Add relevant business links

Have you recently won an award? Has your company just launched an advanced enterprise cloud communications platform? Do you want to promote a press release? It may be worth including some of these business links in your email signature, too.

Don’t: Include any personal links

While you might be tempted to add your personal Instagram or Twitter handle to your email signature in order to gain more exposure and win a few more followers, it’s important that you refrain from doing so. Email signatures should only have business-related purposes, and it’s crucial that you don’t mix your professional and personal lives.
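
Putting the dos and don'ts together, a signature block reduces to a handful of fields rendered as simple HTML. The sketch below assembles one; every name, number, and URL is a placeholder.

```javascript
// Sketch: assembling a simple HTML email signature from the fields the
// article recommends (full name, job title, company, business contacts).
// All values passed in are placeholders, not real contact details.
function buildSignature({ name, title, company, phone, website }) {
  return [
    `<div style="border-top:1px solid #ccc;padding-top:8px">`, // visual cue separating it from the body
    `<strong>${name}</strong><br>`,
    `${title}, ${company}<br>`,
    `Tel: ${phone} &middot; <a href="${website}">${website}</a>`,
    `</div>`,
  ].join("\n");
}

console.log(buildSignature({
  name: "Jane Doe",
  title: "Product Designer",
  company: "Acme Inc.",
  phone: "+1 555 0100",
  website: "https://acme.example",
}));
```

Keeping the signature a small, bordered block like this satisfies both goals above: it stays visually separated from the message and carries only business-relevant details.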

Are email signatures necessary? Seven reasons why you need them

If you’re still asking yourself, “Are email signatures necessary?” then keep reading as we reveal the seven main reasons why they, indeed, are.

  1. They increase brand awareness

Is your brand new to the industry? An email signature can become your best friend. It’s a quick and easy way to stick in the minds of your readers, as they’ll be able to associate your name with a specific brand, and your brand with a specific logo or website.

All this, in turn, helps you to boost brand awareness, which is something all brands benefit from, especially small and young ones.

  2. They showcase expertise and professionalism

How many times have you received an email from someone you didn’t know and binned it because it didn’t look “legit”? Well, if that resonates, it might be because that person either didn’t have an email signature, or their email signature was poorly written.

By crafting an email signature that includes verifiable links, detailed information, and other credible sources such as your preferred cloud communications method, you’ll instantly come across as a serious and trustworthy professional. People will no longer suspect that your email might be spam, and will feel more eager to connect with you and get to know your brand better.

Image by ThisIsEngineering on Pexels

  3. They make it easy for people to contact you

How frustrating is it when you’ve received someone’s email and have no way of tracking their company website or business phone number? Well, one of the most important benefits of having a clear and compelling email signature is the ability for your recipients to get in touch with you quickly and easily. 

Sure, you’ll want to spend some time putting together an email signature that looks easy on the eye and reflects your brand’s identity, but at the same time, you’ll want to focus on its main goal: making it easier for people to contact you.

Image by Rohit Tandon on Unsplash

  4. They help your personality stand out

We touched on this earlier (and will expand on it in a moment): email signatures are a great tool to help you consolidate your brand identity. This happens through the use of your company logo, company website, and any other typography or visual elements that make your brand unique and recognizable.

However, this doesn’t mean that every single email signature that people from the same company create should look exactly the same. In some cases, your company may give you some leeway in what you add to your email signatures besides the basics.

For example, it’s not unheard of for people to incorporate a brief quote they love at the bottom of their email signature. Similarly, you may be allowed to switch the company’s main font for your favorite one, while keeping all the other main elements on-brand.

  5. They boost web and social traffic

Adding your company’s website to your email signature is a surefire way to grow your web traffic organically. In parallel, you may also want to add other relevant business links, such as your company’s LinkedIn page, blog page, or anything else that might help boost traffic, visibility, and engagement.

  6. They consolidate brand identity

Whether you run a nimble startup or a large multinational, you’ll want to keep your brand identity solid, coherent, and cohesive across all your channels. And, you’ve guessed it, email signatures are one of the tools you can leverage to showcase your brand, make it memorable, and bring more people to it.

If you want to achieve this, though, you need to make sure that you have a standard set of rules in place for creating email signatures across your company. These should include adding your company logo, website, and other links, as well as using your brand’s color palette.

Image by Mitchell Luo on Unsplash

  7. They support your marketing efforts

You may not have thought about this, but an email signature can also double as a powerful marketing tool. As we mentioned earlier, you might have just launched a marketing campaign, or a new product that you want your customers to check out.

By including an interactive link – or, even better, a banner – to your email signature, you maximize the results of whatever you’re trying to promote, with no need to invest money in any additional marketing activities.

The bottom line

You may have approached this article because you’ve been wondering, “are email signatures necessary in 2023?” and hopefully you are coming away from this with a clear idea as to why they are.

With an email signature, you can establish trust, foster connections, and reinforce brand identity. Now that you know exactly how to write a fabulous email signature, why not go ahead and craft yours?

Featured image by Solen Feyissa on Unsplash

The post Why are Email Signatures Necessary in the World of Business? appeared first on noupe.

Categories: Others

A High-Level Overview Of Large Language Model Concepts, Use Cases, And Tools

October 10th, 2023

Even though a simple online search turns up countless tutorials on using Artificial Intelligence (AI) for everything from generative art to making technical documentation easier to use, there’s still plenty of mystery around it. What goes inside an AI-powered tool like ChatGPT? How does Notion’s AI feature know how to summarize an article for me on the fly? Or how are a bunch of sites suddenly popping up that can aggregate news and auto-publish a slew of “new” articles from it?

It all can seem like a black box of mysterious, arcane technology that requires an advanced computer science degree to understand. What I want to show you, though, is how we can peek inside that box and see how everything is wired up.

Specifically, this article is about large language models (LLMs) and how they “imbue” AI-powered tools with intelligence for answering queries in diverse contexts. I have previously written tutorials on how to use an LLM to transcribe and evaluate the expressed sentiment of audio files. But I want to take a step back and look at another way around it that better demonstrates — and visualizes — how data flows through an AI-powered tool.

We will discuss LLM use cases, look at several new tools that abstract the process of modeling AI with LLM with visual workflows, and get our hands on one of them to see how it all works.

Large Language Models Overview

Forgoing technical terms, LLMs are models trained on vast sets of text data. When we integrate an LLM into an AI system, we enable the system to leverage the language knowledge and capabilities developed by the LLM through its own training. You might think of it as dumping a lifetime of knowledge into an empty brain, assigning that brain to a job, and putting it to work.

“Knowledge” is a convoluted term as it can be subjective and qualitative. We sometimes describe people as “book smart” or “street smart,” and they are both types of knowledge that are useful in different contexts. This is what artificial “intelligence” is created upon. AI is fed with data, and that is what it uses to frame its understanding of the world, whether it is text data for “speaking” back to us or visual data for generating “art” on demand.

Use Cases

As you may imagine (or have already experienced), the use cases of LLMs in AI are many and along a wide spectrum. And we’re only in the early days of figuring out what to make with LLMs and how to use them in our work. A few of the most common use cases include the following.

  • Chatbot
    LLMs play a crucial role in building chatbots for customer support, troubleshooting, and interactions, thereby ensuring smooth communications with users and delivering valuable assistance. Salesforce is a good example of a company offering this sort of service.
  • Sentiment Analysis
    LLMs can analyze text for emotions. Organizations use this to collect data, summarize feedback, and quickly identify areas for improvement. Grammarly’s “tone detector” is one such example, where AI is used to evaluate sentiment conveyed in content.
  • Content Moderation
    Content moderation is an important aspect of social media platforms, and LLMs come in handy. They can spot and remove offensive content, including hate speech, harassment, or inappropriate photos and videos, which is exactly what Hubspot’s AI-powered content moderation feature does.
  • Translation
    Thanks to impressive advancements in language models, translation has become highly accurate. One noteworthy example is Meta AI’s latest model, SeamlessM4T, which represents a big step forward in speech-to-speech and speech-to-text technology.
  • Email Filters
    LLMs can be used to automatically detect and block unwanted spam messages, keeping your inbox clean. When trained on large datasets of known spam emails, the models learn to identify suspicious links, phrases, and sender details. This allows them to distinguish legitimate messages from those trying to scam users or market illegal or fraudulent goods and services. Google has offered AI-based spam protection since 2019.
  • Writing Assistance
    Grammarly is the ultimate example of an AI-powered service that uses an LLM to “learn” how you write in order to make writing suggestions. But this extends to other services as well, including Gmail’s “Smart Reply” feature. The same thing is true of Notion’s AI feature, which is capable of summarizing a page of content or meeting notes. Hemingway’s app recently shipped a beta AI integration that corrects writing on the spot.
  • Code and Development
    This is the one that has many developers worried about AI coming after their jobs. It hit the commercial mainstream with GitHub Copilot, a service that performs automatic code completion. Same with Amazon’s CodeWhisperer. Then again, AI can be used to help sharpen development skills, which is the case of MDN’s AI Help feature.

Again, these are still the early days of LLM. We’re already beginning to see language models integrated into our lives, whether it’s in our writing, email, or customer service, among many other services that seem to pop up every week. This is an evolving space.

Types Of Models

There are all kinds of AI models tailored for different applications. You can scroll through Sapling’s large list of the most prominent commercial and open-source LLMs to get an idea of all the diverse models that are available and what they are used for. Each model is the context in which AI views the world.

Let’s look at some real-world examples of how LLMs are used for different use cases.

Natural Conversation
Chatbots need to master the art of conversation. Models like Anthropic’s Claude are trained on massive collections of conversational data to chat naturally on any topic. As a developer, you can tap into Claude’s conversational skills through an API to create interactive assistants.

Emotions
Developers can leverage powerful pre-trained models like Falcon for sentiment analysis. Fine-tuned on datasets with emotional labels, Falcon can learn to accurately detect the sentiment in any text provided.
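
Sentiment analysis can be shown in miniature with a toy lexicon scorer. Real systems rely on trained models like Falcon; this sketch, with a tiny hand-made word list, only illustrates the input/output shape of the task.

```javascript
// Toy lexicon-based sentiment scorer — NOT how an LLM does it, just the
// shape of the task: text in, sentiment label out.
const lexicon = { love: 2, great: 1, good: 1, bad: -1, awful: -2, hate: -2 };

function sentiment(text) {
  const score = text
    .toLowerCase()
    .split(/\W+/) // crude word tokenization
    .reduce((sum, word) => sum + (lexicon[word] || 0), 0);
  return score > 0 ? "positive" : score < 0 ? "negative" : "neutral";
}

console.log(sentiment("I love this great product")); // → positive
console.log(sentiment("awful support, I hate it"));  // → negative
```

A fine-tuned model replaces the hand-made lexicon with learned weights over context, which is why it handles negation, sarcasm, and unseen vocabulary far better than this sketch ever could.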

Translation
Meta AI released SeamlessM4T, an LLM trained on huge translated speech and text datasets. This multilingual model is groundbreaking because it translates speech from one language into another without an intermediary step between input and output. In other words, SeamlessM4T enables real-time voice conversations across languages.

Content Moderation
As a developer, you can integrate powerful moderation capabilities using OpenAI’s API, which includes an LLM trained thoroughly on flagging toxic content for the purpose of community moderation.

Spam Filtering
Some LLMs are used to develop AI programs capable of text classification tasks, such as spotting spam emails. As an email user, the simple act of flagging certain messages as spam further informs AI about what constitutes an unwanted email. After seeing plenty of examples, AI is capable of establishing patterns that allow it to block spam before it hits the inbox.
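This kind of text classification predates LLMs and can be sketched in a few lines. Below is a toy Naive Bayes filter that “learns” from flagged examples; the messages and words are invented for illustration:

```python
import math
from collections import Counter

class TinySpamFilter:
    """Toy Naive Bayes text classifier: every flagged message updates word counts."""
    def __init__(self):
        self.counts = {"spam": Counter(), "ham": Counter()}
        self.totals = {"spam": 0, "ham": 0}

    def train(self, text, label):
        for word in text.lower().split():
            self.counts[label][word] += 1
        self.totals[label] += 1

    def score(self, text, label):
        # log prior + log likelihoods with add-one smoothing
        total_words = sum(self.counts[label].values())
        vocab = len(set(self.counts["spam"]) | set(self.counts["ham"])) or 1
        score = math.log(self.totals[label] / sum(self.totals.values()))
        for word in text.lower().split():
            score += math.log((self.counts[label][word] + 1) / (total_words + vocab))
        return score

    def predict(self, text):
        return "spam" if self.score(text, "spam") > self.score(text, "ham") else "ham"

f = TinySpamFilter()
f.train("win a free prize now", "spam")
f.train("free money click now", "spam")
f.train("lunch meeting tomorrow", "ham")
f.train("see you at the meeting", "ham")
print(f.predict("free prize now"))    # spam
print(f.predict("meeting tomorrow"))  # ham
```

Each flag a user raises becomes one more training example, which is exactly the feedback loop described above.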

Not All Language Models Are Large

While we’re on the topic, it’s worth mentioning that not all language models are “large.” There are plenty of models with smaller sets of data that may not go as deep as GPT-4 but are well-suited for personal or niche applications.

For example, check out the chat feature that Luke Wroblewski added to his site. He’s using a smaller language model, so the app at least knows how to form sentences, but is primarily trained on Luke’s archive of blog posts. Typing a prompt into the chat returns responses that read very much like Luke’s writings. Better yet, Luke’s virtual persona will admit when a topic is outside of the scope of its knowledge. An LLM would provide the assistant with too much general information and would likely try to answer any question, regardless of scope. Members from the University of Edinburgh and the Allen Institute for AI published a paper in January 2023 (PDF) that advocates the use of specialized language models for the purpose of more narrowly targeted tasks.

Low-Code Tools For LLM Development

So far, we’ve covered what an LLM is, common examples of how it can be used, and how different models influence the AI tools that integrate them. Let’s discuss that last bit about integration.

Many technologies require a steep learning curve. That’s especially true with emerging tools that might be introducing you to new technical concepts, as I would argue is the case with AI in general. While AI is not a new term and has been studied and developed in various forms over decades, its entrance into the mainstream is certainly new and has sparked plenty of buzz, not least in the front-end development community, where many of us are scrambling to wrap our minds around it.

Thankfully, new resources can help abstract all of this for us. They can power an AI project you might be working on, but more importantly, they are useful for learning the concepts of LLM by removing advanced technical barriers. You might think of them as “low” and “no” code tools, like WordPress.com vs. self-hosted WordPress or a visual React editor that is integrated with your IDE.

Low-code platforms make it easier to leverage large language models without needing to handle all the coding and infrastructure yourself. Here are some top options:

Chainlit

Chainlit is an open-source Python package that is capable of building a ChatGPT-style interface using a visual editor.

LLMStack

LLMStack is another low-code platform for building AI apps and chatbots by leveraging large language models. Multiple models can be chained together into “pipelines” for channeling data. LLMStack supports standalone app development but also provides hosting that can be used to integrate an app into sites and products via API or connected to platforms like Slack or Discord.

LLMStack is also what powers Promptly, a cloud version of the app with freemium subscription pricing that includes a free tier.

FlowiseAI

FlowiseAI is an open-source, low-code tool for building LLM-powered apps by dragging and dropping components onto a visual canvas. It is the tool we will use later in this article to build a working example.

Stack AI

Stack AI is another no-code offering for developing AI apps integrated with LLMs. It is much like FlowiseAI, particularly the drag-and-drop interface that visualizes connections between apps and APIs. One thing I particularly like about Stack AI is how it incorporates “data loaders” to fetch data from other platforms, like Slack or a Notion database.

I also like that Stack AI provides a wider range of LLM offerings. That said, it will cost you. While Stack AI offers a free pricing tier, it is restricted to a single project with only 100 runs per month. Bumping up to the first paid tier will set you back $199 per month, which I suppose is used toward the costs of accessing a wider range of LLM sources. For example, Flowise AI works with any LLM in the Hugging Face community. So does Stack AI, but it also gives you access to commercial LLM offerings, like Anthropic’s Claude models and Google’s PaLM, as well as additional open-source offerings from Replicate.

Voiceflow

Voiceflow is a no-code platform focused on designing and building chat and voice assistants that can be connected to LLMs.

That covers the landscape of low-code LLM tools. Next, let’s put one of them to work by building a career assistant in FlowiseAI.

Install FlowiseAI

First things first, we need to get FlowiseAI up and running. FlowiseAI is an open-source application that can be installed from the command line.

You can install it with the following command:

npm install -g flowise

Once installed, start up Flowise with this command:

npx flowise start

From here, you can access FlowiseAI in your browser at localhost:3000.

It’s possible to serve FlowiseAI so that you can access it online and provide access to others, which is well-covered in the documentation.

Setting Up Retrievers

Retrievers are templates that the multi-prompt chain will query.

Different retrievers provide different templates that query different things. In this case, we want to select the Prompt Retriever because it is designed to retrieve documents like PDF, TXT, and CSV files. Unlike other types of retrievers, the Prompt Retriever does not actually need to store those documents; it only needs to fetch them.

Let’s take the first step toward creating our career assistant by adding a Prompt Retriever to the FlowiseAI canvas. The “canvas” is the visual editing interface we’re using to cobble the app’s components together and see how everything connects.

Adding the Prompt Retriever requires us to first navigate to the Chatflow screen, which is actually the initial page when first accessing FlowiseAI following installation. Click the “Add New” button located in the top-right corner of the page. This opens up the canvas, which is initially empty.

The “Plus” (+) button is what we want to click to open up the library of items we can add to the canvas. Expand the Retrievers tab, then drag and drop the Prompt Retriever to the canvas.

The Prompt Retriever takes three inputs:

  1. Name: The name of the stored prompt;
  2. Description: A brief description of the prompt (i.e., its purpose);
  3. Prompt system message: The initial prompt message that provides context and instructions to the system.

Our career assistant will provide career suggestions, tool recommendations, salary information, and cities with matching jobs. We can start by configuring the Prompt Retriever for career suggestions. Here is placeholder content you can use if you are following along:

  • Name: Career Suggestion;
  • Description: Suggests careers based on skills and experience;
  • Prompt system message: You are a career advisor who helps users identify a career direction and upskilling opportunities. Be clear and concise in your recommendations.

Be sure to repeat this step three more times to create each of the following:

  • Tool recommendations,
  • Salary information,
  • Locations.

Adding A Multi-Prompt Chain

A Multi-Prompt Chain is a class that consists of two or more prompts that are connected together to establish a conversation-like interaction between the user and the career assistant.

The idea is that we combine the four prompts we’ve already added to the canvas and connect them to the proper tools (i.e., chat models) so that the career assistant can prompt the user for information and collect that information in order to process it and return the generated career advice. It’s sort of like a normal system prompt but with a conversational interaction.

The Multi-Prompt Chain node can be found in the “Chains” section of the same inserter we used to place the Prompt Retriever on the canvas.

Once the Multi-Prompt Chain node is added to the canvas, connect it to the prompt retrievers. This enables the chain to receive user responses and employ the most appropriate language model to generate responses.

To connect, click the tiny dot next to the “Prompt Retriever” label on the Multi-Prompt Chain and drag it to the “Prompt Retriever” dot on each Prompt Retriever to draw a line between the chain and each prompt retriever.

Integrating Chat Models

This is where we start interacting with LLMs. In this case, we will integrate Anthropic’s Claude chat model. Claude is a powerful LLM designed for tasks related to complex reasoning, creativity, thoughtful dialogue, coding, and detailed content creation. You can get a feel for Claude by registering for access to interact with it, similar to how you’ve played around with OpenAI’s ChatGPT.

From the inserter, open “Chat Models” and drag the ChatAnthropic option onto the canvas.

Once the ChatAnthropic chat model has been added to the canvas, connect its node to the Multi-Prompt Chain’s “Language Model” node to establish a connection.

It’s worth noting at this point that Claude requires an API key in order to access it. Sign up on the Anthropic website to create a new API key. Once you have one, provide it to the ChatAnthropic node in the “Connect Credential” field.

Adding A Conversational Agent

The Agent component in FlowiseAI allows our assistant to do more tasks, like accessing the internet and sending emails.

It connects external services and APIs, making the assistant more versatile. For this project, we will use a Conversational Agent, which can be found in the inserter under “Agent” components.

Once the Conversational Agent has been added to the canvas, connect it to the Chat Model to “train” the model on how to respond to user queries.

Integrating Web Search Capabilities

The Conversational Agent requires additional tools and memory. For example, we want to enable the assistant to perform Google searches to obtain information it can use to generate career advice. The Serp API node can do that for us and is located under “Tools” in the inserter.

Like Claude, Serp API requires an API key to be added to the node. Register with the Serp API site to create an API key. Once the API is configured, connect Serp API to the Conversational Agent’s “Allowed Tools” node.

Building In Memory

The Memory component enables the career assistant to retain conversation information.

This way, the app remembers the conversation and can reference it during the interaction or even to inform future interactions.

There are different types of memory, of course. Several of the options in FlowiseAI require additional configurations, so for the sake of simplicity, we are going to add the Buffer Memory node to the canvas. It is the most general type of memory provided by LangChain, taking the raw input of the past conversation and storing it in a history parameter for reference.

Buffer Memory connects to the Conversational Agent’s “Memory” node.

The Final Workflow

At this point, our workflow looks something like this:

  • Four prompt retrievers that provide the prompt templates for the app to converse with the user.
  • A multi-prompt chain connected to each of the four prompt retrievers that chooses the appropriate tools and language models based on the user interaction.
  • The Claude language model connected to the multi-prompt chain to “train” the app.
  • A conversational agent connected to the Claude language model to allow the app to perform additional tasks, such as Google web searches.
  • Serp API connected to the conversational agent to perform bespoke web searches.
  • Buffer memory connected to the conversational agent to store, i.e., “remember,” conversations.

If you haven’t done so already, this is a great time to save the project and give it a name like “Career Assistant.”

Final Demo

Watch the following video for a quick demonstration of the final workflow we created together in FlowiseAI. The prompts lag a little bit, but you should get the idea of how all of the components we connected are working together to provide responses.

Conclusion

As we wrap up this article, I hope that you’re more familiar with the concepts, use cases, and tools of large language models. LLMs are a key component of AI because they are the “brains” of the application, providing the lens through which the app understands how to interact with and respond to human input.

We looked at a wide variety of use cases for LLMs in an AI context, from chatbots and language translations to writing assistance and summarizing large blocks of text. Then, we demonstrated how LLMs fit into an AI application by using FlowiseAI to create a visual workflow. That workflow not only provided a visual of how an LLM, like Claude, informs a conversation but also how it relies on additional tools, such as APIs, for performing tasks as well as memory for storing conversations.

The career assistant tool we developed together in FlowiseAI was a detailed visual look inside the black box of AI, providing us with a map of the components that feed the app and how they all work together.

Now that you know the role that LLMs play in AI, what sort of models would you use? Is there a particular app idea you have where a specific language model would be used to train it?

Categories: Others Tags:

The Business Case for Sustainability: Balancing Profitability and Environmental Responsibility 

October 10th, 2023

Investors have embraced responsible portfolio management strategies to encourage sustainable enterprises and support socio-economic development. Meanwhile, customers refuse to buy from a brand that fails to curb labor malpractices, pollution, waste generation, and petroleum consumption. This post will describe the business case for sustainability to increase awareness about these trends. 

What is Sustainable Business Development? 

A company engages in sustainable business development when it revises its operations, product design, and resource allocation to contribute to social and environmental problem resolution. It is not about pretending to be eco-friendly or slowing industrial progress. Instead, sustainability for business ensures companies can thrive without harming social harmony and Earth’s resources. 

Today, business leaders leverage sustainability consulting services to navigate modern regulations demanding more responsible corporate approaches. Besides regulations, several pressing matters range from safekeeping consumer data to making workplaces more inclusive. 

At the same time, multiple compliance guidelines have overwhelmed managers. So, it is imperative to embrace a tech-led strategy. It will help increase your firm’s compliance across all the major frameworks, like the ones described below. 

  1. Environmental, social, and governance (ESG) reporting;
  2. Task Force on Climate-related Financial Disclosures (TCFD);
  3. Global Reporting Initiative (GRI);
  4. The EU Taxonomy.

How Can a Business Balance Profitability, Ethics, and Sustainability? 

The older the organization, the more challenges you must overcome to go green. An excellent method is multistakeholder brainstorming. Let customers, suppliers, employees, business associates, and investors chime in and provide improvement ideas. 

Another indirect approach involves taking advantage of extensive data collection methods, insight extraction, and reporting. Using automated computing systems, companies can monitor the policy dynamics in the target markets and improve specific operations in realistic stages. 

They do not need to transform all practices and risk productivity loss. Since they will utilize data from authoritative sources, their decisions will also have a sound foundation. Available technologies can involve ESG data solutions tailored for private companies, financial materiality estimates, controversy analytics, and risk forecasting tools. 

Finally, leaders, board directors, and the rest must periodically evaluate their business sustainability initiatives. If they notice some strategies becoming obsolete, they must devise appropriate action plans to rectify such issues. 

Advantages of Business Sustainability 

1- Efficient Resource Consumption 

Pollution and carbon risk mitigation require brands to replace conventional energy systems with greener alternatives. This renewable energy transition allows companies to rationalize how they allocate resources to operations. Moreover, they can reduce dependence on public infrastructure for power and water using modern technologies. 

Treating and reusing water might not be suitable for all enterprises. However, the scope of these practices encompasses offices, factories, and post-sales product maintenance. In other words, you want to consider the entire product lifecycle to increase your ESG ratings and positive impact potential. 

Integrating green technology to fulfill the efficient resource consumption requirements makes you more competitive and attracts more investors. Therefore, business profitability increases thanks to sustainability accounting compliance. 

2- Resilient Supply Chains 

Socio-economic and ecological threats limit your enterprise’s growth potential. They endanger the well-being of consumers and supply partners. Consider how environmental problems, political chaos, or social issues hinder free transportation, making timely product delivery more arduous. 

However, brands can voluntarily work toward building a peaceful, green, tolerant, and resilient community. They must collaborate with employees, educators, policymakers, and local stakeholders. After all, most corporate social responsibility (CSR) projects aimed at increasing literacy, empowering women, and raising cyber safety awareness create opportunities for a more stable world. 

Aside from social disharmony threats, your supply chain is often vulnerable to fraud, region-specific quality norm inconsistencies, and controversies. You cannot eliminate these risks, but you can reduce the harm they might cause using predictive analysis and contingency plans. Many sustainability frameworks address these aspects in their reporting guidelines. 

3- Long-Term Stakeholder Relationships 

Responsible consumption and a solid supply chain increase an organization’s reputation. Socially conscious customers prefer companies that realize the cost of human-caused industrial activities. Therefore, they want leaders to embrace business sustainability, transparent communication, and ethical human resource practices. 

Simultaneously, social networking sites (SNS) have empowered individuals to voice their disappointment with brands that fail to improve compliance. So, customers will likely stop purchasing from you if you lag behind competitors in sustainable business development. 

Conversely, corporations with adequate CSR programs get loyal customers, free press, positive value association, and investor goodwill. They become leaders in establishing new industry norms while others struggle to understand sustainability accounting principles. 

Precaution: Greenwashing is Not a Business Case for Sustainability 

Consider the following: 

  1. What can anyone do if companies manipulate their financial and carbon disclosures? 
  2. Will investors, regulators, and customers trust other brands who also report on sustainability or ESG performance metrics? 
  3. What types of green claims can an organization include in its marketing campaigns? 
  4. How do we verify that a brand’s reported CSR outcomes are genuine and tangible? 
  5. Can ESG disclosures backfire and expose your company to controversies and misinformation attacks?

These are legitimate fears expressed by many because of the greenwashing cases. The perpetrators use deceptive verbal and design tactics to boast about on-paper CSR achievements with no on-ground impact. Some claimed they use 100% renewable energy by cunningly omitting crucial reporting elements. Others had suppliers employing child labor in life-threatening working conditions. 

Greenwashing hurts stakeholder trust in ESG, TCFD, and GRI documentation. When one company receives greenwashing allegations, others in the same industry also attract criticism. Sooner or later, the media picks up the story, and the hard-earned brand reputation evaporates into infinity. 

Avoid greenwashing and disassociate with partners, municipalities, and suppliers doing it. 

Conclusion 

Leaders who recognize the significance of sustainable business development are visionaries. They know their business can thrive if the consumers, employees, investors, and regulators are happy with their work. So, the world has witnessed a rekindled interest in ethics-driven corporate attitudes and investment strategies. 

Reducing byproduct generation, enforcing anti-harassment policies, and adopting practical data governance standards make brands more sustainable. Their supply chain resilience increases while more stakeholders trust them. 

Nevertheless, greenwashing risks prevent organizations and investors from unlocking the full potential of business sustainability. Therefore, all the stakeholders must be honest with compliance reporting. It is okay if your ESG ratings are low because you can implement initiatives and work with experts to improve them. 

In the end, only transparency matters. Without it, sustainability reporting will become a worthless formality. However, with the proper oversight, the opposite will happen, and your organization will surpass all competitors while solving social and environmental problems.  

Featured Image by Daniel Öberg on Unsplash

The post The Business Case for Sustainability: Balancing Profitability and Environmental Responsibility  appeared first on noupe.


Guarding the Gateway: How to Protect Your Online Forms from Security Risks

October 9th, 2023

Internet connectivity has brought light to every dark corner of the world, and businesses are embracing digital tools and techniques to make their work processes efficient. 

Online forms or web forms are ubiquitous in today’s digital landscape and play a pivotal role in online activities. Today, 74% of companies make use of web forms to help them in lead generation. The prevalence of online forms is driven by digital transformation, eCommerce, information gathering, better communication, job applications, event registration, government services, etc.


Online forms are very easy to use and are great for communicating with audiences. However, they are vulnerable to attacks from malware and hackers, which may result in leaks of confidential information that deter customers from engaging with your offering. In this article, we’ll offer a step-by-step guide to keeping your online forms safe and secure.

Why Online Forms Are Critical

The ultimate goal of any business is to generate a sustainable profit using successful lead generation and conversion tools. In today’s market, if a company fails to create pages or online forms that ask visitors for their details, its marketing strategy is aimless.

Online forms are interactive web pages that allow users to input their answers. The data entered by the user is sent directly to the server for processing. This streamlines data collection and is a cost-effective digital solution. Forms are accessible anywhere and are great for customer engagement, especially for organizations with a diverse customer base, usually industries like insurance, tourism, hospitality, and financial services like simple loans or credit reports. Users can fill them in at their convenience, and they are processed quickly. If the number of users and submissions grows, forms can scale to handle huge volumes without any manual effort.

An online form can be customized and integrated with software, allowing for automated data processing. These forms come with analytics tools that offer valuable insights into user behavior and understanding of their responses. This data is valuable to make informed decisions to improve your offering and overall user experience.

Many major companies today are built around information systems derived from forms or other sources, including eBay, Amazon, cloud computing services, and Alibaba. Even Google derives most of its revenue from advertising keyed to the information collected through its internet searches. Governments use forms to gather information and provide services to their citizens. Sellers of digital goods (eBooks, video products, tools, and software) and gaming social networks use them to generate sales, and individuals use them for shopping, banking, entertainment, and more.

The Anatomy of Online Form Attacks

Online form attacks occur when malicious actors exploit vulnerabilities to steal sensitive information or perform malicious activities. The attacks can target different applications of the forms. It is, therefore, important for organizations to understand the anatomy of online form attacks to keep themselves and their users protected from potential threats. Here are three key components of such attacks.

1. CSRF (Cross-Site Request Forgery)

CSRF is an attack in which a malicious site tricks an authenticated user’s browser into performing actions the user never intended. This can result in data loss, changes to the user account, and other malicious outcomes.

This is how a CSRF attack typically works:

  • Authentication and request – The victim is already authenticated to a web application through a login session. The attacker crafts a malicious request (a URL or HTTP parameters) that performs a state-changing action on the target application, such as changing the user’s password or deleting data.
  • Trick – The attacker tricks the victim into clicking a link or visiting a page that silently submits that request to the target online form.
  • Unintended action – The victim’s browser sends the forged request with the session cookies attached, so the application executes the action with the victim’s privileges.
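
The standard defense is an anti-CSRF token: the server embeds an unguessable, session-bound value in each legitimate form and rejects any submission that doesn’t echo it back. A minimal sketch using only Python’s standard library (the function and field names are illustrative, not tied to any framework):

```python
import hmac
import secrets

def issue_csrf_token(session_store):
    """Generate an unguessable token and bind it to the user's session."""
    token = secrets.token_urlsafe(32)
    session_store["csrf_token"] = token
    return token  # rendered into the form as a hidden input field

def verify_csrf_token(session_store, submitted):
    """Accept the POST only if the submitted token matches the session's.

    A forged cross-site request cannot read the victim's page, so it cannot
    know the token; compare_digest avoids timing side channels."""
    expected = session_store.get("csrf_token", "")
    return bool(expected) and hmac.compare_digest(expected, submitted)

session = {}
hidden_field = issue_csrf_token(session)
assert verify_csrf_token(session, hidden_field)        # legitimate submission
assert not verify_csrf_token(session, "attacker-guess")  # forged request fails
```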

2. Data Scraping

In data scraping, attackers use bots to scrape information from your online forms. According to the Automated Fraud Benchmark Report, data scraping has increased by 102% in the past few years.

Attackers may use this opportunity to copy your content and usurp its search engine ranking to duplicate its look and branding for fraudulent purposes. They can easily create phishing forms and fake advertisement campaigns to trick users into entering their personal information.

3. Phishing

Phishing is a cyberattack in which attackers trick individuals into divulging sensitive information. The attacks involve deceptive tactics: an attacker can create fake forms that are nearly identical to yours and manipulate victims into handing over personal information.

A phishing attack starts with deceptive emails, phone calls, or text messages which may seem like they are coming from trusted sources. They may also create a sense of urgency or fear, claiming a security breach or an offer that is too good to miss.

The attacks are continuously evolving, and it is important to employ good cybersecurity to keep your online form safe and secure. Let us look at them below.

Foundational Security Measures

Implementing foundational security measures is important to set a base to protect your sensitive data and prevent malicious activities. Here are some security measures you need to consider.

1. Data Encryption

Data encryption helps protect your data at rest and in transit. It preserves the confidentiality and integrity of sensitive information, making it challenging for unauthorized individuals or malicious actors to read or tamper with it.

At rest, data encryption revolves around securing data when it is stored in physical or digital storage media, like databases or backups. Even if an attacker gains physical access to the storage, they won’t be able to decipher the sensitive information.

Data encryption during transit will protect your information between the user’s device and the server. This will prevent the interception of sensitive information during transmission. So you can be sure that the data received at the destination is the same as when it is sent. Any tampering of the data will result in decryption errors and will alert the recipient of any issues.

To implement strong data encryption algorithms, you can follow these steps:

  • Understand your requirement
  • Select appropriate encryption algorithms (common choices are AES, RSA, and ECC).
  • Implement strong key management and secure key exchange.
  • Combine encryption with strong authentication and authorization.
  • Use encryption algorithms that provide data integrity checks, and use cryptographically secure random number generation.
  • Regularly update and patch the code.
  • Thoroughly test and validate the implementation.
  • Comply with industry-specific standards and ensure proper data backup and recovery.
  • Implement logging and monitoring mechanisms.
  • Securely dispose of keys when retired, and maintain detailed documentation of your encryption setup.
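
A couple of the steps above, deriving a strong key from a secret and attaching an integrity check, can be sketched with Python’s standard library alone. This is an illustrative sketch rather than a full encryption scheme; a production system would pair it with an authenticated cipher (e.g., AES-GCM) from a vetted cryptography library:

```python
import hashlib
import hmac
import os

def derive_key(passphrase, salt):
    """PBKDF2-HMAC-SHA256: a slow, salted derivation that resists brute force."""
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 600_000)

def protect(key, data):
    """Append an HMAC-SHA256 tag so tampering in storage or transit is detectable."""
    return data + hmac.new(key, data, hashlib.sha256).digest()

def verify(key, blob):
    """Return the original data, or raise if the integrity check fails."""
    data, tag = blob[:-32], blob[-32:]
    if not hmac.compare_digest(tag, hmac.new(key, data, hashlib.sha256).digest()):
        raise ValueError("integrity check failed: data was modified")
    return data

salt = os.urandom(16)  # store alongside the protected data
key = derive_key("correct horse battery staple", salt)
blob = protect(key, b"form submission payload")
assert verify(key, blob) == b"form submission payload"
```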

2. Secure Tokens

Secure tokens enhance the security of your web forms and help prevent cross-site request forgery attacks. They keep online and digital transactions protected by verifying a user’s identity, granting access to certain resources, and authorizing actions. Secure tokens come in several forms, each with its own level of security.
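
One common variety is a signed, time-limited token: the server signs a payload plus an expiry timestamp, so any tampering or reuse after expiry invalidates it. A stdlib-only sketch (the format and secret handling are illustrative):

```python
import base64
import hashlib
import hmac
import time

SECRET = b"server-side-secret"  # illustrative; load from secure configuration

def issue(user_id, ttl_seconds=3600):
    """Return 'base64(payload).signature' where payload = user_id:expiry."""
    payload = f"{user_id}:{int(time.time()) + ttl_seconds}".encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(payload).decode() + "." + sig

def validate(token):
    """Return the user_id for a valid, unexpired token, or None otherwise."""
    try:
        encoded, sig = token.rsplit(".", 1)
        payload = base64.urlsafe_b64decode(encoded)
    except ValueError:
        return None
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None  # signature mismatch: token was tampered with
    user_id, _, expiry = payload.decode().rpartition(":")
    return user_id if int(expiry) > time.time() else None

token = issue("alice")
assert validate(token) == "alice"
assert validate(token + "x") is None  # tampered signature is rejected
```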

3. Data Masking

Data masking for online forms is a privacy technique that helps mask the sensitive information entered by a user in the web-based form. The primary goal is to hide data portions like passwords to prevent unauthorized access to sensitive information.
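
A typical rule keeps just enough of the value to stay recognizable (say, the last four digits of a card number) and replaces the rest. A minimal illustrative sketch:

```python
def mask(value, visible=4, mask_char="*"):
    """Mask all but the last `visible` characters of a sensitive value."""
    if len(value) <= visible:
        return mask_char * len(value)  # too short to safely reveal anything
    return mask_char * (len(value) - visible) + value[len(value) - visible:]

print(mask("4111111111111111"))            # ************1111
print(mask("s3cret-passw0rd", visible=0))  # every character masked
```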

Intermediate Security Strategies

Intermediate security strategies go beyond basic measures without being as comprehensive or complex as advanced security strategies. Here are popular intermediate strategies you should consider.

1. Content Security Policy (CSP)

A Content Security Policy (CSP) is a layer of security built into all modern browsers. It helps you recognize and mitigate risks like XSS and data injection attacks by whitelisting trusted sources: the policy specifies, in an HTTP header or meta tag, which origins are allowed to load scripts and other resources.
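
For example, a policy delivered as an HTTP response header like the one below tells the browser to load scripts only from the page’s own origin and a named CDN, refusing injected inline scripts (the CDN host is a placeholder):

```http
Content-Security-Policy: default-src 'self'; script-src 'self' https://cdn.example.com; object-src 'none'
```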

2. Secure Cookies

Secure cookies are pivotal for protecting user data and privacy. Essentially, cookies are small pieces of data that a website sends to a user’s web browser, which stores them on the user’s device.

A cookie marked Secure is only transmitted over an encrypted HTTPS connection, which prevents sensitive data from being sent over unsecured connections. If a cookie is marked HttpOnly, it cannot be accessed by JavaScript running on the client side. This helps prevent XSS, where attackers inject malicious scripts.
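
Python’s standard library can emit such a Set-Cookie header. The sketch below marks a cookie Secure and HttpOnly, and also sets SameSite=Strict, which stops the browser from attaching the cookie to cross-site requests (the cookie name and value are illustrative):

```python
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["session_id"] = "abc123"              # illustrative name and value
cookie["session_id"]["secure"] = True        # only sent over HTTPS
cookie["session_id"]["httponly"] = True      # hidden from client-side JavaScript
cookie["session_id"]["samesite"] = "Strict"  # withheld on cross-site requests

header = cookie.output()
print(header)  # a Set-Cookie header carrying Secure, HttpOnly, SameSite=Strict
```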

3. Input Sanitization

It is a crucial security practice to protect against malicious input, SQL injection, XSS attacks, and more. The primary goal of input sanitization is to ensure that data provided by the user or any external source is clean and safe from harmful content before it is processed or stored.
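
Two small stdlib illustrations of the idea: escaping user text before echoing it into HTML (against XSS), and passing user input as a query parameter rather than concatenating it into SQL (against injection):

```python
import html
import sqlite3

# 1. Escape before rendering: the <script> payload becomes inert text
user_input = '<script>alert("stolen")</script>'
safe = html.escape(user_input)
print(safe)  # &lt;script&gt;alert(&quot;stolen&quot;)&lt;/script&gt;

# 2. Parameterize queries: the driver treats input as data, never as SQL
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
injection = "Robert'); DROP TABLE users;--"
conn.execute("INSERT INTO users (name) VALUES (?)", (injection,))
rows = conn.execute("SELECT name FROM users WHERE name = ?", (injection,)).fetchall()
print(rows)  # the malicious string was stored and matched literally
```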

Advanced Techniques for Form Security

Advanced security techniques offer the highest level of protection for the information in your online forms. Below are three such techniques.

1. Behavioral Biometrics

Behavioral biometrics is an advanced security technique to protect forms against malware. It focuses on identifying individuals based on behavioral patterns like typing speed, mouse movements, etc. These patterns are unique to each individual and help authenticate who is using, inputting, or making any alterations to the form.

Behavioral biometrics is used increasingly across industries, from government facilities and financial services for authentication to educational institutions for verifying students in online courses and examinations.
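As a toy sketch of the principle, a stored per-user typing profile can be compared against the current session's inter-keystroke intervals; real systems use far richer features, and the tolerance value here is an arbitrary assumption:

```python
def typing_matches_profile(intervals_ms, profile_mean_ms, tolerance=0.3):
    """Compare this session's mean inter-keystroke interval (milliseconds)
    against a stored per-user profile; flag sessions that deviate too far."""
    session_mean = sum(intervals_ms) / len(intervals_ms)
    deviation = abs(session_mean - profile_mean_ms) / profile_mean_ms
    return deviation <= tolerance
```

A session whose cadence differs wildly from the enrolled profile (for example, a bot pasting input near-instantly) would fail the check and could trigger a step-up authentication.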

2. AI and Machine Learning

AI algorithms can analyze how an online form is used and detect malware variants. Machine learning techniques flag abnormal patterns or malicious activity, and they scale to large datasets when identifying deviations from normal behavior.
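The simplest statistical version of this idea is a z-score check: a new observation (say, form submissions per minute from one IP, a metric we choose here for illustration) is flagged when it sits too many standard deviations from the historical baseline. Production systems use learned models, but the principle is the same:

```python
import statistics

def flag_anomaly(samples, new_value, z_threshold=3.0):
    """Flag a new observation whose z-score against the historical
    samples exceeds the threshold (an assumed cutoff of 3 sigma)."""
    mean = statistics.mean(samples)
    stdev = statistics.stdev(samples)
    return abs(new_value - mean) / stdev > z_threshold
```

A sudden burst of submissions far above the historical rate would be flagged for review or rate limiting.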

3. API Security

API security is another essential layer, protecting the integrity and availability of the data and services exposed through APIs, which commonly serve as the backend for mobile and web applications.
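One basic building block is rejecting requests that lack a valid bearer credential. The sketch below is illustrative only; `VALID_API_KEYS` stands in for a real key store, and real deployments would add rate limiting, scoping, and key rotation:

```python
import hmac

VALID_API_KEYS = {"demo-key-123"}  # hypothetical key store

def authorize_request(headers: dict) -> bool:
    """Reject API calls whose Authorization header lacks a known bearer key.
    Comparison is constant-time to avoid leaking key prefixes."""
    auth = headers.get("Authorization", "")
    if not auth.startswith("Bearer "):
        return False
    presented = auth[len("Bearer "):]
    return any(hmac.compare_digest(presented, key) for key in VALID_API_KEYS)
```

A form backend exposing an API would run this check before touching any data, returning `401 Unauthorized` when it fails.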

Compliance and Regulations

Data compliance regulations exist in different countries and regions across the world. These regulations help protect the personal data of individuals, protect privacy rights, and impose obligations on organizations that process the data. Some of the prominent data protection regulations are:

  • GDPR – It is a comprehensive data protection regulation in the EU and EEA. It offers individuals greater control over their personal data.
  • CCPA – It is a data protection law in California, USA. It gives users rights over their personal information.
  • Other relevant data protection regulations are LGPD, PDPA, PIPEDA, HIPAA, and APEC Privacy Framework.

Compliance with data protection rules has a significant impact on your form security. Failure to comply can result in legal consequences or reputational damage, so make sure your online forms meet these requirements and stay safe from malware and attackers.

Best Practices and Checklists

In the article so far, we’ve listed some of the best ways you can protect your data from being attacked by malware or hackers. Here is a list of security changes you can make right now to protect your online forms:

  • Implement and validate a sitewide SSL to protect your data from tampering or scammers and also improve search engine rankings.
  • Add two-factor authentication, and hash passwords with a salted, slow algorithm such as PBKDF2 or bcrypt rather than storing them in plain text.
  • Enforce communication strictly through HTTPS.
  • Select a trustworthy hosting provider if you don’t already have one.
  • Ensure input validation in your forms for optimal security.
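For the password item on the checklist, a minimal sketch of salted PBKDF2 hashing with Python's standard library (the iteration count of 200,000 is an assumed baseline; tune it for your hardware):

```python
import hashlib
import hmac
import os

def hash_password(password: str, salt=None):
    """Hash a password with PBKDF2-HMAC-SHA256 and a random salt.
    A plain, unsalted SHA-256 hash is too fast for password storage."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
    """Recompute the hash with the stored salt and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return hmac.compare_digest(candidate, stored)
```

Store the salt alongside the digest; the password itself is never persisted.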

Performing periodic security audits is very important for evaluating your organization’s overall cybersecurity posture. These audits make sure that your business is always protected against bugs and security gaps. You don’t have to take the back seat when it comes to the security of your online forms. Move forward and perform a security check before a problem escalates, saving yourself financial and reputational losses.

Conclusion

The security of online forms is essential to protect your sensitive data, maintain user trust, and comply with different regulations. Neglecting the security of your online forms can lead to leakage of sensitive information and reputation loss. However, security is not a one-time thing, and you have to constantly audit your security to stay ahead of any new issues or malware.

Featured Image by Franck on Unsplash

The post Guarding the Gateway: How to Protect Your Online Forms from Security Risks appeared first on noupe.
