
Chapter 2: Browsers

August 12th, 2020

Previously in web history…

Sir Tim Berners-Lee creates the technologies behind the web — HTML, HTTP, and the URL, which blend hypertext with the Internet — with a small team at CERN. He convinces the higher-ups in the organization to put the web in the public domain so anyone can use it.

Dennis Ritchie had a problem.

He was working on a new, world class operating system. He and a few other colleagues were building it from the ground up to be simple and clean and versatile. It needed to run anywhere and it needed to be fast.

Ritchie worked at Bell Labs. A hotbed of innovation in the ’60s and ’70s, Bell employed some of the greatest minds in telecommunications. While there, Ritchie had worked on a time-sharing project known as Multics. He was fiercely passionate about what he saw as the future of computing. Still, after years of development and little to show for it, Bell eventually dropped the project. But Ritchie and a few of his colleagues refused to let the dream go. They transformed Multics into a new operating system adaptable and extendable enough to be used for networked time-sharing. They called it Unix.

Ritchie’s problem was with Unix’s software. More precisely, his problem was with the language the software ran on. He had been writing most of Unix in assembly code, quite literally feeding paper tape into the computer, the way it was done in the earliest days of computing. Programming directly in assembly — being “close to the metal” as some programmers refer to it — made Unix blazing fast and memory efficient. The process, on the other hand, was laborious and prone to errors.

Ritchie’s other option was to use B, an interpreted programming language developed by his co-worker Ken Thompson. B was much simpler to code with, several steps abstracted from the bare metal. However, it lacked features Ritchie felt were crucial. B also suffered under the weight of its own design; it was slow to execute and lacked the resilience needed for time-sharing environments.

Ritchie’s solution was to choose neither. Instead, he created a compiled programming language with many of the same features as B, but with more access to the kinds of things you could expect from assembly code. That language is called C.

By the time Unix shipped, it had been fully rewritten in C, and the programming language came bundled with every operating system built on top of it, which, as it turned out, was a lot of them. As more programmers tried C, they adapted to it quickly. It blended, as some might say, perfectly abstract functions and methods for creating predictable software patterns with the ability to get right down to the metal if needed. It isn’t prescriptive, but it doesn’t leave you completely lost. Saron Yitbarek, host of the Command Line Heroes podcast, describes C as “a nearly universal tool for programming; just as capable on a personal computer as it was on a supercomputer.”

C has been called a Swiss Army language. There is very little it can’t do, and very little that hasn’t been done with it. Computer scientist Bill Dally once said, “It set the tone for the way that programming was done for several decades.” And that’s true. Many of the programming paradigms developed in the latter half of the 20th century originated in C. Compilers for it were developed beyond Unix and became available in every major operating system. Rob Pike, a software engineer involved in the development of Unix, and later Go, has a much simpler way of putting it: “C is a desert island language.”

Ritchie had a saying of his own he was fond of repeating: “C has all the elegance and power of assembly language with all the readability and maintainability of… assembly language.” C is not necessarily everyone’s favorite programming language, and there are plenty of problems with it. (C#, created in the early 2000s, was one of many attempts to improve on it.) However, as it proliferated out into the world, bundled in Unix-like systems like X-Windows, Linux, and Mac OS X, software developers turned to it as a way to speak to one another. It became a kind of common tongue. Even if you weren’t fluent, you could probably understand the language conversationally. And if you needed to bundle up and share some code, C was a great way to do it.

In 1993, Jean-François Groff and Sir Tim Berners-Lee packaged up all of the technologies of the web, code that could be used to build web servers or browsers. They called it libwww, and released it into the public domain. It was written in C.


Think about the first time you browsed the web. That first webpage. Maybe it was a rich experience, filled with images, careful design and content you couldn’t find anywhere else. Maybe it was unadorned, uninteresting, and brief. No matter what that page was, I’d be willing to bet that it had some links. And when you clicked that link, there was magic. Suddenly, a fresh page arrives on your screen. You are now surfing the web. And in that moment you understand what the web is.

Sir Tim Berners-Lee finished writing the first web browser, WorldWideWeb, in the final days of 1990. It ran on his NeXT machine, and had read and write capabilities (the latter of which could be used to manage a homepage on the web). The NeXTcube wasn’t the heaviest computer you’ve ever seen, but it was still a desktop. That didn’t stop Berners-Lee from lugging it from conference to conference so he could plug it in and show people the web.

Again and again, he ran into the same problem. It seems obvious now: it is hard to demonstrate a globally networked hypertext application on a little-used operating system (NeXT) running on a not-widely-owned computer (the NeXT Computer System), alone at a conference without an Internet connection. But the real problem came after the demo, with the inevitable question: how can I start using it? The web loses its magic if you can’t connect to the network yourself. It’s entirely useless isolated on a single computer. To make the idea click, Berners-Lee needed to get everybody surfing the web. And he couldn’t very well lend his computer out to anybody that wanted to use it.

That’s where Nicola Pellow came in. An undergraduate at Leicester Polytechnic, Pellow was an intern at CERN, assigned to Berners-Lee’s and Cailliau’s team, which tasked her with building an interoperable browser that could be installed anywhere. The fact that she had no background in programming (she was studying mathematics) and was at CERN only as part of an internship didn’t concern her much. Within a couple of months she had picked up a bit of C programming and built the Line Mode Browser.

Using the Line Mode Browser today, you would probably feel like a hacker from the 1980s. It was a text-only browser designed to run from a command line terminal. In most cases, just plain white text on a black background, pixels bleeding from edge to edge. Typing out a web address into the browser would bring up that website’s text on the screen. The up and down arrows on a keyboard could be used for navigation. Links were visible as a numbered list, and one could jump from site to site by entering the right number.

It was designed that way for a reason. Its simplicity guaranteed interoperability. The Line Mode Browser holds the unique distinction of being the only browser for many years to be platform-agnostic. It could be installed anywhere, on just about any computer or operating system. It made getting online easy, provided you knew what to do once you installed it. Pellow left CERN a few months after she released the Line Mode Browser. She returned after graduation, and helped build the first Mac browser.

Almost as soon as Pellow left, Berners-Lee and Cailliau wrangled another recruit. Jean-François Groff was working at CERN, one office over. A programmer for years, Groff had written the French translation of the official C programming guide by Brian Kernighan and the language’s creator, Dennis Ritchie. He was working on a bit of physics software for UNIX systems when he got a chance to see what Berners-Lee was working on.

Not everybody understood what the web was going for; it can be difficult to grasp without the worldwide picture we have today. Groff was not one of those people. He had longed for something just like the web, and he understood perfectly what it could become. Almost as soon as he saw a demo, he requested a transfer to the team.

He noticed one problem right away. “So this line mode browser, it was a bit of a chicken and egg problem,” he once described in an interview, “because to use it, you had to download the software first and install it and possibly compile it.” You had to use the web to download a web browser, but you needed a web browser to use the web. Groff found a clever solution. He built a simple mechanism that allowed users to telnet in to the NeXT server and browse the web using its built-in Line Mode Browser. So anyone in the world could remotely access the web without even needing to install the browser. Once they were able to look around, Groff hoped, they’d be hooked.

But Groff wanted to take it one step further. He came from UNIX systems, and C programming. C is a desert island language. Its versatility makes it invaluable as a one-size-fits-all solution. Groff wanted the web to be a desert island platform. He wanted it to be used in ways he hadn’t even imagined yet, ways that scientists at research institutions couldn’t even fathom. The one medium you could do anything with. To do that, he would need to make the web far more portable.

Working alongside Berners-Lee, Groff began pulling out the essential elements of the NeXT browser and porting them to the C programming language. Groff chose C not only because he was familiar with it, but because he knew most other programmers would be as well. Within a few months, he had built the libwww package (its official title would come a couple of years later). The libwww package was a set of common components for making graphical browsers. Included was the necessary code for parsing HTML, processing HTTP requests and rendering pages. It also provided a starting point for creating browser UI, and tools for embedding browser history and managing graphical windows.

Berners-Lee announced the web to the public for the first time on August 7, 1991. He posted a brief description along with a simple note:

If you’re interested in using the code, mail me. It’s very prototype, but available by anonymous FTP from info.cern.ch. It’s copyright CERN but free distribution and use is not normally a problem.

If you were to email Sir Tim Berners-Lee, he’d send you back the libwww package.

By November of 1992, the library had fully matured into a set of reusable tools. When CERN put the web in the public domain the following year, its terms included the libwww package. By 1993, anyone with a bit of time on their hands and a C compiler could create their own browser.

Before he left CERN to become one of the first web consultants, Groff did one final thing. He created a new mailing list, called www-talk, for a new generation of browser developers to talk shop.


On December 13, 1991 — almost a year after Berners-Lee had put the finishing touches on the first ever browser — Pei-Yuan Wei posted to the www-talk mailing list. After a conversation with Berners-Lee, he had built a browser called ViolaWWW. In a few months, it would be the most popular of the early browsers. In the middle of his post, Wei offhandedly — in a tone that would come off as bragging if it weren’t so sincere — mentioned that the browser build was a one night hack.

A one night hack. Not even Berners-Lee or Pellow could pull that off. Wei continued the post with the reasons he was able to get it up and running so quickly, but that nuance would be lost to history. What programmers would remember is that it took only one night to build a browser. It was “hacked” together and shipped to the world, buggy, but usable. That phrase would set the tone and pace of browser development for at least the next decade. It is arguably the dominant ideology among browser makers today.

The irony is that the opposite was true. ViolaWWW was the product of years of work that simply culminated in a single night. Wei was a great software programmer, but he also had all the pieces he needed before the night even started.

Pei-Yuan Wei has made a few appearances on the frontlines of web history. Apart from the ViolaWWW browser, he was hired by Dale Dougherty to work on an early version of GNN.com, the first commercial website. He was at a meeting of web pioneers the day the idea of the W3C was first discussed. In 2012, he was on the list of witnesses to speak in court to the many dangers of the Stop Online Piracy Act (SOPA). Wei was a persistent presence in the web’s early history.

Wei was a student at UC Berkeley in the early ’90s. It was HyperCard that set off his fascination with hypertext software. HyperCard was an application built for the Mac operating system in the late ’80s. It allowed its users to create stacks of virtual “cards,” each with a bit of info. Users could then connect these cards however they wanted, and quickly sort, search, and navigate through their stacks. People used it to organize their recipes, replace their Rolodexes, organize research notes, and a million other things. HyperCard is the kind of software that attracts a person who demands a certain level of digital meticulousness, the kind of user who organizes their desktop folders into neat sections and precisely tags their data. This core group of power users manipulated the software using its built-in scripting language, HyperTalk, to extend it to new heights.

Wei had barely glimpsed HyperCard before he knew he needed to use it. But he was on an X-Windows computer, and HyperCard could only run on a Mac. Wei was not to be deterred. Instead of buying a Mac (an expensive but reasonable solution to the problem), Wei began to write software of his own. He even went one step further, first creating his very own programming language. He called it Viola, and the first thing he built with it was a HyperCard clone.

Wei felt that the biggest limitation of HyperCard — and by extension his own hypertext software — was that it lacked access to a network. What good was data if it was locked up inside of a single computer? By the time he had reached that conclusion, it was nearing the end of 1991, around the time he saw a mention of the World Wide Web. So one night, he took Viola, combined it with libwww, and built a web browser. ViolaWWW was officially released.

ViolaWWW was built so quickly because most of it was already done by the time Wei found out about the web project. The Viola programming language had been in the works for a couple of years at that point. It had already been built to accept hyperlinks and hypermedia for the HyperCard clone. It had been built to be extendable to other possible applications. Once Wei was able to pick apart libwww, he ported his software to read HTML, which itself was still a preposterously simple language. And that piece, the final tip of the iceberg, only took him a single night.

ViolaWWW would be the site of a lot of experimentation on the early web. Wei was the first to include an early version of stylesheets. He added a bookmarking function. The browser supported forms and embedded media. In a prescient move, Wei also included downloadable applets, allowing fairly advanced applications to run inside the browser. This became the template for what would eventually become Java applets.

For X-Windows users, ViolaWWW was the most popular browser on the market. Until the next thing came along.


Releasing a browser in the early ’90s was almost a rite of passage. It was a useful exercise to download the libwww package and open it up in your text editor. The web wasn’t all that complicated: there was a bit of code for rendering HTML, and for processing HTTP requests from web servers (or other origins, like FTP or Gopher). Programmers of the web used a browser project as a way of getting familiar with its features. It was kind of like the “Hello World” of the early web.

In June of 1993, there were 130 websites in the entire world, and easily a dozen browsers to choose from. That’s roughly one browser for every ten websites.

This rapid development of browsers was driven by the nature of innovation in the web community. When Berners-Lee put the web in the public domain, he did more than just give it to the world; he put openness at the center of its ideology. It would take five years — with the release of Netscape — for the web to get its first commercial browser. Until then, the “browser makers” were a small community of programmers talking things out on the www-talk mailing list, trying to make web browsing feel as revolutionary as they wanted it to be.

Some of the earliest projects ported one browser to another operating system. Occasionally, one of the browser makers would spontaneously release something that now feels essential. The first PDF rendering inside of a browser window was a part of the Midas browser. HTML tables were introduced and properly laid out in another called Arena. Tabbed browsing was a prominent feature in InternetWorks. All of these features were developed before 1995.

Most early browsers have faded into obscurity. But the people behind them didn’t. Counted among the earliest browser makers are future employees at Netscape, members of the W3C and the web standards movement, the inventor of cookies (and the blink tag), and the creators of some of the most important websites of the early web.

Of course, no one knew that at the time. To most of the creators, it was simply an exercise in making something cool they could pass along to their Internet friends.


The New York Times introduced its readers to the web on December 8, 1993. “Think of it as a map to the buried treasures of the Information Age,” read the first line. But the “map” the writer was referring to — one he would spend the first half of the article describing — wasn’t the World Wide Web; it was its most popular browser. A browser called Mosaic.

Mosaic was created, in part, by Marc Andreessen. Like many of the early web pioneers, Andreessen is a man of lofty ambition. He is drawn to big ideas and grand statements (he once said that software is “eating the world”). In college, he was known for being far more talkative than your average software engineer, chatting it up about the next big thing.

Andreessen has had a decades-long passion for technology. Years later, he would capture the imagination of the public with the world’s first commercial browser: Netscape Navigator. He would grace the cover of Time magazine. He would become a cornerstone of Silicon Valley, define its rapid “ship first, think later” ethos for years, and seek and capture his fortune in the world of venture capital.

But Mosaic’s story does not begin with a commanding legend of Silicon Valley overseeing, for better or worse, the future of technology. It begins with a restless college student.

When Sir Tim Berners-Lee posted the initial announcement about the web, a couple of years before the article in The New York Times, Andreessen was an undergraduate student at the University of Illinois. While he attended school, he worked at the university-affiliated computing lab known as the National Center for Supercomputing Applications (NCSA). NCSA occupied a similar space as ARPA in that both were state-sponsored projects without an explicit goal other than to further the science of computing. If you worked at NCSA, it was possible to move from project to project without arousing too much suspicion from the higher-ups.

Andreessen was supposed to be working on visualization software, which he had found a way to run mostly on auto-pilot. In his spare time, he would ricochet around the office, listening to everyone talk about what they were interested in. It was during one of those sessions that a colleague introduced him to the World Wide Web. He was immediately taken with it. He downloaded the ViolaWWW browser, and within a few days he had decided that the web would be his primary focus. He decided something else, too. He needed to make a browser of his own.

In 1992, browsers could be cumbersome software. They lacked the polish and the conventions of modern browsers without decades of learning to build off of. They were difficult to download and install, often requiring users to make modifications to system files. And early browser makers were so focused on developing the web they didn’t think too much about the visual interface of their software.

Andreessen wanted to build a well-designed, performant, easy-to-install browser while simultaneously building on the features that Wei was adding to the ViolaWWW browser. He pitched his idea to a programmer at NCSA, Eric Bina. “Marc’s a very good salesman,” Bina would later recall, so he joined up.

Taking their cue from the pace of others, Andreessen and Bina finished the first version of the Mosaic browser in just a few weeks. It was available for X Windows computers. To announce the browser, Andreessen posted a download link to the www-talk mailing list, with the message “By the power vested in me by nobody in particular, alpha/beta version 0.5 of NCSA’s Motif-based networked information systems and World Wide Web browser, X Mosaic, is hereby released.” The web got more than just a popular browser. It got its first pitchman.

That first version of the browser was impressive in a somewhat crowded field, though not for its features; it lacked forms and some media support early on. It wasn’t the best browser, nor was it the most advanced. Instead, Andreessen and Bina focused on something else entirely. Mosaic set itself apart because it was the easiest to use. The installation process was simple and the interface was, relatively speaking, intuitive.

The Mosaic browser’s secret weapon was its iteration. Before long, other programmers at NCSA wanted in on the project. They parceled off different operating systems to port the browser to; one team took the Mac, another Windows. By the fall of 1993, a few months after its initial release, Mosaic had versions at near feature parity on Mac, Windows, and Unix systems, as well as compatible server software.

After that, the pace of development only accelerated. Beta versions were released often and were available to download via FTP. New features were added at a rapid pace, and new versions seemed to ship every week. The NCSA Mosaic team was fully engaged with the web community, active on the www-talk mailing list, talking with users and gathering bug reports. It was not at all unusual to submit a bug report and hear back a few hours later from an NCSA programmer with a fix.

Andreessen was a particularly active presence, posting to threads almost daily. When the Mosaic team decided they might want to collect anonymous analytics about browser usage, Andreessen polled the www-talk list to see if it was a good idea. When he got a lot of questions about how to use HTML, he wrote a guide for beginners.

When one Mosaic user posted some issues he was having, it led to a tense back and forth between that user and Andreessen. The user claimed that, since he wasn’t a customer, Andreessen shouldn’t care too much about what he thought. Andreessen replied, “We do care what you think simply because having the wonderful distributed beta team that we essentially have due to this group gives us the opportunity to make our product much better than it could be otherwise.” What Andreessen understood better than any of the early browser makers was that Mosaic was a product, and that feedback from his users could drive its development. If they kept the feedback loop tight, they could keep the interface clean and bug-free while staying on the cutting edge of new features. It was the programming parable “given enough eyeballs, all bugs are shallow” come to life in browser development.

There was an electricity to Mosaic development at NCSA. Internal competition fueled the OS teams to get features out the door. Sometimes the Mac version would get to something first. Sometimes it was Bina and Andreessen continuing to work on X-Mosaic. “We would get together, middle of the night, and come up with some cool idea — images was an example of that — then we would go off and race and see who would do it first,” Jon Mittelhauser, creator of the Windows version of Mosaic, later recalled. Sometimes the features were duds and would hardly go anywhere at all. Other times, as Mittelhauser points out, they were absolutely essential.

In the months after launch, they started to surpass the feature list of even their nearest competitor, ViolaWWW. They added forms support and rich media. They added bookmarks for users to keep track of their links. They even created their own “What’s New” page, updated every single day, which tracked the web’s most popular links. When you opened up Mosaic, the NCSA What’s New page was the first thing you saw. They weren’t just building a browser. They were building a window to the web.

As Mittelhauser points out, it was the <img> tag that became Mosaic’s defining feature. It succeeded in doing two things. The tag was added without input from Sir Tim Berners-Lee or the wider web community. (Andreessen posted a note to www-talk only after it had already been implemented.) So firstly, it set the Mosaic team in a conflict with other browser makers and some parts of the web community that would last for years.

Secondly, it made Mosaic infinitely more popular. The <img> tag allowed images to be embedded directly inline in the Mosaic browser. Until then, people had found the web boring to browse; it was sterile, rigid, and scientific. Inline images changed all that. Within a few months, a new class of web designer was beginning to experiment with what was possible with images on the web. In some ways, it was the <img> tag that made the web famous.

The image tag prompted the feature in The New York Times, and a subsequent write-up in Wired. By the time the press got around to talking about the web, Mosaic was the most popular browser and became a surrogate for the larger web world. “Mosaic” was to browsing the web as “Google” is to searching now.

Ultimately, the higher-ups got involved. NCSA was not a tech company; it was a supercomputing lab. Management came in to help make the Mosaic browser more cohesive and, maybe, more profitable. Licenses were parceled out to a dozen or so companies. Mosaic was bundled into Spry’s Internet in a Box product. It was embedded in enterprise software by the Santa Cruz Operation.

In the end, Mosaic split off into two directions. Pressure from management pushed Andreessen to leave and start a new company. It would be called Netscape. Another of the licensees of the software was a company called Spyglass. They were beginning to have talks with Microsoft. Both would ultimately choose to rewrite the Mosaic browser from scratch, for different reasons. Yet that browser would be their starting point and their products would have lasting implications on the browser market for decades as the world began to see its first commercial browsers.



Practical Use Cases for JavaScript’s closest() Method

August 12th, 2020

Have you ever had the problem of finding the parent of a DOM node in JavaScript, but aren’t sure how many levels you have to traverse up to get to it? Let’s look at this HTML for instance:

<div data-id="123">
  <button>Click me</button>
</div>

That’s pretty straightforward, right? Say you want to get the value of data-id after a user clicks the button:

var button = document.querySelector("button");


button.addEventListener("click", (evt) => {
  console.log(evt.target.parentNode.dataset.id);
  // prints "123"
});

In this very case, the Node.parentNode API is sufficient. It returns the parent node of a given element. In the example above, evt.target is the button that was clicked; its parent node is the div with the data attribute.

But what if the HTML structure is nested deeper than that? It could even be dynamic, depending on its content.

<div data-id="123">
  <article>
    <header>
      <h1>Some title</h1>
      <button>Click me</button>
    </header>
     <!-- ... -->
  </article>
</div>

Our job just got considerably more difficult by adding a few more HTML elements. Sure, we could do something like element.parentNode.parentNode.parentNode.dataset.id, but come on… that isn’t elegant, reusable or scalable.

The old way: Using a while-loop

One solution would be to make use of a while loop that runs until the parent node has been found.

function getParentNode(el, tagName) {
  while (el && el.parentNode) {
    el = el.parentNode;
    
    if (el && el.tagName == tagName.toUpperCase()) {
      return el;
    }
  }
  
  return null;
}

Using the same HTML example from above again, it would look like this:

var button = document.querySelector("button");


console.log(getParentNode(button, 'div').dataset.id);
// prints "123"

This solution is far from perfect. Imagine if you want to use IDs or classes or any other type of selector, instead of the tag name. At least it allows for a variable number of child nodes between the parent and our source.
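A more flexible variation could accept any CSS selector by checking each ancestor with the Element.matches API. Here’s a rough sketch of that idea (getParentBySelector is a made-up helper name for illustration):

function getParentBySelector(el, selector) {
  while (el && el.parentNode) {
    el = el.parentNode;

    // `matches` only exists on element nodes, so guard against
    // reaching the `document` node at the top of the tree.
    if (el.matches && el.matches(selector)) {
      return el;
    }
  }

  return null;
}

console.log(getParentBySelector(button, "[data-id]").dataset.id);
// prints "123"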

There’s also jQuery

Back in the day, if you didn’t want to deal with writing the sort of function we did above for each application (and let’s be real, who wants that?), then a library like jQuery came in handy (and it still does). It offers a .closest() method for exactly that:

$("button").closest("[data-id='123']")

The new way: Using Element.closest()

Even though jQuery is still a valid approach (hey, some of us are beholden to it), adding it to a project just for this one method is overkill, especially if you can get the same thing with native JavaScript.

And that’s where Element.closest comes into action:

var button = document.querySelector("button");


console.log(button.closest("div"));
// prints the HTMLDivElement

There we go! That’s how easy it can be, and without any libraries or extra code.

Element.closest() allows us to traverse up the DOM until we get an element that matches the given selector. The awesomeness is that we can pass any selector we would also give to Element.querySelector or Element.querySelectorAll. It can be an ID, class, data attribute, tag, or whatever.

element.closest("#my-id"); // yep
element.closest(".some-class"); // yep
element.closest("[data-id]:not(article)") // hell yeah

If Element.closest finds a parent node matching the given selector, it returns it the same way document.querySelector does. Otherwise, if it doesn’t find a matching parent, it returns null instead, making it easy to use with if conditions:

var button = document.querySelector("button");


console.log(button.closest(".i-am-in-the-dom"));
// prints HTMLElement


console.log(button.closest(".i-am-not-here"));
// prints null


if (button.closest(".i-am-in-the-dom")) {
  console.log("Hello there!");
} else {
  console.log(":(");
}

Ready for a few real-life examples? Let’s go!

Use Case 1: Dropdowns


Our first demo is a basic (and far from perfect) implementation of a dropdown menu that opens after clicking one of the top-level menu items. Notice how the menu stays open even when clicking anywhere inside the dropdown or selecting text? But click somewhere on the outside, and it closes.

The Element.closest API is what detects that outside click. The dropdown itself is a <ul> element with a .menu-dropdown class, so clicking anywhere outside the menu will close it. That’s because the value of evt.target.closest(".menu-dropdown") is going to be null since there is no parent node with this class.

function handleClick(evt) {
  // ...
  
  // if a click happens somewhere outside the dropdown, close it.
  if (!evt.target.closest(".menu-dropdown")) {
    menu.classList.add("is-hidden");
    navigation.classList.remove("is-expanded");
  }
}

Inside the handleClick callback function, a condition decides what to do: if there is no matching parent, the dropdown closes. If the click lands somewhere inside the unordered list instead, Element.closest will find the menu and return it, and the dropdown stays open.
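For any of that to work, the callback has to see clicks from the whole page, not just from the menu. The demo presumably wires it up with a document-level listener, roughly like this sketch (the .navigation selector is an assumption for illustration; the demo’s real markup may differ):

var menu = document.querySelector(".menu-dropdown");
// Assumed selector for the nav wrapper that toggles `is-expanded`.
var navigation = document.querySelector(".navigation");

// Listen at the document level so clicks outside the
// dropdown are captured as well.
document.addEventListener("click", handleClick);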

Use Case 2: Tables


This second example renders a table that displays user information, let’s say as a component in a dashboard. Each user has an ID, but instead of showing it, we save it as a data attribute on each <tr> element.

<table>
  <!-- ... -->
  <tr data-userid="1">
    <td>
      <input type="checkbox" data-action="select">
    </td>
    <td>John Doe</td>
    <td>john.doe@gmail.com</td>
    <td>
      <button type="button" data-action="edit">Edit</button>
      <button type="button" data-action="delete">Delete</button>
    </td>
  </tr>
</table>

The last column contains two buttons for editing and deleting a user from the table. The first button has a data-action attribute of edit, and the second has a value of delete. When we click on either of them, we want to trigger some action (like sending a request to a server), but for that, the user ID is needed.

A click event listener is attached to the global window object, so whenever the user clicks somewhere on the page, the callback function handleClick is called.

function handleClick(evt) {
  var { action } = evt.target.dataset;
  
  if (action) {
    // `action` only exists on buttons and checkboxes in the table.
    let userId = getUserId(evt.target);
    
    if (action == "edit") {
      alert(`Edit user with ID of ${userId}`);
    } else if (action == "delete") {
      alert(`Delete user with ID of ${userId}`);
    } else if (action == "select") {
      alert(`Selected user with ID of ${userId}`);
    }
  }
}

If a click happens anywhere other than one of these buttons, no data-action attribute exists, hence nothing happens. However, when clicking on either button, the action will be determined (that’s called event delegation, by the way), and as the next step, the user ID will be retrieved by calling getUserId:

function getUserId(target) {
  // `target` is always a button or checkbox.
  return target.closest("[data-userid]").dataset.userid;
}

This function expects a DOM node as the only parameter and, when called, uses Element.closest to find the table row that contains the pressed button. It then returns the data-userid value, which can now be used to send a request to a server.
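As a quick illustration, the delete branch could hand that ID to a small helper along these lines (the /api/users endpoint is made up purely for illustration; adjust it to whatever your server expects):

function deleteUser(userId) {
  // Hypothetical endpoint, just for illustration.
  return fetch(`/api/users/${userId}`, { method: "DELETE" });
}

// e.g. in the "delete" branch of handleClick:
// deleteUser(getUserId(evt.target));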

Use Case 3: Tables in React

Let’s stick with the table example and see how we’d handle it in a React project. Here’s the code for a component that returns a table:

function TableView({ users }) {
  function handleClick(evt) {
    var userId = evt.currentTarget
      .closest("[data-userid]")
      .getAttribute("data-userid");

    // do something with `userId`
  }

  return (
    <table>
      {/* React expects table rows to be wrapped in a <tbody> */}
      <tbody>
        {users.map((user) => (
          <tr key={user.id} data-userid={user.id}>
            <td>{user.name}</td>
            <td>{user.email}</td>
            <td>
              <button onClick={handleClick}>Edit</button>
            </td>
          </tr>
        ))}
      </tbody>
    </table>
  );
}

I find that this use case comes up frequently — it’s fairly common to map over a set of data and display it in a list or table, then allow the user to do something with it. Many people use inline arrow functions, like so:

<button onClick={() => handleClick(user.id)}>Edit</button>

While this is also a valid way of solving the issue, I prefer to use the data-userid technique. One of the drawbacks of the inline arrow function is that each time React re-renders the list, it needs to create the callback function again, resulting in a possible performance issue when dealing with large amounts of data.

In the callback function, we simply deal with the event by extracting the target (the button) and getting the parent <tr> element that contains the data-userid value.

function handleClick(evt) {
  var userId = evt.target
    .closest("[data-userid]")
    .getAttribute("data-userid");

  // do something with `userId`
}

Use Case 4: Modals


This last example is another component I’m sure you’ve all encountered at some point: a modal. Modals are often challenging to implement since they need to provide a lot of features while being accessible and (ideally) good looking.

We want to focus on how to close the modal. In this example, that’s possible by either pressing Esc on a keyboard, clicking on a button in the modal, or clicking anywhere outside the modal.
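The two click-based ways are what Element.closest will help with below. The Esc key doesn’t need closest at all; a minimal sketch of that part, reusing the demo’s handleModalClose function from further down, could look like this:

document.addEventListener("keydown", (evt) => {
  // Close the modal whenever the Escape key is pressed.
  if (evt.key === "Escape") {
    handleModalClose();
  }
});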

In our JavaScript, we want to listen for clicks somewhere in the modal:

var modal = document.querySelector(".modal-outer");

modal.addEventListener("click", handleModalClick);

The modal is hidden by default through a .is-hidden utility class. It’s only when a user clicks the big red button that the modal opens by removing this class. And once the modal is open, clicking anywhere inside it — with the exception of the close button — will not inadvertently close it. The event listener callback function is responsible for that:

function handleModalClick(evt) {
  // `evt.target` is the DOM node the user clicked on.
  if (!evt.target.closest(".modal-inner")) {
    handleModalClose();
  }
}

evt.target is the DOM node that’s clicked, which, in this example, is either the entire backdrop behind the modal, the modal itself, or any of its contents. If the clicked node is not inside the .modal-inner element, Element.closest returns null and the modal closes.