A CMS and a CRM are both essentially database-backed systems for managing data. HubSpot is both, and much more. Where a CMS is focused on content and the metadata that makes content useful, a CRM is focused on leads and on making communication with current and potential customers easier.
They can be brothers-in-arms. We’ll get to that.
Say a CRM is set up for people. You run a Lexus dealership. There is a quote form on the website. People fill it out and enter the CRM. That lead can go to your sales team to follow up with that customer.
But a CRM could be based on other things. Say instead of people it’s based on real estate listings. Each main entry is a property, with essentially metadata like photos, address, square footage, # of bedrooms/baths, etc. Leads can be associated with properties.
That would be a nice CRM setup for a real estate agency, but the data that is in that CRM might be awfully nice for literally building a website around those property listings. Why not tap into that CRM data as literal data to build website pages from?
That’s what I mean by a CRM and CMS being brothers-in-arms. Use them both! That’s why HubSpot can be an ideal home for websites like this.
To keep that tornado of synergy going, HubSpot can also help with marketing, customer service, and integrations. So there is a lot of power packed into one platform.
And with that power, also a lot of comfort and flexibility.
You’re still developing locally.
You’re still using Git.
You can use whatever framework or site-building tools you want.
You’ve got a CLI to control things.
There is a VS Code Extension for super useful auto-complete of your data.
There is a staging environment.
And the features just keep coming. HubSpot really has a robust set of tools to make sure you can do what you need to do.
Do you have to use some third-party thing for search? Nope, they got it.
As developer-rich as this all is, it doesn’t mean that it’s developer-only. There are loads of tools for working with the website you build that require no coding at all. Dashboards for content management, data wrangling, style control, and even literal drag-and-drop page builders.
It’s all part of a very learnable system.
Themes, templates, modules, and fields are the objects you’ll work with most in HubSpot CMS as a developer. Using these different objects effectively lets you give content creators the freedom to work and iterate on websites independently while staying inside style and layout guardrails you set.
Every day design fans submit incredible industry stories to our sister-site, Webdesigner News. Our colleagues sift through it, selecting the very best stories from the design, UX, tech, and development worlds and posting them live on the site.
The best way to keep up with the most important stories for web professionals is to subscribe to Webdesigner News or check out the site regularly. However, in case you missed a day this week, here’s a handy compilation of the top curated stories from the last seven days. Enjoy!
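What follows picks up a discussion of a CSS custom property quiz from Lea Verou. The embedded demo isn’t included here, so here is a rough reconstruction pieced together from the discussion below — this is my assumption of the code, not necessarily Lea’s exact demo:

```css
:root {
  --accent-color: skyblue;
}
div {
  --accent-color: revert;
  background: var(--accent-color, orange);
}
```

The question: what background color does the div end up with?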
Well, --accent-color is declared, so it’s definitely not orange (the fallback).
The value for the background is revert, so it’s essentially background: revert;
The background property doesn’t inherit though, and even if you force it to, it would inherit from its parent element, not the root.
So… transparent.
Nope.
Lea:
[Because the value is revert it] cancels out any author styles, and resets back to whatever value the property would have from the user stylesheet and UA stylesheet. Assuming there is no --accent-color declaration in the user stylesheet, and of course UA stylesheets don’t set custom properties, then that means the property doesn’t have a value.
Since custom properties are inherited properties (unless they are registered with inherits: false, but this one is not), this means the inherited value trickles in, which is — you guessed it — skyblue.
Stephen posted a similar quiz the other day:
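Again, the embedded quiz isn’t shown here; this reconstruction is my assumption based on the discussion that follows, not necessarily Stephen’s exact code:

```css
html {
  --color: green;
}
body {
  color: yellow;
}
p {
  --color: red;
  --color: inherit;
  color: var(--color, blue);
}
```

What color is the paragraph text?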
Again, my brain does it totally wrong. It goes:
OK, well, --color is declared, so it’s not blue (the fallback).
It’s not red because the second declaration will override that one.
So, it’s essentially like p { color: inherit; }.
The p will inherit yellow from the body, which it would have done naturally anyway, but whatever, it’s still yellow.
Nope.
Apparently inherit there is actually inheriting from the next place up the tree that sets it, which html does, so green. That actually is how normal inheritance works. It’s just a brain twister because it’s easy to conflate color the property with --color the custom property.
It also might be useful to know that when you actually declare a custom property with @property you can say whether you want it to inherit or not. So that would change the game with these brain twisters!
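For example, registering the property with inherits: false would stop the custom property from trickling down the tree at all. A sketch (the property name here is just illustrative):

```css
/* Once registered this way, --accent-color no longer inherits. */
@property --accent-color {
  syntax: "<color>";
  inherits: false;
  initial-value: orange;
}
```

With that in place, a var(--accent-color) lookup on a descendant that doesn’t set the property gets the initial-value rather than an ancestor’s value, which would change the answers to both quizzes.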
I recently had to create a widget in React that fetches data from multiple API endpoints. As the user clicks around, new data is fetched and marshalled into the UI. But it caused some problems.
One problem quickly became evident: if the user clicked around fast enough, as previous network requests got resolved, the UI was updated with incorrect, outdated data for a brief period of time.
We can debounce our UI interactions, but that fundamentally does not solve our problem. Outdated network fetches will resolve and update our UI with wrong data up until the final network request finishes and updates our UI with the final correct state. The problem becomes more evident on slower connections. Furthermore, we’re left with useless network requests that waste the user’s data.
Here is an example I built to illustrate the problem. It grabs game deals from Steam via the cool Cheap Shark API using the modern fetch() method. Try rapidly updating the price limit and you will see how the UI flashes with wrong data until it finally settles.
The solution
It turns out there is a way to abort pending asynchronous requests using an AbortController. You can use it to cancel not only HTTP requests, but event listeners as well.
The AbortController interface represents a controller object that allows you to abort one or more Web requests as and when desired.
The AbortController API is simple: it exposes an AbortSignal that we insert into our fetch() calls, like so:
const abortController = new AbortController()
const signal = abortController.signal
fetch(url, { signal })
From here on, we can call abortController.abort() to make sure our pending fetch is aborted.
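For instance, here is a standalone sketch (separate from the widget below; the endpoint is the Cheap Shark API mentioned earlier) showing what actually happens on abort: the pending promise rejects with an error named "AbortError", and the signal’s aborted flag flips to true:

```javascript
const controller = new AbortController()
const { signal } = controller

console.log(signal.aborted) // false — nothing aborted yet

// Kick off a request, then cancel it right away.
fetch('https://www.cheapshark.com/api/1.0/deals', { signal })
  .then(res => res.json())
  .catch(err => {
    // An aborted fetch rejects with a DOMException named "AbortError".
    if (err.name === 'AbortError') {
      console.log('Request cancelled')
    } else {
      throw err // a genuine network failure
    }
  })

controller.abort()
console.log(signal.aborted) // true
```

Distinguishing AbortError from real failures matters: a cancelled request is expected behavior, not something to surface to the user.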
Let’s rewrite our example to make sure we are canceling any pending fetches and marshalling only the latest data received from the API into our app:
The code is mostly the same, with a few key distinctions:
It creates a new cached variable, abortController, in a useRef in the component.
For each new fetch, it initializes that fetch with a new AbortController and obtains its corresponding AbortSignal.
It passes the obtained AbortSignal to the fetch() call.
It aborts any pending fetch before starting the next one.
const App = () => {
  // Same as before: local variables and state declarations
  // ...

  // Create a new cached variable abortController in a useRef() hook
  const abortController = React.useRef()

  React.useEffect(() => {
    // If there is a pending fetch request with an associated AbortController, abort it
    if (abortController.current) {
      abortController.current.abort()
    }

    // Assign a new AbortController for the latest fetch to our useRef variable
    abortController.current = new AbortController()
    const { signal } = abortController.current

    // Same as before
    fetch(url, { signal }).then(res => {
      // Rest of our fetching logic, same as before
    })
  }, [
    sortByString,
    upperPrice,
    lowerPrice,
  ])
}
Conclusion
That’s it! We now have the best of both worlds: we debounce our UI interactions and we manually cancel outdated pending network fetches. This way, we are sure that our UI is updated only with the latest data from our API.
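One last aside: as noted earlier, an AbortSignal can also detach event listeners, which makes bulk cleanup of many listeners a one-liner. A small sketch (the target and event name here are hypothetical; in a browser the target would be a DOM element):

```javascript
const controller = new AbortController()
const target = new EventTarget() // stand-in for a DOM element

let clicks = 0
target.addEventListener('click', () => clicks++, { signal: controller.signal })

target.dispatchEvent(new Event('click')) // handler runs once
controller.abort() // removes every listener registered with this signal
target.dispatchEvent(new Event('click')) // handler no longer runs

console.log(clicks) // 1
```

Passing one shared signal to many addEventListener calls lets a single abort() tear them all down, which is handy in component unmount logic.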
In April of 2009, Yahoo! shut down GeoCities. Practically overnight, the once beloved service had its signup page replaced with a vague message announcing its closure.
We have decided to discontinue the process of allowing new customers to sign up for GeoCities accounts as we focus on helping our customers explore and build new relationships online in other ways. We will be closing GeoCities later this year.
Existing GeoCities accounts have not changed. You can continue to enjoy your web site and GeoCities services until later this year. You don’t need to change a thing right now — we just wanted to let you know about the closure as soon as possible. We’ll provide more details about closing GeoCities and how to save your site data this summer, and we will update the help center with more details at that time.
In the coming months, the company would offer little more detail than that. Within a year, user homepages built with GeoCities would blink out of existence, one by one, until they were all gone.
Reactions to the news ranged from outrage to contemptuous good riddance. In general, however, the web lamented a great loss. Former GeoCities users recalled the sites that they built using the service, often hidden from public view, and often while they were very young.
For programmer and archivist Jason Scott, nostalgic remembrances did not go far enough. He had only recently created the Archive Team, a rogue group of Internet archivists willing to lend their compute cycles to the rescue of soon-to-be-departed websites. The Archive Team monitors sites on the web marked for closure. When they find one, they run scripts on their computers to download as much of the site as they can before it disappears.
Scott did not think the question of whether or not GeoCities deserved to exist was relevant. “Please recall, if you will, that for hundreds of thousands of people, this was their first website,” he posted to his website not long after Yahoo!’s announcement. “[Y]ou could walk up to any internet-connected user, hand them the URL, and know they would be able to see your stuff. In full color.” GeoCities wasn’t simply a service. It wasn’t just some website. It was a burst of creative energy that surged from the web.
In the weeks and months that followed, the Archive Team set to work downloading as many GeoCities sites as they could. They would end up with millions in their archive before Yahoo! pulled the plug.
Chris Wilson recalled the promise of an early web in a talk looking back on his storied career with Mosaic, then Internet Explorer, and later Google Chrome. The first web browser, developed by Sir Tim Berners-Lee, included the ability for users to create their own websites. As Wilson remembers it, that was the de facto assumption about the web—that it would be a participatory medium.
“Everyone can be an author. Everyone would generate content,” Wilson said, “We had the idea that web server software should be free and everyone would run a server on their machine.” His work on Mosaic included features well ahead of their time, like built-in annotations so that users could collaborate and share thoughts on web documents together. They built server software in the hopes that groups of friends would cluster around common servers. By the time Netscape skyrocketed to popularity, however, all of those features had faded away.
GeoCities represented the last remaining bastion of this original promise of the web. Closing the service down, abruptly and without cause, was a betrayal of that promise. For some, it was the writing on the wall: the web of tomorrow was to look nothing like the web of yesterday.
In a story he recalls frequently, David Bohnett learned about the web on an airplane. Tens of thousands of feet up, untethered from any Internet network, he first saw mention of the web in a magazine. Soon thereafter, he fell in love.
Bohnett is a naturally empathetic individual. The long arc of his career so far has centered on bringing people together, both as a technologist and as a committed activist. As a graduate student, he worked as a counselor answering calls on a crisis hotline and became involved in the gay rights movement at his school. In more recent years, Bohnett has devoted his life to philanthropy.
Finding connection through compassion has been a driving force for Bohnett for a long time. At a young age, he recognized the potential of technology to help him reach others. “I was a ham radio operator in high school. It was exciting to collect postcards from people you talked to around the world,” he would later say in an interview. “[T]hat is a lot of what the Web is about.”
Some of the earliest websites brought together radical subcultures and common interests. People felt around in the dark of cyberspace until they found something they liked.
Riding a wave of riot grrrl ephemera in the early 1990s, ChickClick was an early example. Featuring a mix of articles and message boards, it was a place where women and young girls gathered and swapped stories from their own experience.
Much of the site centered on its strident creators, sisters Heather and Heidi Swanson. Though they each had their own areas of responsibility—Heidi provided the text and the editorial, Heather acted as the community liaison—both were integral parts of the community they created. ChickClick would not exist without the Swanson sisters. They anchored the site to their own personalities and let it expand through like-minded individuals.
Eventually, ChickClick grew into a network of linked sites, each focused on a narrower demographic; an interconnected universe of women on the web. The cost of expanding was virtually zero, just a few more bytes zipping around the Internet. ChickClick’s greatest innovation came when they offered their users their own homepages. Using a rudimentary website builder, visitors could create their own space on the web, for free and hosted by ChickClick. Readers were suddenly transformed into direct participants in the universe they had grown to love.
Bohnett would arrive at a similar idea not long after. After a brief detour running a more conventional web services agency called Beverly Hills Internet, Bohnett and his business partner John Rezner tried something new. In 1994, Bohnett sent around an email to some friends inviting them to create a free homepage (up to 15MB) on their experimental service. The project was called GeoCities.
What made GeoCities instantly iconic was that it reached for a familiar metaphor in its interface. When users created an account for the first time they had to pick an actual physical location on a virtual map—the digital “address” of their website. “This is the next wave of the net—not just information but habitation,” Bohnett would say in a press release announcing the project. Carving out a real space in cyberspace would become a trademark of the GeoCities experience. For many new users, it made the confusing world of the web feel lived-in and real.
The GeoCities map was broken up into a handful of neighborhoods users could join. Each neighborhood had a theme, though there wasn’t much rhyme or reason to what they were called. Some were based on real world locations, like Beverly Hills for fashion aficionados or Broadway for theater nerds. Others simply played to a theme, like Area51 for the sci-fi crowd or Heartland for parents and families. Themes weren’t enforced, and most were later dropped in everything but name.
Neighborhoods were limited to 10,000 people. When that number was reached, the neighborhood expanded into suburbs. Everywhere you went on GeoCities there was a tether to real, physical spaces.
Like any real-world community, no two neighborhoods were the same. And while some people weeded their digital gardens and tended to their homepages, others left their spaces abandoned and bare, gone almost as soon as they arrived. But a core group of people often gathered in their neighborhoods around common interests and established a set of ground rules.
Historian Ian Milligan has done extensive research on the mechanics and history of GeoCities. In his digital excavation, he discovered a rich network of GeoCities users who worked hard to keep their neighborhoods orderly and constructive. Some neighborhoods assigned users as community liaisons, something akin to a dorm room RA, or neighborhood watch. Neighbors were asked to (voluntarily) follow a set of rules. Select members acted as resources, reaching out to others to teach them how to build better homepages. “These methods, grounded in the rhetoric of both place and community,” Milligan argues, “helped make the web accessible to tens of millions of users.”
For a large majority of users, however, GeoCities was simply a place to experiment, not a formal community. GeoCities would eventually become one of the web’s most popular destinations. As more amateurs poured in, it would become known for a certain garish aesthetic, pixelated GIFs of construction workers, or bright text on bright backgrounds. People used their homepages to host their photo albums, or make celebrity fan sites, or to write about what they had for lunch. The content of GeoCities was as varied as the entirety of human experience. And it became the grounding for a lot of what came next.
“So was it community?” BlackPlanet founder Omar Wasow would later ask. “[I]t was community in the sense that it was user-generated content; it was self-expression.” Self-expression is a powerful ideal, and one that GeoCities proved can bring people together.
Many early communities, GeoCities in particular, offered a charming familiarity in real world connection. Other sites flipped the script entirely to create bizarre and imaginative worlds.
Neopets began as an experiment by students Donna Williams and Adam Powell in 1999. Its first version—a prototype that mixed Williams’ art and Powell’s tech—had many of the characteristics that would one day make it wildly popular. Users could collect and raise virtual pets inside the fictional universe of Neopia. It operated like the popular handheld toy Tamagotchi, but multiplied and remixed for cyberspace.
Beyond a loose set of guidelines, there were no concrete objectives. No way to “win” the game. There were only the pets, and pet owners. Owners could create their own profiles, which let them display an ever-expanding roster of new pets. Williams and Powell infused the site with personality pulled from their own imaginations. They created “unique characters,” as Williams would later describe it, “something fantasy-based that could live in this weird, wonderful world.”
As the site grew, the universe inside it did as well. Neopoints could be earned through online games, not so much a formal objective as an in-world currency. They could be spent on accessories or trinkets to exhibit on profiles, or be traded in the Neopian stock market (a fully operational simulation of the real one), or used to buy pets at auction. The tens of thousands of users that soon flocked to the site created an entirely new world, mapped on top of a digital one.
Like many community creators, Williams and Powell were fiercely protective of what they had built, and the people that used it. They worked hard to create an online environment that was safe and free from cheaters, scammers, and malevolent influence. Those who were found breaking the rules were kicked out. As a result, a younger audience, and one that was mostly young girls, were able to find their place inside of Neopia.
Neopians—as Neopets owners would often call themselves—rewarded the effort of Powell and Williams by enriching the world however they could. Together, and without any real plan, the users of Neopets crafted a vast community teeming with activity and with its own set of legal and normative standards. The trade market flourished. Users traded tips on customizing profiles, or worked together to find Easter eggs hidden throughout the site. One of the more dramatic examples of users taking ownership of the site was The Neopian Times, an entirely user-run in-universe newspaper documenting the fictional goings-on of Neopia. Its editorial has spanned decades, and continues to this day.
Though an outside observer might find the actions of Neopets frivolous, they were a serious endeavor undertaken by the site’s most devoted fans. It became a place for early web adventurers, mostly young girls and boys, to experience a version of the web that was fun, and predicated on an idea of user participation. Using a bit of code, Neopians could customize their profile to add graphics, colors, and personality to it. “Neopets made coding applicable and personal to people (like me),” said one former user, “who otherwise thought coding was a very impersonal activity.” Many Neopets coders went on to make that their careers.
Neopets was fun and interesting and limited only by the creativity of its users. It was what many imagined a version of the web would look like.
The site eventually languished under its own ambition. After it was purchased and run by Doug Dohring and, later, Viacom, it set its sights on becoming a multimedia franchise. “I never thought we could be bigger than Disney,” Dohring once said in a profile in Wired, revealing just how far that ambition went, “but if we could create something like Disney – that would be phenomenal.” As the site began to lean harder into somewhat deceptive advertising practices and emphasize expansion into different mediums (TV, games, etc.), Neopets began to overreach. Unable to keep pace with the rapid developments of the web, it has been sold to a number of different owners. The site is still intact, and thanks to its users, thriving to this day.
Candice Carpenter thought a village was a handy metaphor for an online community. Her business partner, and co-founder, Nancy Evans suggested adding an “i” to it, for interactive. Within a few years, iVillage would rise to the highest peak of Internet fortunes and hype. Carpenter would cultivate a reputation for being charismatic, fearless, and often divisive, a central figure in the pantheon of dot-com mythology. Her meteoric rise, however, began with a simple idea.
By the mid-90s, community was a bundled, repeatable, commoditized product (or to some, a “totally overused buzzword,” as Omar Wasow would later put it). Search portals like Yahoo! and Excite were popular, but their utility came from bouncing visitors off to other destinations. Online communities had a certain stickiness, as one profile in The New Yorker put it, “the intangible quality that brings individuals to a Web site and holds them for long sessions.”
That unique quality attracted advertisers hoping to monetize the attention of a growing base of users. Waves of investment in community, whatever that meant at any given moment, followed. “The lesson was that users in an online community were perfectly capable of producing value all by themselves,” Internet historian Brian McCullough describes. The New Yorker piece framed it differently. “Audience was real estate, and whoever secured the most real estate first was bound to win.”
TheGlobe.com was set against the backdrop of this grand drama. Its rapid and spectacular rise to prominence and fall from grace is well documented. The site itself was a series of chat rooms organized by topic, created by recent Cornell alumni Stephan Paternot and Todd Krizelman. It offered a fresh take on standard chat rooms, enabling personalization and fun in-site tools.
Backed by the notoriously aggressive Wall Street investment bank Bear Stearns, and run by green, youngish recent college grads, theGlobe rose to a heavily inflated valuation in full public view. “We launched nationwide—on cable channels, MTV, networks, the whole nine yards,” Paternot recalls in his book about his experience. “We were the first online community to do any type of advertising and the fourth or fifth site to launch a TV ad campaign.” Its collapse would be just as precipitous, and just as public. The site’s founders would be on the covers of magazines and the talk of late night television shows as examples of dot-com glut, with just a hint of schadenfreude.
So too does iVillage get tucked into the annals of dot-com history. The site’s often controversial founders were frequent features in magazine profiles and television interviews. Carpenter attracted media attention as deftly as she maneuvered her business through rounds of investment and a colossally successful IPO. Its culture was well-known in the press for being chaotic, resulting in a high rate of turnover that saw the company go through five Chief Financial Officers in four years.
And yet this ignores the community that iVillage managed to build. It began as a collection of different sites, each with a mix of message boards and editorial content centered around a certain topic. The first, a community for parents known as Parent Soup which began at AOL, was their flagship property. Before long, it spanned sixteen interconnected websites. “iVillage was built on a community model,” writer Claire Evans describes in her book Broad Band, “its marquee product was forums, where women shared everything from postpartum anxiety and breast cancer stories to advice for managing work stress and unruly teenage children.”
Carpenter had a bold and clear vision when she began, a product that had been brewing for years. After growing tired of the slow pace of growth in positions at American Express and QVC, Carpenter was given more free rein consulting for AOL. It was her first experience with an online world. There wasn’t a lot that impressed her about AOL, but she liked the way people gathered together in groups. “Things about people’s lives that were just vibrant,” she’d later remark in an interview, “that’s what I felt the Internet would be.”
Parent Soup began as a single channel on AOL, but it soon moved to the web along with similar sites for different topics and interests—careers, dating, health and more. What drew people to iVillage sites was their authenticity, their ability to center conversations around topics and bring together people that were passionate about spreading advice. The site was co-founded by Nancy Evans, who had years of experience as an editor in the media industry. Together, they resisted the urge to control every aspect of their community. “The emphasis is more on what visitors to the site can contribute on the particulars of parenthood, relationships and workplace issues,” one writer noted, “rather than on top-tier columnists spouting advice and other more traditional editorial offerings used by established media companies.”
There was, however, something that bound all of the sites together: a focus that made iVillage startlingly consistent and popular. Carpenter would later put it concisely: “the vision is to help women in their lives with the stuff big and small that they need to get through.” Even as the site expanded to millions of users, and positioned itself as a network specifically for women, and went through one of the largest IPOs in the tech industry, that simple fact would remain true.
What’s forgotten in the history of dot-com community is the community. There were, of course, lavish stories of instant millionaires and unbounded ambition. But much of the content that was created was generated by people, people that found each other across vast distances through a shared understanding. The lasting connections that became possible through these communities would outlast the boom and bust cycle of Internet business. Sites like iVillage became benchmarks for later social experiments to aspire to.
In February of 2002, Edgar Enyedy, an active contributor to a still-new Spanish version of Wikipedia, posted to the Wikipedia mailing list and to Wikipedia’s founder, Jimmy Wales. “I’ve left the project,” he announced, “Good luck with your wikiPAIDia [sic].”
As Wikipedia grew in the years after it officially launched in 2001, it began to expand to other countries. As it did, each community took on its own tenor and tone, adapting the online encyclopedia to the needs of each locale. “The organisation of topics, for example,” Enyedy would later explain, “is not the same across languages, cultures and education systems. Historiography is also obviously not the same.”
Enyedy’s abrupt exit from the project, and his callous message, was prompted by a post from Wikipedia’s first editor-in-chief Larry Sanger. Sanger had been instrumental in the creation of Wikipedia, but he had recently been asked to step back as a paid employee due to lack of funds. Sanger suggested that sometime in the near future, Wikipedia may turn to ads.
It was more wishful thinking than actual fact—Sanger hoped that ads may bring him his job back. But it was enough to spur Enyedy into action. In The Wikipedia Revolution, author Andrew Lih explains why: “Advertising is the third-rail topic in the community—touch it only if you’re not afraid to get a massive shock.”
By the end of the month, Enyedy had created an independent fork of the Spanish Wikipedia site, along with a list of demands for him to rejoin the project. The list included moving the site from a .com to a .org domain, moving servers to infrastructure owned by the community, and, of course, a guarantee that ads would not be used. Most of these demands would eventually be met, though it’s hard to tell what influence Enyedy had.
The fork of Wikipedia was both a legally and ideologically acceptable project. Wikipedia’s content is licensed under the Creative Commons license; it is freely open and distributable. The code that runs it is open source. It was never a question of whether a fork of Wikipedia was possible. It was a question of why it felt necessary. And the answer speaks to the heart of the Wikipedia community.
Wikipedia did not begin with a community, but rather as something far more conventional. The first iteration was known as Nupedia, created by Jimmy Wales in early 2000. Wales imagined a traditional encyclopedia ported into the digital space. An encyclopedia that lived online, he reasoned, could be more adaptable than the multi-volume tomes found buried in library stacks or gathering dust on bookshelves.
Wales was joined by then graduate student Larry Sanger, and together they recruited a team of expert writers and editors to contribute to Nupedia. To guarantee that articles were accurate, they set up a meticulous set of guidelines for entries. Each article contributed to Nupedia went through rounds of feedback and was subject to strict editorial oversight. After a year of work, Nupedia had less than a dozen finished articles and Wales was ready to shut the project down.
However, he had recently been introduced to the concept of a wiki, a website that anybody can contribute to. As software goes, the wiki is not overly complex. Every page has a publicly accessible “Edit” button. Anyone can go in and make edits, and those edits are tracked and logged in real time.
In order to solicit feedback on Nupedia, Wales had set up a public mailing list anyone could join. In the year since it was created, around 2,000 people had signed up. In January of 2001, he sent a message to that mailing list with a link to a wiki.
His hope was that he could crowdsource early drafts of articles from his project’s fans. Instead, users contributed a thousand articles in the first month. Within six months, there were ten thousand. Wales renamed the project to Wikipedia, changed the license for the content so that it was freely distributable, and threw open the doors to anybody that wanted to contribute.
The rules and operations of Wikipedia can be difficult to define. It has evolved almost in spite of itself. Most articles begin with a single, random contribution and evolve from there. “Wikipedia continues to grow, and articles continue to improve,” media theorist Clay Shirky wrote of the site in his seminal work Here Comes Everybody, “the process is more like creating a coral reef, the sum of millions of individual actions, than creating a car. And the key to creating those individual actions is to hand as much freedom as possible to the average user.”
From these seemingly random connections and contributions, a tight-knit group of frequent editors and writers has formed at the center of Wikipedia. Programmer and famed hacktivist Aaron Swartz described how it all came together. “When you put it all together, the story becomes clear: an outsider makes one edit to add a chunk of information, then insiders make several edits tweaking and reformatting it,” described Swartz, adding, “as a result, insiders account for the vast majority of the edits. But it’s the outsiders who provide nearly all of the content.” And these insiders, as Swartz refers to them, created a community.
“One of the things I like to point out is that Wikipedia is a social innovation, not a technical innovation,” Wales once said. In the discussion pages of articles and across mailing lists and blogs, Wikipedians have found ways to collaborate and communicate. The work is distributed and uneven—a small community is responsible for a large number of edits and refinements to articles—but it is impressively collated. Using the ethos of open source as a guide, the Wikipedia community created a shared set of expectations and norms, using the largest repository of human knowledge in existence as their anchor.
Loosely formed and fractured into factions, the Wikipedia community nevertheless follows a set of principles that it has defined over time. Their conventions are defined and redefined on a regular basis, as the community at the core of Wikipedia grows. When it finds a violation of these principles—such as the suggestion that ads would be plastered on the articles they helped create—the community sometimes reacts strongly.
Wikipedia learned from the fork of Spanish Wikipedia, and set up a continuous feedback loop that has allowed its community to remain at the center of making decisions. This was a primary focus of Katherine Maher, who became executive director of the Wikimedia Foundation, the nonprofit behind Wikipedia, in 2016, and then CEO three years later. Wikimedia’s involvement in the community, in Maher’s words, “allows us to be honest with ourselves, and honest with our users, and accountable to our users in the spirit of continuous improvement. And I think that that is a different sort of incentive structure that is much more freeing.”
The result is a hive mind sorting collective knowledge that thrives independently twenty years after it was created. Both Maher and Wales have referred to Wikipedia as a “part of the commons,” a piece of informational infrastructure as important as the cables that pipe bandwidth around the world, built through the work of community.
Fanfiction can be hard to define. It has been the seed of subcultures and an ideological outlet; the subject of intense academic and philosophical inquiry. Fanfiction has often been noted for its unity through anti-hegemony—it is by its very nature illegal or, at the very least, extralegal. Professor Bronwen Thomas has defined the practice plainly: “Stories produced by fans based on plot lines and characters from either a single source text or else a ‘canon’ of works; these fan-created narratives often take the pre-existing storyworld in a new, sometimes bizarre, direction.” Fanfiction predates the Internet, but the web acted as its catalyst.
Message boards, or forums, began as a technological experiment on the web, a way of replicating the Usenet groups and bulletin boards of the pre-web Internet. Once the technology had matured, people began to use them to gather around common interests. These often began with a niche—fans of a TV show, or a unique hobby—and then were used as a starting point for much wider conversation. Through threaded discussions, forum-goers would discuss a whole range of things in, around, and outside of the message board theme. “If urban history can be applied to virtual space and the evolution of the Web,” one writer recalls, “the unruly and twisted message boards are Jane Jacobs. They were built for people, and without much regard to profit.”
Some stayed small (and some even remain so). Others grew. Fans of the TV show Buffy the Vampire Slayer had used the official message board of the show for years. It famously took on a life of its own when the boards were shut down, and the users funded and maintained an identical version to keep the community alive. Sites like Newgrounds and DeviantART began as places to discuss games and art, respectively. Before long they were the launching pad for the careers of an entire generation of digital creators.
Fandom found something similar on the web. On message boards and on personal websites, writers swapped fanfiction stories, and readers flocked to boards to find them. They hid in plain sight, developing rules and conventions for how to share among one another without being noticed.
In the fall of 1998, developer Xing Li began posting to a number of Usenet fanfiction groups. In what would come to be known as his trademark sincerity, his message read: “I’m very happy to announce that www.fanfiction.net is now officially open!!!!!! And we have done it 3 weekss ahead of projected finish date. While everyone trick-or-treated we were hard at working debugging the site.”
Li wasn’t a fanfiction creator himself, but he thought he had stumbled upon a formula for its success. What made Fanfiction.net unique was that its community tools—built-in tagging, easy subscriptions to stories, freeform message boards for discussions—were built with fandom in mind. As one writer would later describe this winning combination, “its secret to success is its limited moderation and fully-automated system, meaning posting is very quick and easy and can be done by anyone.”
Fanfiction creators found a home at Fanfiction.net, or FF.net as it was often shortened to. Throughout its early years, Li had a nerdy and steadfast devotion to the development of the site. He’d post sometimes daily to an open changelog on the site, a mix of site-related updates and deeply personal anecdotes. “Full-text searching allows you to search for keywords/phrases within every fanfiction entry in our huge archive,” one update read. “I can’t get the song out of my head and I need to find the song or I will go bonkers. Thanks a bunch. =)” read another (the song was The Cure’s “Boys Don’t Cry”).
Li’s cult of personality and the unique position of the site made it immensely popular. For years, the fanfiction community had stuck to the shadows. FF.net gave them a home. Members took it upon themselves to create a welcoming environment, establishing norms and procedures for tagging and discoverability, as well as feedback for writers.
The result was a unique community on the web that attempted to lift one another up. “Sorry. It’s just really gratifying to post your first fic and get three hits within about six seconds. It’s pretty wild, I haven’t gotten one bad review on FF.N…” one fanfic writer posted in the site’s early days. “That makes me pretty darn happy :)”
The reader and writer relationship on FF.net was fluid. The stories generated by users acted as a reference for conversation among fellow writers and fanfiction readers. One idea often flows into the next, and it is only through sharing content that it takes on meaning. “Yes, they want recognition and adulation for their work, but there’s also the very strong sense that they want to share, to be part of something bigger than themselves. There’s a simple, human urge to belong.”
As the dot-com era waned, community was repackaged and resold as the social web. The goals of early social communities were looser than the tight niches and imaginative worlds of early community sites. Most functioned to bring one’s real life into digital space. Classmates.com, launched in 1995, is one of the earliest examples of this type of site. Its founder, Randy Conrads, believed that the web was best suited for reconnecting people with their former schoolmates.
Not long after, AsianAve launched from the chaotic New York apartment where the site’s six co-founders lived and worked. Though it had a specific demographic—Asian Americans—AsianAve was modeled after a few other early social web experiences, like SixDegrees. The goal was to simulate real life friend groups, and to make the web a fun place to hang out. “Most of Asian Avenue’s content is produced by members themselves,” an early article in The New York Times describes. “[T]he site offers tool kits to create personal home pages, chat rooms and interactive soap operas.” Eventually, one of the site’s founders, Benjamin Sun, began to explore how he could expand his idea beyond a single demographic. That’s when he met Omar Wasow.
Wasow was fascinated with technology from a young age. When he was a child, he fell in love first with early video games like Pong and Donkey Kong. By high school, he made the leap to programmer. “I begged my way out of wood shop into computer science class. And it really changed my life. I went to being somebody who consumed video games to creating video games.”
In 1993, Wasow founded New York Online, a Bulletin Board System that targeted a “broad social and ethnic ‘mix’,” instead of pulling from the same limited pool of upper-middle class tech nerds most networked projects focused on. To earn an actual living, Wasow developed websites for popular magazine brands like Vibe and Essence. It was through this work that he crossed paths with Benjamin Sun.
By the mid-1990s, Wasow had already gathered a loyal following and public profile, featured in magazines like Newsweek and Wired. Wasow’s reputation centered on his ability to build communities thoughtfully, to explore the social ramifications of his tech before and while he built it. When Sun approached him about expanding AsianAve to an African American audience, a site that would eventually be known as BlackPlanet, he applied the same thinking.
Wasow didn’t want to build a community from scratch. Any site that they built would need to be a continuation of the strong networks Black Americans had been building for decades. “A friend of mine once shared with me that you don’t build an online community; you join a community,” Wasow once put it, “BlackPlanet allowed us to become part of a network that already had centuries of black churches and colleges and barbecues. It meant that we, very organically, could build on this very powerful, existing set of relationships and networks and communities.”
BlackPlanet offered its users a number of ways to connect. A central profile—the same kind that MySpace and Facebook would later adopt—anchored a member’s digital presence. Chat rooms and message boards offered opportunities for friendly conversation or political discourse (or sometimes, fierce debate). News and email were built right into the app to make it a centralized place for living out your digital life.
By the mid-2000s, BlackPlanet was a sensation. It captured a large share of the African Americans who were coming online for the first time. Barack Obama, then a Senator running for President, joined the site in 2007. Its growth exploded into the millions; it was a seminal experience for black youth in the United States.
After being featured in a segment on The Oprah Winfrey Show, teaching Oprah how to use the Internet, Wasow’s profile reached soaring heights. The New York Times dubbed him the “philosopher-prince of the digital age” for his considered community building. “The best the Web has to offer is community-driven,” Wasow would later say. He never stopped building his community thoughtfully, and it, in turn, became an integral part of the country’s culture.
Before long, a group of developers would look at BlackPlanet and wonder how to adapt it to a wider audience. The result was the web’s first generation of true social networks.
But Bramus went the whole nine yards and looked at all of the table enhancements. Every one of these is great: the kind of thing that makes CSS ever-so-slightly less frustrating.
The CSS image-set() function has been supported in Chromium-based browsers since 2012 and in Safari since version 6. Support recently landed in Firefox 88. Let’s dive in and see what we can and can’t do today with image-set().
Multiple resolutions of the same image
Here’s what the CSS spec has to say about image-set():
Delivering the most appropriate image resolution for a user’s device can be a difficult task. Ideally, images should be in the same resolution as the device they’re being viewed in, which can vary between users. However, other factors can factor into the decision of which image to send; for example, if the user is on a slow mobile connection, they may prefer to receive lower-res images rather than waiting for a large proper-res image to load.
It’s basically a CSS background equivalent to the HTML srcset attribute for img tags. By using image-set we can provide multiple resolutions of an image and trust the browser to make the best decision about which one to use. This can be used to specify a value for three different CSS properties: content, cursor, and most useful of all, background-image.
1x is used to identify the low-res image, while 2x is used to define the high-res image. x is an alias of dppx, which stands for dots per pixel unit.
Chrome/Edge/Opera/Samsung Internet currently require a -webkit- prefix. If you’re using Autoprefixer, this will be handled automatically. Safari no longer requires the prefix but uses an older syntax that requires a url() function to specify the image path. We could also include a regular old background-image: url() to support any browsers that don’t support image-set.
.hero {
  /* Fallback */
  background-image: url("platypus.png");

  /* Chrome/Edge/Opera/Samsung; Safari will fall back to this as well */
  background-image: -webkit-image-set(url("platypus.png") 1x, url("platypus-2x.png") 2x);

  /* Standard syntax */
  background-image: image-set("platypus.png" 1x, "platypus-2x.png" 2x);
}
Now users on expensive fancy devices will see a super sharp image. Performance will be improved for users on slow connections or with cheaper screens as their browser will automatically request the lower-res image. If you wanted to be sure that the high-res image was used on high-res devices, even on slow connections, you could make use of the min-resolution media query instead of image-set. For more on serving sharp images to high density screens, check out Jake Archibald’s recent post over on his blog.
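For instance, here is a minimal sketch of that media query approach, reusing the file names from the example above (the 2dppx threshold is my assumption for what counts as a high-density screen):

```css
.hero {
  background-image: url("platypus.png");
}

/* On high-density screens, always request the sharp image,
   regardless of connection speed */
@media (min-resolution: 2dppx) {
  .hero {
    background-image: url("platypus-2x.png");
  }
}
```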
That’s pretty cool, but what I really want is to be able to adopt the latest image formats in CSS while still catering for older browsers…
New image formats
Safari 14 shipped support for WebP. It was the final modern browser to do so which means the image format is now supported everywhere (except Internet Explorer). WebP is useful in that it can make images that are often smaller than (but of the same quality as) JPG, PNG, or GIF.
There’s also a whole bunch of even newer image formats cropping up. AVIF images are shockingly tiny. Chrome, Opera and Samsung Internet have already shipped support for AVIF. It’s already in Firefox behind a flag. This image format isn’t supported by many design tools yet but you can convert images to AVIF using the Squoosh app built by the Chrome team at Google. WebP 2, HEIF and JPEG XL might also make it into browsers eventually. This is all rather exciting, but we want browsers that don’t support these newer formats to get some images. Fortunately image-set() has a syntax for that.
Using new image formats by specifying a type
Browser support note: The feature of image-set() that I’m about to talk about has pretty terrible browser support. At the time of writing, it’s only supported in Firefox 89.
HTML has supported the <picture> element for years now.
<picture>
<source srcset="./kitten.avif" type="image/avif">
<img src="./kitten.jpg" alt="a small kitten">
</picture>
image-set provides the CSS equivalent, allowing for the use of next-gen image formats by specifying the image’s MIME type:
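A sketch of that syntax, reusing the kitten images from the HTML example above (the class name is a placeholder):

```css
.kitten {
  background-image: image-set(
    "kitten.avif" type("image/avif"),
    "kitten.jpg" type("image/jpeg")
  );
}
```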
The next-gen image goes first while the fallback image for older browsers goes second. Only one image will be downloaded. If the browser doesn’t support AVIF it will ignore it and only download the second image you specify. If AVIF is supported, the fallback image is ignored.
In the above example we used an AVIF image and provided a JPEG as a fallback, but the fallback could be any widely supported image format. Here’s an example using a PNG.
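Something like this, with the same hypothetical file names:

```css
.kitten {
  background-image: image-set(
    "kitten.avif" type("image/avif"),
    "kitten.png" type("image/png")
  );
}
```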
In Chromium and Safari, specifying the type is not supported yet. That means you can use image-set today only to specify different resolutions of widely-supported image formats but not to add backwards-compatibility when using WebP or AVIF in those browsers. It should be possible to provide both multiple resolutions and multiple image formats, if you are so inclined:
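A sketch combining both axes, with placeholder file names; each entry pairs a resolution with a type():

```css
.kitten {
  background-image: image-set(
    "kitten.avif" type("image/avif") 1x,
    "kitten-2x.avif" type("image/avif") 2x,
    "kitten.jpg" type("image/jpeg") 1x,
    "kitten-2x.jpg" type("image/jpeg") 2x
  );
}
```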
Maybe you don’t need background-image at all. If you want to use modern image formats, you might be able to use the <picture> element, which has better browser support. If you set the image to position: absolute it’s easy to display other elements on top of it.
As an alternative approach to using position: absolute, CSS grid is another easy way to overlap HTML elements.
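A minimal sketch of that grid technique, assuming a wrapper containing an image and a caption (the class names are placeholders): placing both children in the same grid cell makes them overlap.

```css
.hero {
  display: grid;
}

/* Both children are placed in the same (only) grid cell */
.hero > img,
.hero > .caption {
  grid-area: 1 / 1;
}

/* The caption now sits on top of the image */
.hero > .caption {
  align-self: end;
}
```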
A new design trend has emerged in the last year: Soft UI or Neumorphism is everywhere.
Even Apple is in on the trend; the company introduced a host of changes in both its mobile and desktop operating systems that use the style. The elements of Soft UI introduced by Apple reflect various aspects of the Microsoft Fluent UI design too.
So, if soft UI is such a huge concept, what do we need to know about it? How does soft UI work, and what are the pros and cons of using it?
What is Soft UI (Neumorphism)?
Soft UI involves using highlights and shadows in design elements to make them look as though they’re layered on the page.
The term neumorphism is derived from a previous design style — skeuomorphism, where designers create something as close to its real-life counterpart as possible. If you remember the shift between iOS 6 and 7, you’ll remember the switch between skeuomorphic and flat designs. However, neumorphic design isn’t quite as dramatic.
Neumorphism doesn’t focus excessively on things like contrast or similarities between real and digital elements. Instead, this “soft UI” practice creates a smoother experience for users.
With neumorphism, you get the sense that buttons and cards are actually part of the background they’re on. This trend removes the flashier aspects of a typical interface and focuses on a softer style that stays consistent throughout the design.
The Common Features of Soft UI
Soft UI is all about smoothing out the experience by making everything feel more connected. There’s nothing overly harsh in the aesthetic, hence the term “soft.”
So, what kind of features can you expect?
Rounded Corners: Soft UI removes some of the sharper parts of the interface, like the corners on modules and segments. This allows for a more gentle appearance overall. In this experimentation from Iqonic Design, we can see how the round corners tie everything together.
Transparency and Background Blur: Background blur and transparency are more popular today since the infamous iOS 7 solution emerged. Most people hated the appearance of ultra-minimalism, combined with thin fonts. However, the background blur effect was more popular. The blur in soft UI shows that part of the window is connected to the rest of the OS. It seems like parts of the background in the app are pushing through to the surface.
Unified Symbols: Everything needs to fit perfectly in a soft UI design. Anything that doesn’t look like it’s part of the same entity throws off the experience. In this design experiment by Surja Sen Das Raj, you can see how all the colors, shadows, and gradients tie together consistently. Because everything is more uniform, the experience flows perfectly for the end-user.
Implementing Soft UI Elements in Your Design
So, what does neumorphism look like in your UI design process?
Ultimately, it’s all about subtle contrast and aligned colors. Every part of your interface needs to look like it’s part of the same form. Your element and background need to be the same color so that you can create a feeling of objects protruding from the background.
With Soft UI, the keys to success are shadows and highlights.
Let’s take a look at some key steps.
Achieving the Soft Look
When you’re designing your interface, remember that sharp edges make the interface more serious and formal. Rounded corners are more playful and friendly.
What also makes the design look lightweight and delicate is plenty of deep shadows and highlights. When you add shadows to elements, you create a visual hierarchy. The items that cast a larger, deeper shadow are the ones closest to you. That’s why only a few elements need to cast an intense shadow. Everything else should work in the background.
Gradients are part of the shadow and highlighting process in Soft UI design. Ideally, you’ll need to choose colors from the same palette, just toned down or brightened, depending on your needs. The gradient needs to be barely visible, but just enough to make the elements stand out.
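Putting the shadow and highlight advice together, here is a minimal sketch of a neumorphic card in CSS (the colors, sizes, and class name are arbitrary): the element shares the background’s hue, with a light shadow cast toward the top-left and a darker one toward the bottom-right.

```css
.soft-card {
  background: #e0e5ec;      /* same hue as the page background */
  border-radius: 16px;      /* rounded corners for the soft look */
  box-shadow:
    -6px -6px 12px #ffffff, /* highlight from the top-left */
    6px 6px 12px #a3b1c6;   /* soft shadow toward the bottom-right */
}

/* A pressed state: inset shadows make the element look recessed */
.soft-card:active {
  box-shadow:
    inset -4px -4px 8px #ffffff,
    inset 4px 4px 8px #a3b1c6;
}
```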
For white gradients, like highlights, use a very delicate color somewhere between white and your background shade. For instance, consider this design from Marina Tericheva.
Consider the Little Details
Finally, remember that the neumorphism design principle is all about little details.
Choosing a font that visually matches the background is an excellent choice. However, you can also choose something more contrasting, as this will help information stand out.
Adding a little bit of the background into your fonts might be suitable too. For instance, if you have a green font and a grey background, add a little grey into the mix.
Extra elements in your design, like allowing a button to shift into a more recessed state after being clicked, are a great way to make the soft UI more engaging. Everything your end-user interacts with needs to feel smooth and perfectly unified.
The Problems with Soft UI Design
Just because a design process is trending doesn’t mean it won’t have its issues.
Neumorphism is a fun way to make apps, operating systems, and websites feel more friendly and informal. However, this softer approach has a weak spot too.
When you’re dealing with a small margin of contrast and color where neumorphism works well, it’s hard to get the effect right every time. For instance, this all-yellow design for Dtail Studio may be overwhelming for some.
A slight deviation in saturation or a problem with your shadowing could render the entire effect of Neumorphism completely pointless.
Another major issue is accessibility. The soft UI design looks great for people who have a full visual range. However, visually impaired users might not see the same benefits. Anyone without perfect vision may see crucial objects disappearing into the background.
Your users don’t necessarily need significant vision problems to struggle with neumorphism, either. The design is all about softness that causes elements to almost blend together. People with low-quality screens that don’t have as many pixels to work with won’t see these elements.
Issues With Buttons and CTAs
Another major issue of neumorphism is that its subtlety can lead to problems with attracting clicks and conversions. Usability is the most important consideration of any UI design.
Unfortunately, when you focus on subtle elements throughout your entire interface, usability sometimes takes a hit.
Let’s consider buttons, for instance – they’re essential to any interface. To simplify the customer journey, these buttons need to be noticeable, and they need to shift into different states when your customers interact with them.
For the button experience to be excellent, users need to notice the design instantly. However, the heart of neumorphism revolves around the idea that nothing stands out too much.
This isn’t just an accessibility issue; it’s a problem for conversions too.
Neumorphism is soft on the eyes, with minimal color contrast and few color pops. This means that CTA buttons don’t stand out as much as they should. Buttons almost blend into the background, and the website struggles to pull attention to the areas that demand it most.
How to Experiment With Soft UI (Free Kits)
The key to unlocking the benefits of soft UI interfaces without getting lost in the negative points is proper experimentation. Like any new design trend, professionals and artists will need to learn how to merge the elements of soft UI together in a way that doesn’t compromise usability.
Trends in UI design can’t focus exclusively on aesthetics, as a customer’s comfort will always be an essential part of the process.
If you want to start exploring, here are some of the best kits and freebies to get you started:
Neumorphism Button kit: A button kit available in dark and light mode to help you create the best buttons for your next project.
Neumorphic Elements Sketch file: A free file for creative use, available to help you embed the right elements into your Soft UI design.
Neumorphism UI kit: A modern Soft UI kit for Figma available in 3 color variables.
Neumorphic UI kit for Adobe XD: A light-style Neumorphic kit for the Adobe XD app.
The world of design and the trends that we use are constantly changing. Companies are always searching for the best ways to connect with their users. Often, this means focusing on an interface that really connects with your target audience and delivers the best possible results.
The soft UI design trend has its benefits and its downsides. On the one hand, the smooth appearance of every element on a combined screen can deliver a delightful aesthetic. Buttons feel less imposing, and elements are friendlier and easier to interact with.
On the other hand, neumorphism also makes it difficult to truly capture your audience’s attention in the places where it matters most. It suffers from accessibility issues and requires plenty of care and practice.
PPK looks at aspect-ratio, a CSS property for layout that, for the most part, does exactly what you would think it does. It’s getting more interesting as it’s behind a flag in Firefox and Safari now, so we’ll have universal support pretty darn soon. I liked how he called it a “weak declaration” which I’m fairly sure isn’t an official term but a good way to think about it.
Because you’ve explicitly set the height and width, that is what will be respected. The aspect-ratio is weak in that it will never override a dimension that is set in any other way.
And it’s not just height and width, it could be max-height that takes effect, so maybe the element follows the aspect ratio sometimes, but will always respect a max-* value and break the aspect ratio if it has to.
It’s so weak that not only can other CSS break the aspect ratio, but content inside the element can break it as well. For example, if you’ve got a ton of text inside an element where the height is only constrained by aspect-ratio, it actually won’t be constrained; the content will expand the element.
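A quick sketch of both failure modes (the values and class names are arbitrary):

```css
.box {
  width: 200px;
  aspect-ratio: 1 / 1; /* height would resolve to 200px... */
  max-height: 150px;   /* ...but max-height wins, breaking the ratio */
}

.prose {
  width: 200px;
  aspect-ratio: 1 / 1; /* a long run of text can still stretch the box
                          taller than 200px; the ratio quietly gives way */
}
```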
I think this is all… good. It feels intuitive. It feels like good default behavior that prevents unwanted side effects. If you really need to force an aspect ratio on a box with content, the padding trick still works for that. This is just a much nicer syntax that replaces the safe uses of the padding trick.
PPK’s article gets into aspect-ratio behavior in flexbox and grid, which definitely has some gotchas. For example, if you are doing align-content: stretch;—that’s one of those things that can break an aspect ratio. Like he said: weak declaration.
There are four keywords that are valid values for any CSS property (see the title). Of those, day to day, I’d say I see inherit used the most. Perhaps because it’s been around the longest (I think?), but also because it makes logical sense (“please inherit your value from the next parent up that sets it”). You might see that with an override of a link color, for example.
/* General site styles */
a {
color: blue;
}
footer {
color: white;
}
footer a {
color: inherit;
}
That’s a decent and elegant way to handle the fact that you want the text and links in the footer to be the same color without having to set it twice.
The others behave differently though…
initial will reset the property back to the spec default.
unset is weird as heck. For a property that is inherited (e.g. color) it means inherit, and for a property that isn’t inherited (e.g. float) it means initial. That’s a brain twister for me such that I’ve never used it.
revert is similarly weird. Same deal for inherited properties: it means inherit. But for non-inherited properties it means to revert to the UA stylesheet. Kinnnnnda useful in that reverting display, for example, won’t make an element like a <div> display: inline; it will remain a sensible display: block;.
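A sketch contrasting the keywords on an inherited versus a non-inherited property (the selectors are placeholders):

```css
/* color is inherited, so unset behaves exactly like inherit */
aside {
  color: unset;    /* takes the parent's color */
}

/* display is not inherited, so unset and revert diverge */
blockquote {
  display: unset;  /* = initial: inline, per the spec default */
}
blockquote.reverted {
  display: revert; /* back to the UA stylesheet: block */
}
```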
PPK argues we need a new value, which he calls default. It reverts to the browser stylesheet in all cases, even for inherited properties. Thus it is a stronger version of revert. I agree. This keyword would be actually useful.
Amen. We have four properties for fiddling with the cascade on individual properties, but none that allow us to blast everything back to the UA stylesheet defaults. If we had that, we’d have a very powerful tool for starting fresh with styles on any given element. In one sense: scoped styles!
PPK has a fifth value he thinks would be useful: cascade. The idea (I suppose) is it kinda acts like currentColor except for any property. Sort of like a free variable you don’t have to define that gives you access to what the cascaded value would have been, except you’re going to use it in some other context (like a calculation).