
Archive for March, 2021

Chapter 7: Standards

March 11th, 2021

It was the year 1994 that the web came out of the shadow of academia and onto everyone’s screens. In particular, it was the second half of the second week of December 1994 that capped off the year with three eventful days.

Members of the World Wide Web Consortium huddled around a table at MIT on Wednesday, December 14th. About two dozen people made it to the meeting, representatives from major tech companies, browser makers, and web-based startups. They were there to discuss open standards for the web.

When done properly, standards set a technical lodestar. Companies with competing interests and priorities can orient themselves around a common set of agreed upon documentation about how a technology should work. Consensus on shared standards creates interoperability; competition happens through user experience instead of technical infrastructure.

The World Wide Web Consortium, or W3C as it is more commonly referred to, had been on the mind of the web’s creator, Sir Tim Berners-Lee, as early as 1992. He had spoken with a rotating roster of experts and advisors about an official standards body for web technologies. The MIT Laboratory for Computer Science soon became his most enthusiastic ally. After years of work, Berners-Lee left his job at CERN in October of 1994 to run the consortium at MIT. He had no intention of being a dictator. He had strong opinions about the direction of the web, but he still preferred to listen.

W3C, 1994

On the agenda — after the table had been cleared with some basic introductions — was a long list of administrative details that needed to be worked out. The role of the consortium, the way it conducted itself, and its responsibilities to the wider web were little more than sketched out at the beginning of the meeting. Little by little, the 25 or so members walked through the list. By the end of the meeting, the group felt confident that the future of web standards was clear.

The next day, December 15th, Jim Clark and Marc Andreessen announced the recently renamed Netscape Navigator version 1.0. It had been out for several months in beta, but that Thursday marked a wider release. In a bid for a growing market, it was initially given away for free. Several months later, after the release of version 1.1, Netscape would be forced to walk that back. In either case, the browser was a commercial and technical success, improving on the speed, usability, and features of browsers that had come before it.

On Friday, December 16th, the W3C experienced its first setback. Berners-Lee never meant for MIT to be the exclusive site of the consortium. He planned for CERN, the birthplace of the web and home to some of its greatest advocates, to be a European host for the organization. On December 16th, however, CERN approved a massive budget for its Large Hadron Collider, forcing them to shift priorities. A refocused budget left little room for hypertext Internet experiments not directly contributing to the central project of particle physics.

CERN would no longer be the European host of the W3C. All was not lost. Months later, the W3C set up at France’s National Institute for Research in Computer Science and Control, or INRIA. By 1996, a third site at Japan’s Keio University would also be established.

Far from an outlier, this would be neither the last setback the W3C faced nor the last it would overcome.


In 1999, Berners-Lee published an autobiographical account of the web’s creation in a book entitled Weaving the Web. It is a concise and even history, a brisk walk through the major milestones of the web’s first decade. Throughout the book, he often returns to the subject of the W3C.

He frames the web consortium, first and foremost, as a matter of compromise. “It was becoming clear to me that running the consortium would always be a balancing act, between taking the time to stay as open as possible and advancing at the speed demanded by the onrush of technology.” Striking a balance between shared compatibility and shorter and shorter browser release cycles would become a primary objective of the W3C.

Web standards, he concedes, thrive on tension. Standards are developed amidst disagreement and hard-won bargains. Recalling a time just before the W3C’s creation, Berners-Lee notes how the standards process reflects the structure of the web. “It struck me that these tensions would make the consortium a proving ground for the relative merits of weblike and treelike societal structures,” he wrote. “I was eager to start the experiment.” A web consortium born of compromise and defined by tension, however, was not Berners-Lee’s first plan.

In March of 1992, Berners-Lee flew to San Diego to attend a meeting of the Internet Engineering Task Force, or IETF. Created in 1986, the IETF develops standards for the Internet, ranging from networking to routing to DNS. IETF standards are unenforceable and entirely voluntary. They are not sanctioned by any world government or subject to any regulations. No entity is obligated to use them. Instead, the IETF relies on a simple conceit: interoperability helps everyone. It has been enough to sustain the organization for decades.

Because everything is voluntary, the IETF is managed by a labyrinthine set of rules and ritualistic processes that can be difficult to understand. There is no formal membership, though anyone can join (in its own words it has “no members and no dues”). Everyone is a volunteer; no one is paid. The group meets in person three times a year at shifting locations.

The IETF operates on a principle known as rough consensus (and, oftentimes, running code). Rather than a formal voting process, disputed proposals need to come to some agreement where most, if not all, of the members in a technology working group agree. Working group members decide when rough consensus has been met, and its criteria shift from year to year and group to group. In some cases, the IETF has turned to humming to take the temperature of a room. “When, for example, we have face-to-face meetings… instead of a show of hands, sometimes the chair will ask for each side to hum on a particular question, either ‘for’ or ‘against’.”

It is against the backdrop of these idiosyncratic rules that Berners-Lee first came to the IETF in March of 1992. He hoped to set up a working group for each of the primary technologies of the web: HTTP, HTML, and the URI (which would later be renamed to URL through the IETF). In March he was told he would need another meeting, this one in June, to formally propose the working groups. Somewhere close to the end of 1993, a year and a half after he began, he had persuaded the IETF to set up all three.

The process of rough consensus can be slow. The web, by contrast, had redefined what fast could look like. New generations of browsers were coming out in months, not years. And this was before Netscape and Microsoft got involved.

The development of the web had spiraled outside Berners-Lee’s sphere of influence. Inline images — the feature perhaps most responsible for the web’s success — were the product of a late-night brainstorming session over snacks and soda in the basement of a university lab. Berners-Lee learned about them when everyone else did, when Marc Andreessen posted the idea to the www-talk mailing list.

Tension. Berners-Lee knew that it would come. He had hoped, for instance, that images might be treated differently (“Tim bawled me out in the summer of ’93 for adding images to the thing,” Andreessen would later say), but the web was not his. It was not anybody’s. He had designed it that way.

With all of its rules and rituals, the IETF did not seem like the right fit for web standards. In private discussions at universities and research labs, Berners-Lee had begun to explore a new path: something like a consortium of stakeholders in the web — a collection of companies that create browsers and websites and software — that could come together to reach a rough consensus of their own. By the end of 1993, his work on the W3C had already begun.


Dave Raggett, a seasoned researcher at Hewlett-Packard, had a different view of the web. He wasn’t from academia, and he wasn’t working on a browser (not yet anyway). He understood almost instinctively the utility of the web as commercial software. Something less like a digital phonebook and more like Apple’s wildly successful HyperCard application.

Unable to convince his bosses of the web’s promise, Raggett used the ten percent of time HP allowed its employees for independent research to begin working with the web. He anchored himself to the community, an active member of the www-talk mailing list and a regular presence at IETF meetings. In the fall of 1992, he had a chance to visit with Berners-Lee at CERN.

Yuri Rubinsky

It was around this time that he met Yuri Rubinsky, an enthusiastic advocate for Standard Generalized Markup Language, or SGML, the language that HTML was originally based on. Rubinsky believed that the limitations of HTML could be solved by a stricter adherence to the SGML standard. He had begun a campaign to bring SGML to the web. Raggett agreed — but only to a point. He was not yet ready to sever ties with HTML.

Each time Mosaic shipped a new version, or a new browser was released, the gap between the original HTML specification and the real world web widened. Raggett believed that a more comprehensive record of HTML was required. He began working on an enhanced version of HTML, and a browser to demo its capabilities. Its working title was HTML+.

Raggett’s work soon began to spill over into his home life. He’d spend most nights “at a large computer that occupied a fair portion of the dining room table, sharing its slightly sticky surface with paper, crayons, Lego bricks and bits of half-eaten cookies left by the children.” After a year of around-the-clock work, Raggett had a version of HTML+ ready to go in November of 1993. His improvements to the language were far from superficial. He had managed to add all of the little things that had made their way into browsers: tables, images with captions and figures, and advanced forms.

Several months later, in May of 1994, developers and web enthusiasts traveled from all over the world to come to what some attendees would half-jokingly refer to as the “Woodstock of the Web,” the first official web conference organized by CERN employee and web pioneer Robert Cailliau. Of the 800 people clamoring to come, the space in Geneva could hold only 350. Many were meeting for the first time. “Everyone was milling about the lobby,” web historian Marc Weber would later describe, “electrified by the same sensation of meeting face-to-face actual people who had been just names on an email or on the www-talk [sic] mailing list.”

Members of the first conference

It came at a moment when the web stood on the precipice of ubiquity. Nobody from the Mosaic team had managed to make it (they had their own competing conference set for just a few months later), but there were already rumors about Mosaic alum Marc Andreessen’s new commercial browser that would later be called Netscape Navigator. Mosaic, meanwhile, had begun to license its browser for commercial use. An early version of Yahoo! was growing exponentially as more and more publications, like GNN, Wired, The New York Times, and The Wall Street Journal, came online.

Progress at the IETF, on the other hand, had been slow. It was too meticulous, too precise. In the meantime, browsers like Mosaic had begun to add whatever they wanted — particularly to HTML. Tags supported by Mosaic couldn’t be found anywhere else, and website creators were forced to choose between cutting-edge technology and compatibility with other browsers. Many were choosing the former.

HTML+ was the biggest topic of conversation at the conference. But another highlight was when Dan Connolly — a young, “red-haired, navy-cut Texan” who worked at the supercomputer manufacturer Convex — took the stage. He gave a talk called “Interoperability: Why Everyone Wins.” Later, and largely because of that talk, Connolly would be made chair of the IETF HTML Working Group.

In a prescient moment capturing the spirit of the room, Connolly described a future when the language of HTML fractured. When each browser implemented their own set of HTML tags in an effort to edge out the competition. The solution, he concluded, was an HTML standard that was able to evolve at the pace of browser development.

Raggett’s HTML+ made a strong case for becoming that standard. It was exhaustive, describing the new HTML used in browsers like Mosaic in near-perfect detail. “I was always the minimalist, you know, you can get it done without that,” Connolly later said. “Raggett, on the other hand, wanted to expand everything.” The two struck an agreement. Raggett would continue to work through HTML+ while Connolly focused on a narrower upgrade.

Connolly’s version would soon become HTML 2, and after a year of back and forth and rough consensus building at the IETF, it became an official standard. It didn’t have nearly the detail of HTML+, but Connolly was able to officially document features that browsers had been supporting for years.

Raggett’s proposal, renamed to HTML 3, was stuck. In an effort to accommodate an expanding web, it continued to grow in size. “To get consensus on a draft 150 pages long and about which everyone wanted to voice an opinion was optimistic – to say the least,” Raggett would later put it, rather bluntly. But by then, Raggett was already working at the W3C, where HTML 3 would soon become a reality.


Berners-Lee also spoke at the first web conference in Geneva, closing it out with a keynote address. He didn’t specifically mention the W3C. Instead, he focused on the role of the web. “The people present were the ones now creating the Web,” he would later write of his speech, “and therefore were the only ones who could be sure that what the systems produced would be appropriate to a reasonable and fair society.”

In October of 1994, he embarked on his own part in making a more equitable and accessible future for the web. The World Wide Web Consortium was officially announced. Berners-Lee was joined by a handful of employees — a list that included both Dave Raggett and Dan Connolly. Two months later, in the second half of the second week of December of 1994, the members of the W3C met for the first time.

Before the meeting, Berners-Lee had a rough sketch of how the W3C would work. Any company or organization could join, provided it paid a membership fee set by a tiered pricing structure tied to the size of the company. Member organizations would send representatives to W3C meetings to provide input into the process of creating standards. By limiting W3C proceedings to paying members, Berners-Lee hoped to focus and scope the conversations to real-world implementations of web technologies.

Yet despite a closed membership, the W3C operates in the open whenever possible. Meeting notes and documentation are open to the public. Any code written as part of experiments in new standards is freely downloadable.

Gathered at MIT, the W3C members next had to decide how the consortium’s standards would work. They decided on a process that stops just short of rough consensus. Though they are often called standards, the W3C does not create official standards for the web. The technical specifications created at the W3C are known, in their final form, as recommendations.

They are, in effect, proposals. They outline, in great detail, how exactly a technology works. But they leave enough open that it is up to browsers to figure out exactly how the implementation works. “The goal of the W3C is to ensure interoperability of the Web, and in the long range that’s realistic,” former head of communications at the W3C Sally Khudairi once described it, “but in the short range we’re not going to play Web cops for compliance… we can’t force members to implement things.”

Initial drafts create a feedback loop between the W3C and its members. They provide guidance on web technologies, but even as specifications are in the process of being drafted, browsers begin to introduce them and developers are encouraged to experiment with them. Each time issues are found, the draft is revised, until enough consensus has been reached. At that point, a draft becomes a recommendation.

There would always be tension, and Berners-Lee knew that well. The trick was not to try to resist it, but to create a process where it becomes an asset. Such was the intended effect of recommendations.

At the end of 1995, the IETF HTML working group was replaced by a newly created W3C HTML Editorial Review Board. HTML 3.2 would be the first HTML version released entirely by the W3C, based largely on Raggett’s HTML+.


There was a year in web development, 1997, when browsers broke away from the still-new recommendations of the W3C. Microsoft and Netscape began to release a new set of features separate and apart from agreed upon standards. They even had a name for them. They called them Dynamic HTML, or DHTML. And they almost split the web in two.

DHTML was originally celebrated. Dynamic meant fluid. A natural evolution from HTML’s initial inert state. The web, in other words, came alive.

Touting its capabilities, a feature in Wired in 1997 referred to DHTML as the “magic wand Web wizards have long sought.” In its enthusiasm for the new technology, it makes a small note that “Microsoft and Netscape, to their credit, have worked with the standards bodies,” specifically on the introduction of Cascading Style Sheets, or CSS, but that most features were being added “without much regard for compatibility.”

The truth on the ground was that using DHTML required targeting one browser or another, Netscape or Internet Explorer. Some developers simply picked a side, slapping a banner at the bottom of their site that displayed “Best Viewed In…” one browser or the other. Others ignored the technology entirely, hoping to avoid its tangled complexity.

Browsers had their reasons, of course. Developers and users were asking for things not included in the official HTML specification. As one Microsoft representative put it, “In order to drive new technologies into the standards bodies, you have to continue innovating… I’m responsible to my customers and so are the Netscape folks.”

A more dynamic web was not a bad thing, but a splintered web was untenable. For some developers, it would prove to be the final straw.


Following the release of HTML 3.2, and with the rapid advancement of browsers, the HTML Editorial Review Board was divided into three parts. Each was given a separate area of responsibility to make progress on, independent of the others.

Dr. Lauren Wood (Photo: XML Summer School)

Dr. Lauren Wood became chair of the Document Object Model Working Group. A former theoretical nuclear physicist, Wood was the Director of Product Technology at SoftQuad, a company founded by SGML advocate Yuri Rubinsky. While there, she helped work on the HoTMetaL HTML editor. The DOM spec created a standardized way for browsers to implement Dynamic HTML. “You need a way to tie your data and your programs together,” was how Wood described it, “and the Document Object Model is that glue.” Her work on the Document Object Model, and later XML, would have a long-lasting influence on the web.
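To make that “glue” concrete, here is a minimal sketch of the DOM tying a document to a script — written with present-day DOM methods for readability rather than the exact Level 1 API of the late 1990s:

    <p id="greeting">Hello, static web.</p>

    <script>
      // The browser exposes the page as a tree of objects;
      // a script can find a node and rewrite it on the fly.
      var node = document.getElementById("greeting");
      node.textContent = "Hello, dynamic web.";
    </script>

The page and the program share one standardized model of the document, which is exactly what made Dynamic HTML possible across browsers.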

The Cascading Style Sheets Working Group was chaired by Chris Lilley. Lilley’s background was in computer graphics, as a teacher and specialist in the Computer Graphics Unit at the University of Manchester. Lilley had worked at the IETF on the HTML 2 spec, as well as a specification for Portable Network Graphics (PNG), but this would mark his first time as a working group chair.

CSS was still a relative newcomer in 1997. It had been in the works for years, but had yet to have a major release. Lilley would work alongside the creators of CSS — Håkon Lie and Bert Bos — to create the first CSS standard.

The final working group was for HTML, left under the auspices of Dan Connolly, continuing his position from the IETF. Connolly had been around the web almost as long as Berners-Lee had. He was one of the people watching back in October of 1991, when Berners-Lee demoed the web for a small group of unimpressed people at a hypertext conference in San Antonio. In fact, it was at that conference that he first met the woman who would later become his wife.

After he returned home, he experimented with the web. He messaged Berners-Lee a month later. It was only four words: “You need a DTD.”

When Berners-Lee developed the language of HTML, he borrowed its conventions from a predecessor, SGML. IBM developed Generalized Markup Language (GML) in the early 1970s to make it easier for typists to create formatted books and reports. However, it quickly got out of control, as people would take shortcuts and use whatever version of the tags they wanted.

That’s when they developed the Document Type Definition, or as Connolly called it, a DTD. DTDs are what added the “S” (Standardized) to GML. Using SGML, you can create a standardized set of instructions for your data, its schema and its structure, to help computers understand how to interpret it. These instructions are a document type definition.
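As a rough illustration — the element names here are invented for the example — a small DTD fragment declares which elements a document may contain and how they nest:

    <!ELEMENT recipe (title, ingredient+)>
    <!ELEMENT title (#PCDATA)>
    <!ELEMENT ingredient (#PCDATA)>
    <!ATTLIST ingredient quantity CDATA #IMPLIED>

Any document that claims to follow the definition can be checked against it, which is precisely the discipline Connolly wanted for HTML.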

Beginning with version 2, Connolly added a type definition to HTML. It limited the language to a smaller set of agreed-upon tags. In practice, browsers treated this more as a loose definition, continuing to implement their own DHTML features and tags. But it was a first step.

In 1997, the HTML Working Group, now inside of the W3C, began to work on the fourth iteration of HTML. It expanded the language, adding to the specification far more advanced features, complex tables and forms, better accessibility, and a more defined relationship with CSS. But it also split HTML from a single schema into three different document type definitions for browsers to adopt.

The first, Frameset, was not typically used. The second, Transitional, was there to accommodate the mistakes of the past. It covered a larger subset of HTML, including the non-standard, presentational tags that browsers had used for years, such as <font> and <center>. This was set as the default for browsers.

The third DTD was called Strict. Under the Strict definition, HTML was pared down to only its standard, non-presentational features. It removed all of the unique tags introduced by Netscape and Microsoft, leaving only structured elements. If you use HTML today, it likely draws on the same base of tags.

The Strict definition drew a line in the sand. It said, this is HTML. And it finally gave a way for developers to code once for every browser.
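In practice, a page declared which definition it followed with a doctype line at the very top of the document. For reference, these are the three declarations as they appear in the slightly later HTML 4.01 revision:

    <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN"
      "http://www.w3.org/TR/html4/strict.dtd">

    <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"
      "http://www.w3.org/TR/html4/loose.dtd">

    <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Frameset//EN"
      "http://www.w3.org/TR/html4/frameset.dtd">

A validator could check a page against whichever definition it declared, making the line in the sand something a machine could enforce.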


In the August 1998 issue of Computerworld — tucked between large features on the impending doom of Y2K, the bristling potential of billing on the World Wide Web, and antitrust concerns about Microsoft — was a small announcement. Its headline read, “Browser standards targeted.” It was about the creation of a new grassroots organization of web developers aimed at bringing web standards support to browsers. It was called the Web Standards Project.

Glenn Davis, co-creator of the project, was quoted in the announcement. “The problem is, with each generation of the browser, the browser manufacturers diverge farther from standards support.” Developers, forced to write different code for different browsers for years, had simply had enough. A few off-hand conversations in mailing lists had spiraled into a fully grown movement. At launch, 450 developers and designers had already signed up.

Davis was not new to the web, and he understood its challenges. His first experience on the web dated all the way back to 1994, just after Mosaic had first introduced inline images, when he created the gallery site Cool Site of the Day. Each day, he would feature a single homepage from an interesting or edgy or experimental site. For a still small community of web designers, it was an instant hit.

There were no criteria other than sites that Davis thought were worth featuring. “I was always looking for things that push the limits,” was how he would later define it. Davis helped to redefine the expectations of the early web, using the moniker cool as a shorthand to encompass many possibilities. Dot-com Design author and media professor Megan Ankerson points out that “this ecosystem of cool sites gestured towards the sheer range of things the web could be: its temporal and spatial dislocations, its distinction from and extension of mainstream media, its promise as a vehicle for self-publishing, and the incredible blend of personal, mundane, and extraordinary.” For a time on the web, Davis was the arbiter of cool.

As time went on, Davis transformed his site into Project Cool, a resource for creating websites. In the days of DHTML, Davis’ Project Cool tutorials provided constructive and practical techniques for making the most out of the web. And a good amount of his writing was devoted to explaining how to write code that was usable in both Netscape Navigator and Microsoft’s Internet Explorer. He eventually reached a breaking point, along with many others. At the end of 1997, Netscape and Microsoft both released their 4.0 browsers with spotty standards support. It was already clear that the upcoming 5.0 releases were planning to lean even further into uneven and contradictory DHTML extensions.

Running out of patience, Davis helped set up a mailing list with George Olsen and Jeffrey Zeldman. The list started with two dozen people, but it gathered support quickly. The Web Standards Project, known as WaSP, officially launched from that list in August of 1998. It began with a few hundred members and an announcement in magazines like Computerworld. Within a few months, it would have tens of thousands of members.

The strategy for WaSP was to push browsers — publicly and privately — into web standards support. WaSP was not meant to be a hyperbolic name. “The W3C recommends standards. It cannot enforce them,” Zeldman once said of the organization’s strategy, “and it certainly is not about to throw public tantrums over non-compliance. So we do that job.”

A prominent designer and standards advocate, Zeldman would have an enduring influence on makers of the web. He would later run WaSP during some of its most influential years. His website and mailing list, A List Apart, would become a gathering place for designers who cared about web standards and using the latest web technologies.

WaSP would change focus several times during their decade-and-a-half tenure. They pushed browsers to make better use of HTML and CSS. They taught developers how to write standards-based code. They advocated for greater accessibility and tools that supported standards out of the box.

But their mission, published to their website on the first day of launch, would never falter. “Our goal is to support these core standards and encourage browser makers to do the same, thereby ensuring simple, affordable access to Web technologies for all.”

WaSP succeeded in their mission on a few occasions early on. Some browsers, notably Opera, had standards baked in at the beginning; their efforts were praised by WaSP. But the two browsers that collectively made up a majority of web use — Internet Explorer and Netscape Navigator — would need some work.

A four-billion-dollar sale to AOL in 1998 was not enough for Netscape to compete with Microsoft. After the release of Netscape 4.0, they doubled down on a bold strategy, choosing to release the entire browser’s code as open source under the Mozilla project. Everyday consumers could download it for free; coders were encouraged to contribute directly.

Members of the community soon noticed something in Mozilla. It had a new rendering engine, often referred to as Gecko. Unlike planned releases of Netscape 5, which had patchy standards support at best, Gecko supported a fairly complete version of HTML 4 and CSS.

WaSP diverted their formidable membership to the task of pushing Netscape to include Gecko in its next major release. One familiar WaSP tactic was known as roadblocking. Some of its members worked at publications like HotWired and CNet. WaSP would coordinate articles across several outlets all at once criticizing, for instance, Netscape’s neglect of standards in the face of a perfectly reasonable solution in Gecko. By doing so, they were often able to capture the attention of at least one news cycle.

WaSP also took more direct action. Members were asked to send emails to browser makers, or sign petitions showing widespread support for standards. Overwhelming pressure from developers was occasionally enough to push browsers in the right direction.

In part because of WaSP, Netscape agreed to make Gecko part of version 5.0. Beta versions of Netscape 5 would indeed have standards-compliant HTML and CSS, but it was beset with issues elsewhere. It would take years for a release. By then, Microsoft’s dominion over the browser market would be near complete.

As one of the largest tech companies in the world, Microsoft was more insulated from grassroots pressure. The on-the-ground tactics of WaSP proved less successful when turned against the tech giant.

But inside the walls of Microsoft, WaSP had at least one faithful follower: developer Tantek Çelik. Çelik had fought tirelessly on the side of web standards for as long as his web career stretched back. He would later become a member of the WaSP Steering Committee and a representative for a number of working groups at the W3C, working directly on the development of standards.

Tantek Çelik (Photo: Tantek.com)

Çelik ran a team inside of Internet Explorer for Mac. Though it shared a name, branding, and general features with its far more ubiquitous Windows counterpart, IE for Mac ran on a separate codebase. Çelik’s team was largely left to its own devices in a colossal organization with other priorities working on a browser that not many people were using.

With the direction of the browser largely left up to him, Çelik began to reach out to web designers in San Francisco at the cutting edge of web technology. Through a stroke of luck he was connected to several members of the Web Standards Project. He’d visit with them and ask what they wanted to see in the Mac IE browser. “The answer: better standards support.”

They helped Çelik realize that his work on a smaller browser could be impactful. If he was able to support standards, as they were defined by the W3C, it could serve as a baseline for the code that the designers were writing. They had enough to worry about with buggy standards in IE for Windows and Netscape, in other words. They didn’t need to also worry about IE for Mac.

That was all that Çelik needed to hear. When Internet Explorer 5.0 for Mac launched in 2000, it had across-the-board support for web standards: HTML, PNG images, and, most impressively, one of the most ambitious implementations of the new Cascading Style Sheets (CSS) specification.

It would take years for the Windows version to get anywhere close to the same kind of support. Even half a decade later, after Çelik left to work at the search engine Technorati, they were still playing catch-up.


Towards the end of the millennium, the W3C found themselves at a fork in the road. They looked to their still-recent past and saw it filled with contentious support for standards — incompatible browsers with their own priorities. Then they looked the other way, to their towering future. They saw a web that was already evolving beyond the confines of personal computers. One that would soon exist on TVs and in cell phones and on devices that hadn’t been dreamed up yet, in paradigms yet to be invented. Their past and their future were incompatible. And so, they reacted.

Yuri Rubinsky had an unusual talent for making connections. In his time as a standards advocate, developer, and executive at a major software company, he had managed to find time to connect some of the web’s most influential proponents. Sadly, Rubinsky died suddenly and at a young age in 1996, but his influence would not soon be forgotten. He carried with him an infectious energy and a knack for persuasion. His friend and colleague Peter Sharpe would say upon his death that in “talking to the people from all walks of life who knew Yuri, there was a common theme: Yuri had entered their lives and changed them forever.”

Rubinsky devoted his career to making technology more accessible. He believed that without equitable access, technology was not worth building. It motivated all of the work he did, including his longstanding advocacy of SGML.

SGML is a meta-language and “you use it to build your own computer languages for your own purposes.” If you hand a document over to a computer, SGML is how you can give that computer instructions on how to understand it. It provides a standardized way to describe the structure of data — the tags that it uses and the order it is expected in. The ownership of data, therefore, is not locked up and defined at some unknown level; it is given to everybody.

Rubinsky believed in that kind of universal access, a world in which machines talked to each other in perfect harmony, passing sets of data between them, structured, ordered, and formatted for its users. His company, SoftQuad, built software for SGML. He organized and spoke at conferences about it. He created SGML Open, a consortium not unlike the W3C. “SGML provides an internationally standardized, vendor-supported, multi-purpose, independent way of doing business,” was how he once described it, “If you aren’t using it today, you will be next year.” He was almost right.

He had a mission on the web as well. HTML is actually based on SGML, though it uses only a small part of it. Rubinsky was beginning to have conversations with members of the W3C, like Berners-Lee and Raggett, about bringing a more comprehensive version of SGML to the web. He was even writing a book called SGML on the Web before his death.

In the hallways of conferences and in threaded mailing lists, Rubinsky used his unique propensity for persuasion to bring several people together on the subject, including Dan Connolly, Lauren Wood, Jon Bosak, James Clark, Tim Bray, and others. Eventually, those conversations moved into the W3C. They formed a formal working group and, in November of 1996, eXtensible Markup Language (XML) was formally announced, and then adopted as a W3C Recommendation. The announcement took place at an annual SGML conference in Boston, run by an organization where Rubinsky sat on the Board of Directors.

XML is SGML, minus a few things, renamed and repackaged as a web language. That means it goes far beyond the capabilities of HTML, giving developers a way to define their own structured data with completely unique tags (for example, a tag describing an ingredient in a recipe, or an author in an article). Over the years, XML has become the backbone of widely used technologies, like RSS and MathML, as well as server-level APIs.
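As a small, hypothetical illustration — the vocabulary below is made up for the example — a recipe marked up in XML can carry tags that describe exactly what each piece of data means:

    <?xml version="1.0" encoding="UTF-8"?>
    <recipe>
      <title>Oatmeal Cookies</title>
      <!-- Tags name the data itself, not how it should look -->
      <ingredient quantity="2" unit="cups">rolled oats</ingredient>
      <ingredient quantity="1" unit="cup">brown sugar</ingredient>
    </recipe>

Any program that understands XML can parse that document, even if it has never heard of a recipe before.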

XML was appealing to the maintainers of HTML, a language that was beginning to feel somewhat complete. “When we published HTML 4, the group was then basically closed,” Steve Pemberton, chair of the HTML working group at the time, described the situation. “Six months later, though, when XML was up and running, people came up with the idea that maybe there should be an XML version of HTML.” The merging of HTML and XML became known as XHTML. Within a year, it was the W3C’s main focus.

The first iterations of XHTML, drafted in 1998, were not that different from what already existed in the HTML specifications. The only real difference was that it had stricter rules for authors to follow. But that small constraint opened up new possibilities for the future, and XHTML was initially celebrated. The Web Standards Project issued a press release on the day of its release lauding its capabilities, and developers began to make use of the stricter markup rules required, in line with the work Connolly had already done with Document Type Definitions.
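Roughly speaking — this pair of snippets is illustrative rather than lifted from the specification — the stricter rules meant lowercase tag names, quoted attributes, and explicitly closed elements:

    <!-- Loose, tag-soup HTML that browsers tolerated -->
    <P>An image:<IMG SRC=photo.jpg><BR>

    <!-- The same content as well-formed XHTML -->
    <p>An image: <img src="photo.jpg" alt="A photo" /><br /></p>

The payoff for that extra discipline was markup a machine could parse as unambiguously as any other XML document.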

XHTML represented a web with deeper meaning. Data would be owned by the web’s creators. And together, computers and programmers could create a more connected and understandable web. That meaning was labeled semantics. The Semantic Web would become the W3C’s greatest ambition, and they would chase it for close to a decade.

W3C, 2000

Subsequent versions of XHTML would introduce even stricter rules, leaning harder into the structure of XML. Released in 2002, the XHTML 2.0 specification became a harbinger of the language’s decline. It removed backwards compatibility with older versions of HTML, even as Microsoft’s Internet Explorer — the leading browser by a wide margin at this point — refused to support it. “XHTML 2 was a beautiful specification of philosophical purity that had absolutely no resemblance to the real world,” said Bruce Lawson, an HTML evangelist for Opera at the time.

Rather than uniting standards under a common banner, XHTML — and the refusal of major browsers to fully implement it — threatened to split the web apart permanently. It would take something bold to push web standards in a new direction. But that was still years away.


The post Chapter 7: Standards appeared first on CSS-Tricks.


Categories: Designing, Others

8 Domain Name Myths Every Web Designer Should Know

March 10th, 2021

A domain name is an essential element of every project, product, and company. It’s central to a brand and has a disproportionately large impact on user experience. Not only that, but it also impacts SEO and ultimately revenue.

Domain names are also one of the most commonly retailed elements in web technology, with most designers hoarding a small empire’s worth of domain names “just in case” the right side-project comes along.

Because so much of the information and advice on domain names is provided by companies selling domain names and is therefore not impartial, we wanted to bust some of the myths you’ll encounter.

Myth 1: Anyone Can Own a Domain Name

In fact, almost no one can own a domain name. As demonstrated by the (probably) annual renewal notices you receive, you are merely renting a domain name.

You pay a registrar, who registers the domain with ICANN (The Internet Corporation for Assigned Names and Numbers) — or an entity to whom ICANN has delegated the responsibility for a particular TLD.

Even when renting a domain, you are not guaranteed the right to keep using it; thousands of UK-based businesses have had .eu domains stripped from them as a result of the UK leaving the EU.

Myth 2: There’s a Perfect Domain For Every Project

Domains do not have inherent value; they acquire value over time.

25 years ago, if you were building a search engine, the ‘perfect’ domain might have been search.com, find.com, or perhaps look.com — the particularly cynical might have opted for webads.com. You almost certainly wouldn’t have registered google.com because it says nothing about search.


google.com acquired its value through a simple, relentless branding strategy and a generous dollop of luck.

Any domain name can acquire value through longevity, SEO, and branding.

Myth 3: Your Domain Name Should Contain Keywords

If you’re at the point of registering a domain name, either your business is new, or your digital strategy is. In either case, you have hopefully carried out keyword research, but without a live site, your keyword research hasn’t been validated. In other words, you don’t know what your keywords are.

Even if you’re confident that you know exactly what your keywords should be at this time, your keywords may change. The pandemic has required most businesses to pivot to some degree. eatoutny.com isn’t much use if legal restrictions have forced you to switch to a delivery business — unless you’ve also registered eatinny.com.

Furthermore, in the area of ecommerce, customers tend to view keyword-heavy domain names as budget options because they are like generic-brand goods. It may be that your business will only ever be a budget option, but it’s not a wise business decision to restrict your options.

There is an SEO benefit to keywords in a domain, but it is minimal and will almost certainly vanish in the next few years — even for EMD (Exact Match Domains) — because it is too close to gaming the system.

Myth 4: You Don’t Need a .com

As frustrating as it may be to seek out a .com you’re happy with, nothing says “late to the party” like a .biz domain.

A .co extension is slightly better in some regions because the .co.** format is commonly used; .co.jp for example. However, .co tends to be typed as .com by users accustomed to the more common format.


It’s possible to opt for pun-based names using regionally specific TLDs like buy.it, or join.in. This kind of strategy will play havoc with your local search strategy because computers don’t understand puns; you’ll potentially do quite well in Italy or India, though.

If you’re registering a domain for a non-profit, then .org is perfectly acceptable. However, carefully consider whether a domain is worth the lost traffic if you can’t also register the .com (because people will type .com).

The one exception is industry-specific TLDs that communicate something about the domain’s contents to a target demographic. For example, .design is a great extension for designers, and .io is fine for an app if it targets developers (i.e., people who understand the joke). You should also register the .com if you can, and if you can’t, carefully consider whom you’re likely to be competing with for SERPs.

This is not to say that anything other than a .com is worthless, just worth less than the .com.

Myth 5: A Trademark Entitles You to Register a Domain

Trademark registration and domain registration are two entirely different processes, and one does not entitle you to the other. This has been legally challenged a few times and fails far more often than it succeeds.

Trademarks are rarely blanket registrations, which means the trademark owner needs to declare the industry in which it will operate; there was no enmity between Apple Inc. and Apple Corps Ltd. until the former moved into music publishing and no one could download the White Album onto their iPod.

There is, however, a limited value in registering a domain that has been trademarked elsewhere. Not least because you will be competing against their SEO, and if they’re big enough to trademark a name, they’ve probably grabbed the .com.

Myth 6: Premium Domains Are a Good Investment

Premium domains are domains that have been speculatively registered in the hope of attracting a huge resale fee. The process is commonly referred to as ‘domain squatting.’

Domain squatters bulk-register domains in the hope that one of them will be valuable to someone. As a result, they are forced to charge exorbitant fees to cover their losses; a premium domain will cost anything from 1000–100,000% of the actual registration cost.

Setting aside the cost — which would be better spent on marketing — premium domains often come with legacy issues, such as a troubled search engine history, that you do not want to inherit.

Myth 7: A Matching Handle Must be Available on Social Media

The business value of a social media account varies from company to company and from platform to platform. Even if it is valuable to you, numerous marketing strategies will accommodate a domain name: prepending with ‘use,’ or ‘get,’ or appending with ‘hq,’ for example.

More importantly, it’s unwise to allow a third-party to define your long-term brand identity; sure, Facebook is huge now, but then so was the T-Rex.

Myth 8: You Need a Domain Name

A domain name is an alias, nothing more. You don’t actually need a domain name — what you need is an IP address, which a domain name makes human-friendly.

Think of domain names as an accessibility issue; humans are less able to read IP addresses than computers, and domains bridge the gap. (See how helpful accessibility is?)

While a domain name is beneficial, question whether a sub-domain or even an IP address would do. Registering a domain is an exciting stage of a project that many people never get past, leaving themselves with a huge collection of domains that they pay an annual fee for, and never actually develop.

What Makes a Good Domain Name

Now we’ve dispelled some of the myths surrounding domain names, let’s look at the key characteristics shared by good domain names:

A Good Domain Name is Brandable

A brandable domain is non-generic. It’s the difference between a sticky-plaster and a band-aid. Unique is good, rare is acceptable, generic is a waste of money.

A Good Domain Name is Flexible

Keep it flexible. Don’t tie yourself to one market or one demographic. Your domain name needs to work now and fifty years in the future.

A Good Domain Name is Musical

Six to twelve characters and two to three syllables is the sweet spot. Names in that range have a musical rhythm our brains find easier to retain and recall.

A Good Domain Name is Phonetic

There are 44 word sounds in the English language. Other languages have similar totals. If your domain name is spelled the way it sounds, it will be easy to communicate.


The post 8 Domain Name Myths Every Web Designer Should Know first appeared on Webdesigner Depot.

Categories: Designing, Others

5 Tips for Propelling your Small Business Forward in an Uncertain Economy

March 8th, 2021

Business has always been a gamble, but finding ways to keep your company financially stable during a period of global economic hardship might feel like an impossible undertaking.

Small business owners are faced with restrictions, dwindling consumer bases, and reduced profits that threaten their entire livelihood on a daily basis; recovering from the sustained economic impact of the COVID-19 pandemic requires both personal and professional tactics. Taking care of yourself is just as important as tending to your business.

Self-care helps prevent burnout and ensures you can focus on your business’s growth without succumbing to overwhelming stress and anxiety. On the professional front, there are several things you can do to stimulate growth or, at the least, keep your revenue stable during these uncertain times.

The following suggestions are designed to help you build your own personal recovery strategy; they focus on the greatest challenges that confront small business owners during economic instability, namely return on investment, customer acquisition, and profitability. While it will take time for every company to find its place in the new landscape, planning ahead and integrating new strategies can reduce and possibly prevent significant loss.

Offer Greater Employee Flexibility

Many employees are scared of losing their jobs, and that fear can directly impact work performance. Your employees count on you to provide them with stability and assurance during periods of uncertainty. Accommodating them means investing in their lives and recognizing your role in their wellbeing. The Centers for Disease Control and Prevention outlines the roles of businesses and employees during the COVID-19 pandemic and offers advice on ways to mitigate changes efficiently and effectively.

The bottom line is that when your workers feel secure, they can perform better. Someone who is afraid of being laid off any minute may not be able to do their job as they need to. Anxiety and stress can contribute to a poor work environment that impacts everyone, including your customers. Taking care of your employees is crucial because it helps create a stable foundation for your entire organization during this time.

Forecast Potential Losses

It’s one thing to know that your company is losing money; it’s another entirely to predict which areas of operations will be hit the hardest in the coming months. When you are able to pinpoint specific problems facing your business, you can begin to come up with preventative solutions faster. Rather than solely reacting to loss or negative change, you can implement proactive strategies. Consider the following areas most likely to be hit by an economic downturn:

  • Fewer sales
  • Customer cancellations
  • Supply chain disruption
  • Reduced employee counts due to illness or personal loss

To combat reduced sales, invest in marketing and promotions, and even consider lowering prices. Cancellations might not be fully preventable if you offer services linked to your customers’ own work. However, if you open a line of communication, clients might be willing to negotiate their payments and stay onboard. Sometimes, clients simply don’t realize they have an alternative to their current arrangement, and they don’t feel comfortable asking for a lower price. Supply chain disruption can be sudden, so stay in touch with your manufacturers and other vendors. Is there a way to cut costs and acquire some supplies locally? A slightly higher cost for a closer manufacturer might be offset by an increase in sales due to greater availability and faster service.

Rebuild Your Website

Make sure that your website has a visible message about your company’s response to COVID-19 and any other hardships your customers might worry about. There should also be plenty of keyword optimization to make sure that your website is being shown to as many people in your target audience as possible. Keep in mind that in response to economic changes, business models must shift as well. This means that you need to update your site to reflect any changes, including the different needs of your audience.

Updating and strengthening your website and its content can have two benefits. First, you can position yourself as a confident, trustworthy figure amidst the uncertainty with SEO-friendly content. What are your consumers researching right now, and how can you tie that into your company’s solutions? Second, you may be able to leverage your updated site on social media to attract new customers. Consider offering more expansive delivery or pickup options as well. Traditional brick-and-mortar locations have taken to selling supplies and even offering curbside pickup for their goods to make shopping easier and safer for customers.

Get Your Personal Finances Sorted Out

When your business is at risk of losing a significant amount of money, you have to find as many ways to save as possible. This means turning to your own personal finances and looking for areas to cut back. Do you need to refinance a loan? Could you pay off any small debts to make room in your budget? If you find too many things are outstanding, a personal loan could help you get your finances in order. Personal loans can be in any amount you want, vary in interest rates, and have no obligation until you agree to the terms. You might be wondering why you would take out a loan if your company is suffering; although you aren’t using the borrowed funds for your company, they can help you cover the cost of bills, living expenses, and credit card payments to ensure you don’t fall behind if there’s a lapse in income.

Expand Your Services

If there is anything your company offers that could be provided remotely or adapted to a virtual model, now is the time to adjust. During any period of financial hardship, people will seek out solutions that are both affordable and accessible. The sooner you can meet a potential consumer’s need, the better, and nothing is faster these days than the internet.

Virtual assistance, either through video consultations or phone conferences, can supplement traditional face-to-face services. You can also look for ways to enhance a physical product’s value by offering clients access to online materials while they wait for the physical product to ship. Some audience research can help you get an idea of what your consumers need the most right now; adapting your model might not have been something you ever considered, but switching up your niche a bit can make all the difference.


Photo by Andrew Neel on Unsplash

Categories: Others

5 Ways to Be Generous with Your Email Subscribers

March 8th, 2021

“Tis better to give than receive.” You’ve heard this your whole life and now that you’re beginning to gather email subscribers or even if your list is already steadily humming, it’s wise to remember it.

Because email is one of the hottest marketing tools, it can be tempting to believe that the purpose is to “take.” I want you to stop and think again. Being generous with your email subscribers is a good idea and it’s the right thing to do.

Giving is not only a nice thing to do; it can also feel rewarding. It’s nice to help people out, but it’s also good business.

The brilliant psychologist and author Dr. Robert Cialdini says that one of the principles of persuasion is called reciprocity. People feel in some ways obligated to give back to those who give to them. A waiter leaving a mint receives, on average, a 3% increase in the tip. So what about leaving two mints for each guest? It goes up to a 14% increase!

It’s almost a universal law that if you are of service or have fulfilled a need, you’ll always be sought after. I want this idea to inspire a powerful question in your mind that you should approach positively. 

So how can you create something meaningful for your email subscribers?

Now, maybe there’s a way you can make it even more special? Can you personalize your gift? Or maybe you can follow up with them to further the connection?

I hope this article gets the creative juices flowing! It’s great to think outside the box, but to get you started here are 5 ways you can be generous with your email subscribers by giving something they’ll appreciate.

Give them something to listen to

The average worker gets 121 emails in their inbox every single day. That’s a lot on the eyes, on top of whatever else they may be reading.

Why not consider producing and recording something for your email subscribers to listen to: an audiobook, a seminar, a lecture, or other interesting material? With some creativity, you can create a download that is fun to listen to and informative.

Who knows? Maybe it will inspire your business to tap into new mediums like podcasting. Even if you don’t have any experience with audio, you can use freeware to create something compelling.

Think about what your readers would listen to while in the car or while running on the treadmill. That means the excitement should be palpable. You should try to be entertaining whenever possible. And remember that when it comes to audio, presentation is everything. 

Give them something they’ll actually read

Could you create a useful white paper or e-book for your email subscribers? Because you have no associated printing or mailing costs, it’s a way to offer something of real value that your subscribers won’t forget.

If you don’t think of yourself as a writer, try not to get hung up on labels. The best writers write a lot like someone talks. Some people feel they have to use big words to sound smart, but did you know it has the opposite effect on most readers?

Consider writing it yourself. Just try to envision your subscribers sitting in front of you and remember to keep that conversational tone. Who knows them better than you?

Writing an e-book also doesn’t mean you have to write a huge number of pages. It could even be as short as a few dozen pages. Perhaps you make them available at regular intervals, even numbering them as volumes. Of course, you should let your subscribers know when a new edition is coming out and how they can get their free copy. 

If you simply don’t think you can do it, consider enlisting someone to write it for you. And whether you’re an ace or novice writer, it also helps to get someone to proofread your book. The same goes for the emails you send out! An extra set of eyes can make all the difference.

Give them a helpful infographic

A useful infographic can feel like a lifesaver when you see it. It can take a complex piece of information and translate it into a more accessible medium for your audience.

Brainstorm something your email subscribers and people in your industry would find especially beneficial. Try to offer unique, helpful information in an easy-to-understand way. Also, don’t forget to include your website and/or logo. A fantastic infographic has the potential to be shared on social media or forwarded to friends, so try to present the information better than anybody else could.

If you don’t have experience with graphic design, check out Canva or Easil. They’re intuitive to use and even have infographic templates so you can start immediately. By the way, once you get comfortable making infographics, you can make it an ongoing gift to those who subscribe to your list.

Give them a deal

Everybody loves to save a little coin, including your email subscribers. Since they gave you permission to send them emails, you should view that as a privilege. 

Make being a subscriber a little sweeter by giving a discount, free shipping, or maybe a free item thrown in if they use a secret promo code. You should make the deal something people will really value. 

Finally, tell your readers that their friends can sign up and reap the same benefits they do. All they have to do is forward the email to someone who would appreciate it. That’s why it’s a good idea to include a link to the signup in every newsletter.

Give them a great newsletter!

This one may seem obvious, but it can’t be stressed enough. You should appreciate how big of a deal it is when someone gives you their email address. It’s one of the most valuable ways to communicate with someone. So don’t throw away the potential. Give them a great newsletter!

Some extra tips to improve your deliverability

Sending emails regularly is one of the best ways to help your deliverability. Internet service providers try to determine which senders are spamming and which are sending legitimate email, and they do this with a scoring system known as sender reputation.

So what do you do? Actually, send them the newsletters they signed up for in the first place! If you’re just starting a list, you must be consistent with sending. Don’t give up—stick to a schedule. Not sending emails for a time and then resuming looks suspicious because you’re behaving as a spammer does.

In your quest to give them a great email, please make sure they get it. Sending an informative, well-written email helps nobody if it goes directly to spam. One of the easiest ways to end up in the spam folder is to become negligent with your email hygiene.

I look at email hygiene as being somewhat similar to personal hygiene. It’s as much for you as it is for the other person. The fact is, people are going to change their email addresses. If you continue to send emails to inactive addresses, it’s going to hurt your sender reputation.

One of the cornerstones of email hygiene is email validation. A good email verification service will identify not only inactive email addresses but also catch-alls, spam traps, and role-based and disposable addresses. 

If you’ve never validated your list, the best thing you can do is verify it in bulk. From then on, you should verify the list a minimum of twice a year. To keep your list in an optimal state, a reputable email verifier can also provide an API that can verify the addresses of people subscribing via your signup form.
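If your verification service offers an API, you can screen each address the moment someone subscribes through your signup form. Here is a minimal sketch in TypeScript; the endpoint, query parameters, and response statuses are hypothetical stand-ins for whatever your chosen verifier actually documents.

```typescript
// Minimal sketch: check an address with an email verification API before
// adding it to your list. The endpoint, query parameters, and response
// shape below are hypothetical placeholders -- use the documented values
// from whichever verification service you subscribe to.

type VerifyResult = {
  // Typical statuses a verifier might return (names are illustrative).
  status: "valid" | "invalid" | "catch_all" | "spam_trap" | "role_based" | "disposable";
};

async function isSafeToAdd(email: string, apiKey: string): Promise<boolean> {
  const url =
    `https://api.example-verifier.com/v1/verify` +
    `?email=${encodeURIComponent(email)}&key=${encodeURIComponent(apiKey)}`;

  const response = await fetch(url);
  if (!response.ok) {
    // If the verifier is unreachable, fail open and let the address through;
    // the next bulk verification pass will catch it.
    return true;
  }

  const result = (await response.json()) as VerifyResult;
  // Only accept addresses the service reports as deliverable.
  return result.status === "valid";
}

// Example usage inside a signup handler:
async function handleSignup(email: string, addToList: (e: string) => Promise<void>) {
  if (await isSafeToAdd(email, "YOUR_API_KEY")) {
    await addToList(email);
  } else {
    console.warn(`Rejected risky address: ${email}`);
  }
}
```

Failing open when the verifier is unavailable is just one option; you could instead hold the address for review so nothing risky slips onto the list.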

You get what you put into it

There are two sides to the coin of having a great newsletter. The first is your commitment to making it the best it can be; the other is making sure it reaches the inboxes of those who asked to get it. 

Show your subscribers you value them by sending them something meaningful and not taking them for granted. This not only means using email validation but also showing your appreciation in any way you can.


Photo by Adam Solomon on Unsplash

Categories: Others Tags:

Tips for Hiring a Freelance Marketer to Improve Your Site

March 8th, 2021 No comments

Are you thinking of hiring a freelance marketer to improve your website? 

There are many reasons why outsourcing marketing work to a freelancer makes sense. Perhaps your in-house marketing team has more work than they can handle, or maybe you just need some essential marketing tasks completed. 

Whatever the reason, keeping a few handy tips in mind will not only ensure that you hire the right freelance marketer for the job, but also that you get the most out of the experience.

Read on to discover our top six tips for hiring a freelance marketer to improve your site.

1. Determine What Your Marketing Goals Are

Before you begin searching for a freelance marketer, it’s important to work out what your marketing goals are.

While it’s tempting to leave this task entirely for the freelancer to figure out, it’s helpful to at least establish a preliminary understanding of which marketing goals you wish to achieve.

After all, you know your business better than anyone—from its strengths to its pain points. You will also generally know which areas of your marketing need work, even if the first step is simply creating a clear marketing plan.

So why do you need marketing goals in the first place? They make it easier to track down a freelance marketer who has the skills and qualities to help you achieve them. 

If you’re unsure which marketing goals to aim for, All Points Digital recommends using the following online tools and assessments to pinpoint how your website marketing is faring:

By analyzing the results of these tools and assessments, you will gain the insight you need to create relevant and achievable marketing goals for your site. 

2. Figure Out Which Tasks to Outsource

Now that you have some general marketing goals for your site in mind, it’s time to work out which tasks you can outsource to a freelance marketer.

Don’t rush this step—the clearer you can make your future freelance marketer’s job description, the better. Clear responsibilities and expectations will minimize potential misunderstandings and ensure that you’re working towards the same goals.

If you have an in-house marketing team, it’s a good idea to bring them into the conversation.
Needless to say, you don’t want to step on someone’s toes by outsourcing part of their job! Ask the marketing manager as well as individual team members which tasks they need a hand with, whether due to a lack of time or specialized expertise. 

If you don’t have an in-house marketing team, you can base your decision around your marketing goals. It’s also helpful to learn about which marketing tasks you may need to attend to. QuickSprout’s guide to essential marketing tasks worth outsourcing is a great place to start. Remember, unless you have a limitless marketing budget to work with, you’ll need to focus on a few tasks to start with.

3. Set Your Budget

Looking for a freelance marketer without first setting a budget is a recipe for disaster. If you don’t have at least a ballpark figure in mind, you may end up hiring a freelance marketer who charges more than you can afford. Another advantage of setting a budget is that you’ll be able to further narrow your search based on different freelancing rates. 

Another financial consideration you should think about is how you will pay the freelancer. For example, it’s possible to pay freelancers: 

  • Hourly
  • Per project
  • Per batch of projects
  • On a retainer
  • Per word (for writing tasks)

You should also think about when you will pay them. For example:

  • After each project is approved
  • At the end of the month
  • Every fortnight

Keep in mind that some freelance marketers and freelance marketplaces will only accept specific payment plans. So if you have specific preferences for how and/or when you will pay your freelancer, make sure the freelancer also agrees to them.

4. Begin Your Search

It’s finally time to begin your search for a freelancer… or is it?

You may want to first jot down a few key qualities to look for in a freelancer before taking the plunge. Or, you can dive right in, as you’ll quickly gain an understanding of what you’re looking for after reviewing a few dozen freelance marketing profiles.

Don’t worry if you don’t know where to begin your search. There are plenty of options at your disposal, such as:

  • Searching freelance marketplaces
  • Browsing LinkedIn
  • Getting referrals

You can work out which freelance marketers may be right for you by reviewing their portfolios, reading testimonials, and taking a deep dive into their website or online profile. Remember to cross-reference your marketing goals to ensure the freelancers you have your eye on can fulfill them!

Once you have narrowed down a few potential freelancers, you may wish to give them a trial project or ask them some more questions. 

If you’re using a freelance marketplace, you will need to communicate with the freelancers within the platform’s interface. Otherwise, you can communicate via email or schedule a call. Most freelancers are happy to answer questions from potential clients, so make sure to ask any you have!

5. Create a Clear Brief for Your Freelancer

Congratulations! Hopefully, at this stage, you have hired your ideal freelance marketer. 

Before you leave them to their own devices, you’ll need to create a clear brief for them. Even the most skilled freelance marketers can only do so much if they’re unaware of what you want from them. 

A clear brief should communicate what their responsibilities are as well as your expectations. It should state key details such as the scope of work, deadline, number of revisions, etc.

Briefs can take a number of forms depending on what tasks you want the freelancer to work on. If you need some guidance for creating one, there are plenty of free templates available online, such as this brief template for freelance writing projects.

6. Set Your Relationship with Your Freelancer Up for Success

Set your relationship with your freelancer up for success by keeping communication open. Many business owners assume that managing freelancers will be just like managing their in-house team. While there are many parallels between the two, it pays to read up on some specific tips for effectively managing freelancers.

Remember, just like in-house employees, freelancers are able to produce their best work when they are supported to succeed. 

Conclusion

Hiring a freelance marketer to improve your site takes a bit of groundwork to get just right. By following the above tips, you’ll gain the know-how to hire a talented freelance marketer who harnesses their expertise to improve your site.  


Photo by Austin Distel on Unsplash

Categories: Others Tags:

9 Recommended WordPress Plugins (2021 updated)

March 4th, 2021 No comments

Whether you plan to add social media posts to a website, add a few colorful, informative, and responsive charts, or simply spice things up with a few carefully placed animations, you may need to enlist the aid of a plugin to do it well.

If you really want to take your site or business to the next level, it only makes sense to seek out one of the best WordPress plugins in a given category. That way, you won’t get just good results, but awesome results.

Which is presumably why you’re reading this article.

Since all useful WordPress plugins are obviously not created equal, you might want to do some comparisons to see which might perform best for you.

Or you could simply check out the following selection of 9 top WordPress plugins. Select one that provides the functionality you’re looking for, and get on with it.

You will not be disappointed.

  1. Amelia WordPress Booking Plugin

Automated processes can save time, minimize or eliminate stress, give you error-free performance, and make all interested parties happier than was previously the case.

At least, that is what Amelia does for fitness centers, beauty parlors, training centers, consulting firms, photographers, and other businesses; more than 30,000 businesses already benefit from Amelia’s scheduling.

  • Amelia automates the appointment booking process, while at the same time giving clients and employees full control over their respective actions.
  • Clients can make, change, or cancel appointments 24/7.
  • Appointments are only made at times that are convenient for the parties concerned (usually a client and an employee).
  • Selling packages of appointments for a single price is possible.
  • There is no limit to the number of clients, the number of appointments that can be booked, or the number of employees, plus Amelia can serve multiple locations.
  • This Enterprise-Level booking manager can schedule events as well as appointments.
  • Business owners can check overall status at any time. Clients do not have to log in to WordPress to cancel or reschedule an appointment.

Click on the banner to learn more.

  2. wpDataTables

wpDataTables is a WordPress table and chart plugin that gives its users the easiest way to create responsive tables and charts from a variety of sources and in a variety of formats.

wpDataTables –

  • creates simple tables, data tables, and 3 types of charts
  • easily manages massive amounts of data and organizes the data into intuitive and interactive tables and charts
  • accepts data from Google spreadsheets, Excel files, CSV files, and other sources
  • generates real-time data directly from MySQL
  • features conditional formatting techniques for highlighting and color-coding key data

While wpDataTables can be used by anyone and in any industry, this popular plugin has proven to be particularly useful when working with financial statistics, operational statistics, large product inventories, complex analyses, and comparisons.

Click on the banner to review the full range of wpDataTables’ features and capabilities. 

  3. WPC Product Bundles for WooCommerce

Cross-selling is an established marketing practice in which products from different product lines are combined or bundled. It can be difficult to pull off without a well-planned system, which is where WPC Product Bundles comes in, handling promotional offers, stock management, and order packaging to keep the ball rolling.

Here is what WPC Product Bundles can bring to your business –

  • Combine simple products, variable products, selected variations of a product to form a bundle (e.g., combine a t-shirt with jeans and shoes)
  • Display bundled products with an appealing interface of your preference: ddSlick, Select2, HTML tags, or radio buttons
  • Use drag and drop to rearrange the order of bundled products
  • Auto-calculate or manually set regular and sale prices
  • Smartly manage inventory, tax, shipping charges, and order invoice

Plus, WPC Product Bundles can work with many other WPC plugins (e.g., Smart Wishlist, Quick View, and Compare) to strengthen the user experience and boost sales.

Click on the banner to learn more about the benefits of this top-rated product bundling plugin. 

  4. Slider Revolution

Slider Revolution’s new template library isn’t just for building sliders. It’s also for creating stunning and responsive hero sections and other web page sections and content elements.

  • Slider Revolution’s drag and drop intuitive editor will save you hours of work on every project
  • Everything you need is there for creating jaw-dropping designs
  • Royalty-free background images, videos, font icons, and more are at your fingertips

Click to learn more about this amazing plugin. 

  5. LayerSlider

The name LayerSlider doesn’t come close to describing what this amazing WordPress builder is capable of.

LayerSlider –

  • Is a multi-purpose tool for animation and content creation
  • Is perfect for giving old and run-of-the-mill websites a new lease on life
  • Can be used to create engaging popups you can use to display important messages or store offers
  • Does not require coding. It’s all drag and drop.

Click to find out even more.

  6. Logic Hop – Content Personalization for WordPress

Personalization is more important than ever. Without it, your site shows the same message to every visitor. Every time they visit…

Logic Hop shows the right message to the right person and helps you increase conversions. Features like geolocation, dynamic text replacement, and integrations with WooCommerce and Gravity Forms make Logic Hop the best personalization tool on the market.

Try Logic Hop and see what personalization can do for you. 

  7. Heroic Inbox

Instead of relying on a slapdash method of trying to manage a customer support staff’s email inbox, Heroic Inbox makes it much easier for staff members to work together.

  • With a snappy UI and fast workflows to work with, your support staff can quickly achieve Inbox Zero status and maintain it
  • Heroic Inbox tracks the key metrics involved in inbox management, so you can always see how the staff is performing

  8. Ads Pro Plugin – Multi-Purpose WordPress Advertising Manager

“More and Better” is always a nice place to be, and Ads Pro, the best ad manager for WordPress, puts your ad management operation firmly in that place with its –

  • 20+ ad display techniques
  • 25+ ready-to-use user friendly and responsive ad templates
  • Intuitive Frontend User Panel and Order Form and Statistics Module
  • 4 Payment methods and 3 Billing modules

Click on the banner to learn more about how Ads Pro can help you. 

  9. Flow-Flow Social Feed

Flow-Flow is a friendly, fast, and powerful way to customize the design and behavior of your site’s social media feed.

  • You can add as many social feeds to a stream as you need
  • Flow-Flow is responsive, highly customizable, and no coding is required to set it up
  • A free Lite test drive version is available

Flow-Flow has been Envato’s best-selling social media plugin since 2014.

 *****

WordPress plugins are great tools for adding and extending functionality on WordPress sites. To get the greatest value for your money, we recommend that you always set your sights on getting the best plugin in its category.

As you can see from the above selection, best-in-class useful WordPress plugins can be reasonably priced. Any one of these is capable of turning a website into a conversion and money-making machine.

These essential plugins for WordPress are easy to set up and work with, and were designed to make life easy for WordPress website administrators.

Read More at 9 Recommended WordPress Plugins (2021 updated)

Categories: Designing, Others Tags:

How to Set Up Your WFH Space

March 4th, 2021 No comments

The pandemic changed the way we work. Office doors closed and workers shifted to working from home.

The truth is, even when the pandemic is over, many people will continue working out of their homes. For this reason, having a home office that is comfortable, fits all your needs, and spurs productivity is important. 

We know you don’t want to constantly hear background noise on your video conference calls, or have your video feed drop in your meeting because of unstable internet. That’s why we’ve put together a few tips to help you create the best space, for your best remote work.

Source: webex.com
Categories: Others Tags: