Archive

Archive for September, 2020

Top 10 Tools That Every React Native Developer Should Use in 2020!

September 17th, 2020

There are numerous libraries as well as frameworks available to build cross-platform mobile applications.

However, React Native has always been given special preference because of its speed and efficiency. If you have a look at Google Trends, you will be surprised to know that React Native has itself become a trendsetter for developing powerful mobile applications.

React Native is growing greatly in popularity among developers, thanks in part to the number of tools that are available. Whether you are developing a new React Native application or optimizing the performance of an existing one, these tools will help you develop a robust as well as scalable React Native application.

Let’s take a look at some of the tools that will help you optimize your React Native app performance.

1. Nuclide

Nuclide is a package that is integrated with the Atom editor. It is one of the most customizable IDEs and helps developers improve their code. It also provides various other features like context view, working sets, quick open, health statistics, a debugger, a task runner, remote development, and many others to make an application highly robust and scalable.

2. Expo

It is an open-source toolchain that developers can use to create applications for Android and iOS with the React Native framework. It offers a range of services like a Slack community, development tools, thorough documentation, and community forums. It works on a serve, share, build, and publish principle. It can be used to access cross-platform APIs, a single codebase runs across multiple platforms, and device APIs such as the accelerometer are available out of the box.

3. Bit for React Native

If you are planning to build a next-generation React Native component library for a team, Bit for React Native is highly recommended. It eliminates the problem of sharing and collaborating on user interface components across multiple repositories. It is a great way to compose UIs with shared components. The bit.dev platform can be used to host and organize components, and your team can conveniently render and update components from any project in bit.dev. Several features, like a rendered sandbox and auto-generated component docs, are also available. It is indeed a great tool that every React Native developer should know about in 2020.

4. Reactotron

Reactotron is a tool developed by Infinite Red and is available as a free download. This cross-platform desktop application works with all leading operating systems, namely Windows, macOS, and Linux. The tool provides real-time tracking that helps developers keep a watch on dispatched actions, application state, and a lot more.

5. Atom

Atom is a cross-platform editing tool developed by GitHub; its text editor works efficiently on macOS, Linux, and Windows. It also comes with other useful features like a built-in package manager, different themes and designs, smart autocompletion, and a lot more. Atom also provides multiple panes to split the interface, which makes working easier for developers. It is also very easy to browse and open single files, multiple files, or whole projects.

6. Vasern

Vasern is a comparatively new tool, yet it is one of the most powerful data storage solutions for React Native. Vasern greatly enhances React Native’s storage capabilities. It is currently in alpha and supports the React Native framework on the iOS platform. Vasern is a lightweight, open-source data store that can set up a local database in just a few minutes. It is still under active development, but it is already one of the top tools for React Native developers, especially those working on iOS, in 2020.

7. Detox

Detox is one of the best testing tools for React Native applications. Developed by Wix, it provides end-to-end testing for mobile apps and is creating waves amongst developers. The framework is not only cross-platform but also supports both React Native projects and purely native Android and iOS projects. Detox supports unit tests for smaller modular pieces of code, as well as integration and regression tests that re-run previously tested code to confirm it still behaves the same after changes are made. Detox is one of the leading tools for React Native developers to use in 2020.

8. Xcode

Xcode is Apple’s development environment and is used for building iOS and macOS apps, among others. It is probably one of the best tools to bulletproof React Native apps developed for iOS. Xcode can be used for building apps as well as for debugging and installing various iOS packages. It also enables custom build configurations and is great for developers working on Apple platforms.

9. React-native CLI

The React Native CLI is the official command line interface for React Native. The tool is great for automating various tasks like linking, starting the packager, and building apps. It also supports custom configurations and can be used for many day-to-day development tasks.

10. Sublime Text

Sublime Text is a text editor equipped with plenty of plug-ins developed by its community. It supports various programming languages and is well known for handling code, markup, and prose. Sublime Text offers functionality that helps developers create impressive cross-platform mobile applications. It is a source code editor with a Python API, and it supports simultaneous (multi-cursor) editing. It is one of the top tools for React Native developers to use in 2020.

Final Words

I hope the purpose of your landing on this blog post has been served. These are some of the best React Native tools that all React Native developers should be aware of in 2020 to accelerate mobile app development. Which tools you use depends entirely on your preferences, but these tools are sure to make an impact thanks to the functionality they offer.

If you are not sure how to use these tools, I would suggest hiring React Native developers from a prominent React Native development company that has a team of developers well versed in React Native and its tooling.


Photo by Clément H on Unsplash

Categories: Others

Weaved Webs

September 16th, 2020

There is a bit of an irony with Jamstack.

The concept is simple: you put pre-rendered, static files on web hosting (a CDN) designed to do that well. That’s it. If you need to do more, anything you do from there is done with client-side JavaScript, which is likely talking to serverless functions because that’s the spiritual partner to Jamstack on the back end. I heard Guillermo Rauch say at Smashing Conf the other day that it isn’t exactly a “stack” in that it’s almost entirely non-prescriptive in what you do. While I like the word Jamstack, that also feels fair.

The irony is that while the concept is simple, that simplicity can be the cause of complexity.

Netlify, the company largely behind Jamstack, knows this. They know that without a back-end server with back-end languages, something like a basic contact form gets complicated. Instead of being in no-brainer solved-problem territory, we have to figure out another way to process that form. So, they solve that problem for you (among others, like auth and serverless functions). But there are tons of other companies that want to be that cog in your machine.

That’s just one potential complication. What do you use for a CMS or other data storage? What is your build process like? How do you see previews of content changes? How do you do auth? What if you need some fancy calendar widget? What if you want to sell something? Anything a website can do, Jamstack has an answer for — it’s just that combining all those answers can feel disjointed and potentially confusing.

Dave recently played with Eleventy + Tailwind + Netlify CMS (which is Jamstack-y) and said it felt like cattle herding:

So my little mashup, which was supposed to be just 3 technologies ended up exposing me to ~20 different technologies and had me digging into nth-level dependency source code after midnight. If there’s an allegory for what I don’t like about modern-day web development, this is it. You want to use three tools, but you have to know how to use twenty tools instead. If modules and components are like LEGO, then this is dumping out the entire bin on the floor just to find one tiny piece you need.

“The tangled webs we weave,” indeed.

In a conversation between Richard MacManus and Matt Mullenweg¹, Richard quotes Matt:

“You can patch together a dozen services, each with its own account and billing, for hundreds of dollars a month, to get a similar result you’d have for a few dollars a month using WordPress on shared hosting,” he said. “And it would be more fragile, because the chain is only as strong as its weakest link. You are chaining together different toolsets, logins, billing, hosting… any part of it going down can break the entire flow.”

If I was considering Jamstack for a particular project, and the grand total really was twelve services, I probably would reconsider, particularly if I could reach for a tool like WordPress and bring it down to one. There are plenty of other fair criticisms of Jamstack, particularly since it is early-days. The story of “CMS with Preview” isn’t particularly great, for example, which is a feature you don’t even think about with WordPress because, duh, obviously it has that.

And Jamstack can do some things that are very ahead of the game that I cherish. Git-based deployment? All websites should have that. Previews of my pull requests? Hot damn. Sub-100-millisecond first requests? Yes please. Not having to diddle with cache? Sweet. Catch up, other stacks.

I’m saying there are baby bear choices to be made here. You get there by doing what you’re probably already doing anyway: putting your adult pants on, thinking about what your project needs, and choosing the best option.

I have production WordPress sites. Like this one! It’s great!

I have production Jamstack sites. Like this one! It’s not a complicated web of services. It’s a static site generator with content in the GitHub repo deployed with Netlify. While CSS-Tricks can do about 100 things that this site can’t, it has a few tricks up its sleeve that CSS-Tricks can’t do, like accept pull requests on content.

I feel like I’ve chosen pretty well in all my cases.

  1. While Matt is clearly incentivized to defend the WordPress approach, it feels to me the opinions here are genuine, in part because Automattic invests in alternative stack approaches, and because WordPress and Jamstack are not mutually exclusive. I enjoyed responses to this, like Ohad Eder-Pressman’s open letter, which is also full of incentivized-but-genuine thoughts.

The post Weaved Webs appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

Categories: Designing, Others

Lessons Learned from Sixty Days of Re-Animating Zombies with Hand-Coded CSS

September 16th, 2020

Caution: Terrible sense of humor ahead. We’ll talk about practical stuff, but the examples pretty much all involve zombies and silly jokes. You have been warned.

I’ll be linking to individual Pens as I discuss the lessons I learned, but if you’d like to get a sense of the entire project, check out 60 days of Animation on Undead Institute. I started this project to end on August 1st, 2020, coinciding with the publication of a book I wrote featuring CSS animation, humor, and zombies — because, obviously, zombies will destroy the world if you don’t brandish your web skills and stop the apocalypse. Nothing puts the hurt on the horde like an HTML element on the move!

I had a few rules for myself throughout the project.

  1. I would hand-code all CSS. (I’m a masochist.)
  2. The user would initiate all of the animation. (I hate coming upon an animation that’s already halfway through.)
  3. I would use JavaScript as little as possible and never for animation. (I only ended up using JavaScript once, and that was to start audio with the final animation. I have nothing against JavaScript, it’s just not what I wanted to do here.)

Lesson 1: Eighty days is a long time.

Uh, doesn’t the title say “sixty” days? Yes, but my original goal was to do eighty days and as day one approached with less than twenty animations prepared and a three day average for each production, I freaked out and switched to sixty days. That gave me both twenty more days till the beginning date and twenty fewer pieces to do.

Lesson 1A: Sixty days is still a long time.

That’s a lot of animation to do with a limited amount of time, ideas, and even more limited artistic skills. And while I thought of dropping to thirty days, I’m glad I didn’t. Sixty days stretched me and forced me to go deeper into how CSS animation — and by extension, CSS itself — works. I’m also proudest of many of the later pieces I did as my skills increased, and I had to be more innovative and think harder about how to make things interesting. Once you’ve used all the easy options, the actual work and best results begin. (And yes, it ended up being sixty-two days because I started on June 1 and wanted to do a final animation on August 1. Starting June 3 just felt icky and wrong.)

So, the real Lesson 1: stretch yourself.

Lesson 2: Interactive animations are hard, and even harder to make responsive.

If you want something to fly across the screen and connect with another element or appear to start another element’s move, you must use either all standard, inflexible units or all flexible units.

Three variables determine when and where an animated element will be during any animation: duration, velocity, and distance. The duration of the animation is set in the animation property and cannot be changed in relation to screen size. The animation timing function determines the velocity; screen size can’t change that either. Thus, if the distance varies with the screen size, the timing will be off everywhere except a specific screen width and height.

Look at Tank!. Run the animation at wide and narrow screen sizes. While I got the timing close, if you compare the two, you’ll see that the tank is in a different place relative to the zombies when the last zombies fall.

Showing the same brown tank, side by side, where the tank on the left is further along than the tank on the right.

To avoid these timing issues, you can use fixed units and a large number, like 2000 or 5000 pixels or more, so that the animation will cover the width (or height) of the screen for all but the largest monitors.
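
Here’s a minimal sketch of what that fixed-unit approach can look like (the .tank class, the distances, and the duration are all made up for illustration, and it borrows the checkbox-toggle pattern used throughout the project):

.tank {
  width: 300px;
}

.toggle-checkbox:checked ~ .tank {
  animation: 8s linear 0s rumble forwards;
}

@keyframes rumble {
  0% {
    transform: translateX(-300px); /* start just off-screen to the left */
  }
  100% {
    transform: translateX(3000px); /* well past the right edge on most screens */
  }
}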

Lesson 3: If you want a responsive animation, put everything in (one of the) viewport units.

Going halfsies on unit proportions (e.g. setting width and height in pixels, but location and movement with viewport units) will lead to unpredictable results. Don’t use both vw and vh either; pick one or the other, whichever will be the dominant orientation. Mixing vh and vw units will make your animation go “wonky,” which I believe is the technical term.
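
As a rough sketch (again with made-up class names and numbers), keeping the size, start, and end positions all in vw means everything scales together:

.zombie {
  width: 15vw;
}

.toggle-checkbox:checked ~ .zombie {
  animation: 4s ease-in 0s shamble forwards;
}

@keyframes shamble {
  0% {
    transform: translateX(0);
  }
  100% {
    transform: translateX(85vw); /* 100vw minus the 15vw width */
  }
}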

Take Superbly Zomborrific, for instance. It mixes pixel, vw, and vh units. The premise is that the Super Zombie is flying upward as the “camera” follows. Super Zombie smashes into a ledge and falls as the camera continues, but you wouldn’t understand that if your screen was sufficiently tall.

Two animation frames, side by side where the left shows the flying green zombie hitting a building ceiling and the right shows the zombie leaving the frame after impact.

That also means that if you need something to come in from the top — like I did in Nobody Here But Us Humans — you must set the vw height high enough to ensure that the ninja zombie isn’t visible at most aspect ratios.

Lesson 3A: Use pixel units for movements within an SVG element.

All that said, transforming elements within an SVG element should not use viewport units. SVG tags are their own proportional universe. The SVG “pixel” will stay proportional within the SVG element to all the other SVG element children while viewport units will not. So transform with pixel units within an SVG element, but use viewport units everywhere else.
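
Here’s a hedged sketch of that split, assuming an inline SVG with a child path that has an id of arm (both names are hypothetical): the SVG element itself moves around the page in viewport units, while the path inside it animates in the SVG’s own pixel coordinates.

/* The SVG element moves across the page in viewport units. */
svg.zombie {
  width: 40vw;
}

.toggle-checkbox:checked ~ svg.zombie {
  animation: 3s linear 0s drift forwards;
}

@keyframes drift {
  100% {
    transform: translateX(60vw);
  }
}

/* The path inside the SVG animates in the SVG's own pixel coordinates. */
.toggle-checkbox:checked ~ svg.zombie #arm {
  animation: 3s ease-in-out 0s wave infinite;
}

@keyframes wave {
  50% {
    transform: translateY(-20px);
  }
}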

Lesson 4: SVGs scale horribly at runtime.

For animations, like Oops…, I made the SVG image of the zombie scale up to five times his size, but that makes the edges fuzzy. [Shakes fist at “scalable” vector graphics.]

/* Original code resulting in fuzzy edges */
.zombie {
  transform: scale(1);
  width: 15vw;
}

.toggle-checkbox:checked ~ .zombie {
  animation: 5s ease-in-out 0s reverseshrinkydink forwards;
}

@keyframes reverseshrinkydink {
  0% {
    transform: scale(1);
  }
  100% {
    transform: scale(5);
  }
}

I learned to set their dimensions to the final dimensions that would be in effect at the end of the animation, then use a scale transform to shrink them down to the size for the start of the animation.

/* Revised code */
.zombie {
  transform: scale(0.2);
  width: 75vw;
}

.toggle-checkbox:checked ~ .zombie {
  animation: 5s ease-in-out 0s reverseshrinkydink forwards;
}

@keyframes reverseshrinkydink {
  0% {
    transform: scale(0.2);
  }
  100% {
    transform: scale(1);
  }
}

In short, the revised code moves from a scaled-down version of the image up to the full width and height. The browser always renders at 1, making the edges crisp and clean at a scale of 1. So instead of scaling from 1 to 5, I scaled from 0.2 to 1.

The same animation frame of a scientist holding a coffee mug standing to the left of a growing zombie where the frame on the left shows the zombie with blurry edges and the frame on the right is clear.

Lesson 5: The axis isn’t a universal truth.

An element’s axes stay in sync with the element, not the page. A 90-degree rotation before a translateX will change the direction of the translateX from horizontal to vertical. In Nobody Here But Us Humans… 2, I flipped the zombies using a 180-degree rotation. But positive Y values move the ninjas towards the top, and negative ones move them towards the bottom (the opposite of normal). Beware of how a rotation may affect transforms further down the line.

Showing the main character facing us in the foreground with 7 ninja characters hanging upside down from the ceiling against a light pink background.
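
For example (with hypothetical class names, and the checkbox-toggle pattern from the other demos), once a 180-degree rotation is applied first, a negative translateY moves the flipped ninja down the page rather than up:

.ninja {
  transform: rotate(180deg); /* hang the ninja upside down */
}

.toggle-checkbox:checked ~ .ninja {
  animation: 2s ease-out 0s drop forwards;
}

@keyframes drop {
  100% {
    /* the element's Y axis is flipped, so a negative value
       moves it toward the bottom of the screen */
    transform: rotate(180deg) translateY(-60vh);
  }
}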

Lesson 6. Separate complex animations into concentric elements to make easier adjustments.

When creating a complex animation that moves in multiple directions, adding wrapper divs, or rather parent elements, and animating each one individually will cut down on conflicting transforms, and prevent you from becoming a weepy mess.

For instance, in Space Cadet, I had three different transforms going on. The first is the zomb-o-naut’s moving in an up and down motion. The second is a movement across the screen. The third is a rotation. Rather than trying to do everything in a single transform, I added two wrapping elements and did one animation on each element (I also saved my hair… at least some of it.) This helped avoid the axis issues discussed in the last lesson because I performed the rotation on the innermost element, leaving its parent’s and grandparent’s axes in place.
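
A rough sketch of that nesting, assuming markup where a .flight-path wrapper contains a .bob wrapper which contains the .zombonaut itself (all made-up names), so each element owns exactly one transform:

/* Outermost wrapper: the long drift across the screen. */
.toggle-checkbox:checked ~ .flight-path {
  animation: 10s linear 0s cross forwards;
}
@keyframes cross {
  100% { transform: translateX(80vw); }
}

/* Middle wrapper: the up-and-down bobbing (in vw on purpose, per Lesson 3). */
.toggle-checkbox:checked ~ .flight-path .bob {
  animation: 2s ease-in-out 0s bobbing infinite alternate;
}
@keyframes bobbing {
  100% { transform: translateY(-10vw); }
}

/* Innermost element: the rotation, so it never skews the axes
   that the two outer animations rely on. */
.toggle-checkbox:checked ~ .flight-path .zombonaut {
  animation: 4s linear 0s spin infinite;
}
@keyframes spin {
  100% { transform: rotate(360deg); }
}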

Lesson 7: SVG and CSS transforms are the same.

Some paths and groups and other SVG elements will already have transforms defined on them. It could be from an optimization algorithm, or perhaps it’s just how the illustration software generates the code. If a path, group, or whatever element in an SVG already has an SVG transform on it, removing that transform will reset the element, often to a bizarre location or size compared to the rest of the drawing.

Since SVG and CSS transforms are the same, any CSS transform you do replaces the SVG transform, meaning your CSS transform will start from that bizarre location or size rather than the location or size that is set in the SVG.

You can copy the transform from the SVG element to your CSS and set it as the starting position in CSS (updating it to the CSS syntax first, of course). You can then modify it in your CSS animation.

For instance, in Uhhh, Yeah…, my tribute to Office Space, Undead Lumbergh’s right upper arm (the #arm2 element) had a transform on it in the original SVG code.

<path id="arm2" fill="#91c1a3" fill-rule="nonzero" d="M0 171h9v9H0z" transform="translate(0 -343) scale(4 3.55)"/>

A side by side comparison of a zombie dressed in a blue button-up shirt and black suspenders while holding a coffee cup. On the left, the arm holding the coffee mug is in the correct position, but the right shows the arm detached from the body.

Moving that transform to CSS like this:

<path id="arm2" fill="#91c1a3" fill-rule="nonzero" d="M0 171h9v9H0z"/>

#arm2 {
  transform: translate(0, -343px) scale(4, 3.55);
}

…I could then create an animation that doesn’t accidentally reset the location and scale:

.toggle-checkbox:checked ~ .z #arm2 { 
  animation: 6s ease-in-out 0.15s arm2move forwards;
}

@keyframes arm2move {
  0%, 100% {
    transform: translate(0, -343px) scale(4, 3.55);
  }
  40%, 60% {
    transform: translate(0, -403px) scale(4, 3.55);
  }
  50% {
    transform: translate(0, -408px) scale(4, 3.55);
  }
} 

This process is harder when the tool generating the SVG code attempts to “simplify” the transform into a matrix. While you can recreate the matrix transform by copying it into the CSS, it is a difficult task to do. You’re a better developer than me — which might be true anyway — if you can take a matrix transform and manipulate it to scale, rotate, or translate in the exact way you want.

Alternatively, you can recreate the matrix transform using translation, rotation, and scaling, but if the path is complex, the likelihood that you can recreate it in a timely manner without finding yourself in a straitjacket is low.

The last and probably easiest option is to wrap the element in a group (<g>) tag. Add a class or ID to it for easy CSS access and transform the group itself, thus separating out the transforms as discussed in the last lesson.

Lesson 8: Keep your sanity by using transform-origin when transforming part of an SVG

The CSS transform-origin property moves the point around which the transform happens. If you’re trying to rotate an arm — like I did in Clubbin’ It — your animation will look more natural if you rotate the arm from the center of the shoulder, but that path’s natural transform origin is in the upper-left. Use transform-origin to fix this for a smoother, more natural feel… you know, that really natural pixel art look…

Four sequential frames of an animation showing a caveman character facing left, holding a large wooden club, and raising it up from the bottom to behind his head.

Moving the transform origin can also be useful when scaling, like I did in Mustachioed Oops, or when rotating mouth movements, such as the dinosaur’s jaw in Super Tasty. If you don’t change the origin, the transforms will use an origin point at the upper left corner of the SVG element.
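
As a small, hypothetical example (the selector and coordinates are invented), moving the origin to roughly the shoulder joint before rotating an arm looks something like this:

#club-arm {
  /* the default origin is the upper left of the SVG element;
     move it to roughly the shoulder joint, in SVG pixels */
  transform-origin: 120px 80px;
}

.toggle-checkbox:checked ~ .caveman #club-arm {
  animation: 1.5s ease-in-out 0s swing infinite alternate;
}

@keyframes swing {
  100% {
    transform: rotate(-120deg);
  }
}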

Lesson 9: Sprite animations can be responsive

I ended up doing a lot of sprite animations for this project (i.e., where you use multiple, incremental frames and switch between them fast enough that the characters seem to move). I created the images in one wide file, added them as a background image to an element the size of a single frame, used background-size to set the background image to the width of the file, and hid the overflow. Then I used background-position and the steps() animation timing function to walk through the images; for example: Post-Apocalyptic Celebrations.
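
The core of the technique looks roughly like this (the file name, frame size, and frame count are placeholders): the element is the size of a single frame, the whole sheet is its background image, and steps() jumps between frames instead of sliding through them.

/* An 8-frame sprite sheet; each frame is 200px wide and 300px tall. */
.zombie-sprite {
  width: 200px;
  height: 300px;
  background-image: url("zombie-party.svg");
  background-size: 1600px 300px; /* 8 frames x 200px */
  background-position: 0 0;
}

.toggle-checkbox:checked ~ .zombie-sprite {
  animation: 1s steps(8) 0s shimmy infinite;
}

@keyframes shimmy {
  100% {
    background-position: -1600px 0; /* steps(8) lands on each 200px frame boundary */
  }
}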

Before the project, I always used inflexible images. I’d scale things down a little so that there would be at least a little responsive give, but I didn’t think you could make it a fully flexible width. However, if you use SVG as the background image you can then use viewport units to scale the element along with the changing screen size. The only problem is the background position. However, if you use viewport units for that, it will stay in sync. Check that out in Finally, Alone with my Sandwich…

Lesson 9A: Use viewport units to set the background size of an image when creating responsive sprite animation

As I’ve learned throughout this project, using a single type of unit is almost always the way to go. Initially, I’d set my sprite’s background size using percentages. The math was easy (100% * (number of steps + 1)) and it worked fine in most cases. In longer animations, however, the exact frame tracking could be off and parts of the wrong sprite frame might display. The problem grows as more frames are added to the sprite.

I’m not sure the exact reason this causes an issue, but I believe it’s because of rounding errors that compound over the length of the sprite sheet (the amount of the shift increases with the number of frames).

For my final animation, It Ain’t Over Till the Zombie Sings, I had a dinosaur open his mouth to reveal a zombie Viking singing (while lasers fired in the background plus there was dancing, accordions playing and zombies fired from cannons, of course). Yeah, I know how to throw a party… a nerd party.

The dinosaur and Viking sequence was one of the longest sprite animations I did for the project. But when I used percentages to set the background size, the tracking would be off at certain sizes in Safari. By the end of the animation, part of the dinosaur’s nose from a different frame would appear to the right and a similar part of the nose would be missing on the left.

A large green dinosaur behind a crowd of people, all facing and looking forward.
The dinosaur on the left is missing part of his left cheek and growing a new one next to his right cheek.

This was super frustrating to diagnose because it seemed to work fine in Chrome and I’d think I fixed it in Safari only to look at a slightly different screen size and see the frame off again. However, if I used consistent units — i.e. vw for background-size, frame width, and background-position — everything worked fine. Again, it comes down to working with consistent units!
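
That consistent-unit version looks roughly like this (the numbers are again made up): every measurement is in vw so the frame width, sheet width, and position shifts all stay in sync at any screen size.

.zombie-sprite {
  width: 20vw;
  height: 30vw;
  background-image: url("zombie-party.svg");
  background-size: 160vw 30vw; /* 8 frames x 20vw */
}

.toggle-checkbox:checked ~ .zombie-sprite {
  animation: 1s steps(8) 0s shimmy infinite;
}

@keyframes shimmy {
  100% {
    background-position: -160vw 0;
  }
}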

Lesson 10: Invite people into the project.

A crowd of 32 pixel-art characters from the previous demos facing the screen.

While I learned tons of things during this process, I beat my head against the wall for most of it (often until the wall broke or my head did… I can’t tell). While that’s one way to do it, even if you’re hard-headed, you’ll still end up with a headache. Invite others into your project, be it to give advice, point out an obvious blind spot you missed, provide feedback, help with the project, or simply encourage you to keep going when the scope is stupidly and arbitrarily large.

So let me put this lesson into practice. What are your thoughts? How will you stop the zombie hordes with CSS animation? What stupidly and arbitrarily large project will you take on to stretch yourself?


The post Lessons Learned from Sixty Days of Re-Animating Zombies with Hand-Coded CSS appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

Categories: Designing, Others

Inclusive Design 24

September 16th, 2020

Totally free.

No sign-up. No registration. All sessions are streamed live and publicly on the Inclusive Design 24 YouTube channel – see the entire playlist for the event.

Quite the lineup.

I’ve got a couple of other accessibility links burning a hole in my pocket as well:

Direct Link to ArticlePermalink


The post Inclusive Design 24 appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

Categories: Designing, Others

Editing HTML Like A Boss In VS Code

September 16th, 2020

Here’s a seven minute video from Caleb Porzio that focuses on some of Emmet‘s HTML editing features. You might think of Emmet as that thing that expands abbreviations like table.stats>tr*3>td*3 into glorious, expanded, and perfect HTML. But Emmet has other HTML editing trickery up its sleeve. My favorite is “wrap with abbreviation” (which happens to be Cmd/Ctrl + Shift + A on CodePen), but there are more, like expanding your selection inward and outward and tag changing.
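
For reference, that abbreviation expands into a table shell along these lines:

<table class="stats">
  <tr>
    <td></td>
    <td></td>
    <td></td>
  </tr>
  <tr>
    <td></td>
    <td></td>
    <td></td>
  </tr>
  <tr>
    <td></td>
    <td></td>
    <td></td>
  </tr>
</table>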

If you haven’t seen it, the Emmet 2 preview on CodePen is pretty neeeeat. It shows you what you’re about to expand into before you do it:

Direct Link to ArticlePermalink


The post Editing HTML Like A Boss In VS Code appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

Categories: Designing, Others

10 Open Source Fonts That Are Actually Amazing | Free Fonts 2020

September 16th, 2020

There’s nothing like creating an amazing design that your client loves, while also saving some money in the process.

You don’t have to compromise on the quality of the fonts you choose, just because you’re not paying the big bucks for them.

Better yet, why pay for fonts at all when there are some really great ones out there that are ready for you to use for free?

10 Free Fonts That’ll Change Your Life

I’m going to introduce you to your new best friends, aka 10 open source fonts that’ll spice up all your designs in 2020.

You ready to do this? Cause I am.

Let’s jump right into it.

1. Manrope

We’re going to start today’s list of 10 free fonts with Manrope.

I’ve mentioned this font before, but that’s only because it’s my all-time favorite.

It’s modern, it’s sleek, it’s everything you want in a free font.

Its versatility is what really hits home for me, and that’s why I recommend that you start incorporating it into all your new designs.

2. Inter

You might’ve seen Inter step onto the scene recently, and I truly believe it’s here to stay.

Inter has become a staple font for many, so don’t be the exception! Add this font to your collection and start adding it to your design projects right away.

3. Public Sans

I can’t lie, I love a good sans serif font.

But what’s special about this font is that it was developed for the US government and is all over their websites and is a huge part of their design.

It looks very similar to another open-source font that you may know of, Libre Franklin.

4. Alice

It’s time to step away from the sans serif, and into the serifs.

When I saw the typeface Alice, I knew it was going to have a new and special spot in my font collection.

Alice is a very unique serif font, which seems kinda old-fashioned, but at the same time, pretty modern.

You can find this font on Google Fonts!

5. Urbanist

Another one of my all time favorite free fonts is Urbanist.

This geometric sans serif is most definitely a modern font that can be used in a variety of different projects.

From logos to headlines, this font is perfect to add to your collection of fonts.

What are you waiting for? Go and download it now!

6. Evolventa

Were you surprised when you saw another modern sans serif?

Me neither.

Evolventa is a Cyrillic extension of the open-source URW Gothic L font family.

7. Object Sans

If this font isn’t eye-catching, I don’t know what is.

If you’re looking for the perfect combination of Swiss neo-grotesque and geometric fonts, then Object Sans is the one for you.

This font is perfect to replace any of those pricey fonts, because it looks just as good as the rest of them.

8. Lunchtype

I love a good back-story to any font that I use, and Lunchtype has one of the best.

The designer created this font during a lunch break as part of a 100-day project.

We love a good lunch-break, and I can’t deny that that’s when some of my best ideas come to me.

Food is life, and so is an amazing font.

9. Work Sans

What’s cuter than a good font and a hedgehog?

I’ll answer that for you.

nothing.

Absolutely nothing.

10. Monoid

And finally, we’ve come to our last free font, which is Monoid.

Monoid is another great font that we know you’ll love and be using on the daily, if you code.

“The clever thing about Monoid is that it has font-awesome built into it, which they call Monoisome. This means when writing code, you can pop a few icons in there easily. Monoid looks just as great when you’re after highly readable website body text.”

Let us know in the comments which font was your favorite on this list and which ones you’ll be incorporating into your daily design life.

Until next time,

Stay creative folks!

Read More at 10 Open Source Fonts That Are Actually Amazing | Free Fonts 2020

Categories: Designing, Others

Chapter 4: Search

September 15th, 2020

Previously in web history…

After an influx of rapid browser development following the creation of the web, Mosaic becomes the popular choice. Recognizing the commercial potential of the web, a team at O’Reilly builds GNN, the first commercial website. With something to browse with, and something to browse for, more and more people begin to turn to the web. Many create small, personal sites of their own. The best the web has to offer becomes almost impossible to find.

eBay had had enough of these spiders. They were fending them off by the thousands. Their servers buzzed with nonstop activity; a relentless stream of trespassers. One aggressor, however, towered above the rest. Bidder’s Edge, which billed itself as an auction aggregator, would routinely crawl the pages of eBay to extract its content and list it on its own site alongside other auction listings.

The famed auction site had unsuccessfully tried blocking Bidder’s Edge in the past. Like an elaborate game of Whac-A-Mole, they would restrict the IP address of a Bidder’s Edge server, only to be breached once again by a proxy server with a new one. Technology had failed. Litigation was next.

eBay filed suit against Bidder’s Edge in December of 1999, citing a handful of causes. That included “an ancient trespass theory known to legal scholars as trespass to chattels, basically a trespass or interference with real property — objects, animals, or, in this case, servers.” eBay, in other words, was arguing that Bidder’s Edge was trespassing — in the most medieval sense of that word — on their servers. In order for it to constitute trespass to chattels, eBay had to prove that the trespassers were causing harm. That their servers were buckling under the load, they argued, was evidence of that harm.

eBay in 1999

Judge Ronald M. Whyte found that last bit compelling. Quite a bit of back and forth followed, in one of the strangest lawsuits of a new era that included the phrase “rude robots” entering the official court record. These robots — as opposed to the “polite” ones — ignored eBay’s requests to block spidering on their sites, and made every attempt to circumvent counter measures. They were, by the judge’s estimation, trespassing. Whyte granted an injunction to stop Bidder’s Edge from crawling eBay until it was all sorted out.

Several appeals and countersuits and counter-appeals later, the matter was settled. Bidder’s Edge paid eBay an undisclosed amount and promptly shut their doors. eBay had won this particular battle. They had gotten rid of the robots. But the actual war was already lost. The robots — rude or otherwise — were already here.


If not for Stanford University, web search may have been lost. It is the birthplace of Yahoo!, Google and Excite. It ran the servers that ran the code that ran the first search engines. The founders of both Yahoo! and Google are alumni. But many of the most prominent players in search were not in the computer science department. They were in the symbolic systems program.

Symbolic systems was created at Stanford in 1985 as a study of the “relationship between natural and artificial systems that represent, process, and act on information.” Its interdisciplinary approach is rooted at the intersection of several fields: linguistics, mathematics, semiotics, psychology, philosophy, and computer science.

These are the same fields of study one would find at the heart of artificial intelligence research in the second half of the 20th century. But this isn’t A.I. in its modern smart-home manifestation; it’s the more classical notion conceived by computer scientists as a roadmap to the future of computing technology. It is the understanding of machines as a way to augment the human mind. That parallel is not by accident. One of the most important areas of study in the symbolic systems program is artificial intelligence.

Numbered among the alumni of the program are several of the founders of Excite and Srinija Srinivasan, the fourth employee at Yahoo!. Her work in artificial intelligence led to a position at the ambitious A.I. research lab Cyc right out of college.

Marissa Mayer, an early employee at Google and, later, Yahoo!’s CEO, also drew on A.I. research during her time in the symbolic systems program. Her groundbreaking thesis project used natural language processing to help its users find the best flights through a simple conversation with a computer. “You look at how people learn, how people reason, and ask a computer to do the same things. It’s like studying the brain without the gore,” she would later say of the program.

Marissa Mayer in 1999

Search on the web stems from this one program at one institution at one brief moment in time. Not everyone involved in search engines studied that program — the founders of both Yahoo! and Google, for instance, were graduate students of computer science. But the ideology of search is deeply rooted in the tradition of artificial intelligence. The goal of search, after all, is to extract from the brain a question, and use machines to provide a suitable answer.

At Yahoo!, the principles of artificial intelligence acted as a guide, but it would be aided by human perspective. Web crawlers, like Excite, would bear the burden of users’ queries and attempt to map websites programmatically to provide intelligent results.

However, it would be at Google that A.I. would become an explicitly stated goal. Steven Levy, who wrote the authoritative book on the history of Google, In the Plex (https://bookshop.org/books/in-the-plex-how-google-thinks-works-and-shapes-our-lives/9781416596585), describes Google as a “vehicle to realize the dream of artificial intelligence in augmenting humanity.” Founders Larry Page and Sergey Brin would mention A.I. constantly. They even brought it up in their first press conference.

The difference would be a matter of approach. A tension that would come to dominate search for half a decade. The directory versus the crawler. The precision of human influence versus the completeness of machines. Surfers would be on one side and, on the other, spiders. Only one would survive.


The first spiders were crude. They felt around in the dark until they found the edge of the web. Then they returned home. Sometimes they gathered little bits of information about the websites they crawled. In the beginning, they gathered nothing at all.

One of the earliest web crawlers was developed at MIT by Matthew Gray. He used his World Wide Wanderer to go and find every website on the web. He wasn’t interested in the content of those sites, he merely wanted to count them up. In the summer of 1993, the first time he sent his crawler out, it got to 130. A year later, it would count 3,000. By 1995, that number grew to just shy of 30,000.

Like many of his peers in the search engine business, Gray was a disciple of information retrieval, a subset of computer science dedicated to knowledge sharing. In practice, information retrieval often involves a robot (also known as “spiders, crawlers, wanderers, and worms”) that crawls through digital documents and programmatically collects their contents. They are then parsed and stored in a centralized “index,” a shortcut that eliminates the need to go and crawl every document each time a search is made. Keeping that index up to date is a constant struggle, and robots need to be vigilant; going back out and re-crawling information on a near constant basis.

The World Wide Web posed a problematic puzzle. Rather than a predictable set of documents, a theoretically infinite number of websites could live on the web. These needed to be stored in a central index — which would somehow be kept up to date. And most importantly, the content of those sites needed to be connected to whatever somebody wanted to search, on the fly and in seconds. The challenge proved irresistible for some information retrieval researchers and academics. People like Jonathan Fletcher.

Fletcher, a former graduate and IT employee at the University of Stirling in Scotland, didn’t like how hard it was to find websites. At the time, people relied on manual lists, like the WWW Virtual Library maintained at CERN, or Mosaic’s “What’s New” list (https://css-tricks.com/chapter-3-the-website/) that they updated daily. Fletcher wanted to handle it differently. “With a degree in computing science and an idea that there had to be a better way, I decided to write something that would go and look for me.”

He built Jumpstation in 1993, one of the earliest examples of a searchable index. His crawler would go out, following as many links as it could, and bring them back to a searchable, centralized database. Then it would start over. To solve for the issue of the web’s limitless vastness, Fletcher began by crawling only the titles and some metadata from each webpage. That kept his index relatively small, but it also restricted search to the titles of pages.

Fletcher was not alone. After tinkering for several months, WebCrawler launched in April of 1994 out of the University of Washington. It holds the distinction of being the first search engine to crawl entire webpages and make them searchable. By November of that year, WebCrawler had served 1 million queries. At Carnegie Mellon, Michael Mauldin released his own spider-based search engine variant named for the Latin translation of wolf spider, Lycos. By 1995, it had indexed over a million webpages.

Search didn’t stay in universities long. Search engines had a unique utility for wayward web users on the hunt for the perfect site. Many users started their web sessions on a search engine. Netscape Navigator — the number one browser for new web users — connected users directly to search engines on their homepage. Getting listed by Netscape meant eyeballs. And eyeballs meant lucrative advertising deals.

In the second half of the 1990’s, a number of major players entered the search engine market. InfoSeek, initially a paid search option, was picked up by Disney, and soon became the default search engine for Netscape. AOL swooped in and purchased WebCrawler as part of a bold strategy to remain competitive on the web. Lycos was purchased by a venture capitalist who transformed it into a fully commercial enterprise.

Excite.com, another crawler started by Stanford alumni and a rising star in the search engine game for its depth and accuracy of results, was offered three million dollars not long after they launched. Its six co-founders lined up two couches, one across from another, and talked it out all night. They decided to stick with the product and bring in a new CEO. There would be many more millions to be made.

Excite in 1996

AltaVista, already a bit late to the game at the end of 1995, was created by the Digital Equipment Corporation. It was initially built to demonstrate the processing power of DEC computers. They quickly realized that their multithreaded crawler was able to index websites at a far quicker rate than their competitors. AltaVista would routinely deploy its crawlers — what one researcher referred to as a “brood of spiders” — to index thousands of sites at a time.

As a result, AltaVista was able to index virtually the entire web, nearly 10 million webpages at launch. By the following year, in 1996, they’d be indexing over 100 million. Because of the efficiency and performance of their machines, AltaVista was able to solve the scalability problem. Unlike some of their predecessors, they were able to make the full content of websites searchable, and they re-crawled sites every few weeks, a much more rapid pace than early competitors, who could take months to update their index. They set the standard for the depth and scope of web crawlers.

AltaVista in 1996

Never fully at rest, AltaVista used its search engine as a tool for innovation, experimenting with natural language processing, translation tools, and multi-lingual search. They were often ahead of their time, offering video and image search years before that would come to be an expected feature.

Those spiders that had not been swept up in the fervor couldn’t keep up. The universities hosting the first search engines were not at all pleased to see their internet connections bloated with traffic that wasn’t even related to the university. Most universities forced the first experimental search engines, like Jumpstation, to shut down. Except, that is, at Stanford.


Stanford’s history with technological innovation begins in the second half of the 20th century. The university was, at that point, teetering on the edge of becoming a second-tier institution. They had been losing ground and lucrative contracts to their competitors on the East Coast. Harvard and MIT became the sites of a groundswell of research in the wake of World War II. Stanford was being left behind.

In 1951, in a bid to reverse course on their downward trajectory, Dean of Engineering Frederick Terman brokered a deal with the city of Palo Alto. Stanford University agreed to annex 700 acres of land for a new industrial park that upstart companies in California could use. Stanford would get proximity to energetic innovation. The businesses that chose to move there would gain unique access to the Stanford student body for use on their product development. And the city of Palo Alto would get an influx of new taxes.

Hewlett-Packard was one of the first companies to move in. They ushered in a new era of computing-focused industry that would soon be known as Silicon Valley. The Stanford Industrial Park (later renamed the Stanford Research Park) would eventually host Xerox during a time of rapid success and experimentation. Facebook would spend their nascent years there, growing into the behemoth it would become. At the center of it all was Stanford.

The research park transformed the university from one of stagnation to a site of entrepreneurship and cutting-edge technology. It put them at the heart of the tech industry. Stanford would embed itself — both logistically and financially — in the crucial technological developments of the second half of the 20th century, including the internet and the World Wide Web.

The potential success of Yahoo!, therefore, did not go unnoticed.


Jerry Yang and David Filo were not supposed to be working on Yahoo!. They were, however, supposed to be working together. They had met years ago, when David was Jerry’s teaching assistant in the Stanford computer science program. Yang eventually joined Filo as a graduate student and — after building a strong rapport — they soon found themselves working on a project together.

As they crammed themselves into a university trailer to begin working through their doctoral project, their relationship became what Yang has often described as perfectly balanced. “We’re both extremely tolerant of each other, but extremely critical of everything else. We’re both extremely stubborn, but very unstubborn when it comes to just understanding where we need to go. We give each other the space we need, but also help each other when we need it.”

In 1994, Filo showed Yang the web. In just a single moment, their focus shifted. They pushed their intended computer science thesis to the side, procrastinating on it by immersing themselves into the depths of the World Wide Web. Days turned into weeks which turned into months of surfing the web and trading links. The two eventually decided to combine their lists in a single place, a website hosted on their Stanford internet connection. It was called Jerry and David’s Guide to the World Wide Web, launched first to Stanford students in 1993 and then to the world in January of 1994. As catchy as that name wasn’t, the idea (and traffic) took off as friends shared with other friends.

Jerry and David’s Guide was a directory. Like the virtual library started at CERN, Yang and Filo organized websites into various categories that they made up on the fly. Some of these categories had strange or salacious names. Others were exactly what you might expect. When one category got too big, they split it apart. It was ad-hoc and clumsy, but not without charm. Through their classifications, Yang and Filo had given their site a personality. Their personality. In later years, Yang would commonly refer to this as the “voice of Yahoo!”

That voice became a guide — as the site’s original name suggested — for new users of the web. Their web crawling competitors were far more adept at the art of indexing millions of sites at a time. Yang and Filo’s site featured only a small subset of the web. But it was, at least by their estimation, the best of what the web had to offer. It was the cool web. It was also a web far easier to navigate than ever before.

Jerry Yang (left) and David Filo (right) in 1995 (Yahoo, via Flickr)

At the end of 1994, Yang and Filo renamed their site to Yahoo! (an awkward forced acronym for Yet Another Hierarchical Officious Oracle). By then, they were getting almost a hundred thousand hits a day, sometimes temporarily taking down Stanford’s internet in the process. Most other universities would have closed down the site and told them to get back to work. But not Stanford. Stanford had spent decades preparing for on-campus businesses just like this one. They kept the server running, and encouraged its creators to stake their own path in Silicon Valley.

Throughout 1994, Netscape had included Yahoo! in their browser. There was a button in the toolbar labeled “Net Directory” that linked directly to Yahoo!. Marc Andreessen, believing in the site’s future, agreed to host their website on Netscape’s servers until they were able to get on steady ground.

Yahoo! homepage in Netscape Navigator, circa 1994

Yang and Filo rolled up their sleeves, and began talking to investors. It wouldn’t take long. By the spring of 1996, they would have a new CEO and hold their own record-setting IPO, outstripping even their gracious host, Netscape. By then, they had become the most popular destination on the web by a wide margin.

In the meantime, the web had grown far beyond the grasp of two friends swapping links. They had managed to categorize tens of thousands of sites, but there were hundreds of thousands more to crawl. “I picture Jerry Yang as Charlie Chaplin in Modern Times,” one journalist described, “confronted with an endless stream of new work that is only increasing in speed.” The task of organizing sites would have to go to somebody else. Yang and Filo found help in a fellow Stanford alum, someone they had met years ago while studying abroad together in Japan, Srinija Srinivasan, a graduate of the symbolic systems program. Many of the earliest hires at Yahoo! were given slightly absurd titles that always ended in “Yahoo.” Yang and Filo went by Chief Yahoos. Srinivasan’s job title was Ontological Yahoo.

That is a deliberate and precise job title, and it was not selected by accident. Ontology is the study of being, an attempt to break the world into its component parts. It has manifested in many traditions throughout history and the world, but it is most closely associated with the followers of Socrates, in the work of Plato, and later in the groundbreaking text Metaphysics, written by Aristotle. Ontology asks the question “What exists?” and uses it as a thought experiment to construct an ideology of being and essence.

As computers blinked into existence, ontology found a new meaning in the emerging field of artificial intelligence. It was adapted to fit the more formal hierarchical categorizations required for a machine to see the world; to think about the world. Ontology became a fundamental way to describe the way intelligent machines break things down into categories and share knowledge.

The dueling definitions of the ontology of metaphysics and computer science would have been familiar to Srinija Srinivasan from her time at Stanford. The combination of philosophy and artificial intelligence in her studies gave her a unique perspective on hierarchical classifications. It was this experience that she brought to her first job after college at the Cyc Project, an artificial intelligence research lab with a bold project: to teach a computer common sense.

Srinija Srinivasan (Getty Images/James D. Wilson)

At Yahoo!, her task was no less bold. When someone looked for something on the site, they didn’t want back a random list of relevant results. They wanted the result they were actually thinking about, but didn’t quite know how to describe. Yahoo! had to — in a manner of seconds — figure out what its users really wanted. Much like her work in artificial intelligence, Srinivasan needed to teach Yahoo! how to think about a query and infer the right results.

To do that, she would need to expand the voice of Yahoo! to thousands of more websites in dozens of categories and sub-categories without losing the point of view established by Jerry and David. She would need to scale that perspective. “This is not a perfunctory file-keeping exercise. This is defining the nature of being,” she once said of her project. “Categories and classifications are the basis for each of our worldviews.”

At a steady pace, she mapped an ontology of human experience onto the site. She began breaking up the makeshift categories she inherited from the site’s creators, re-constituting them into more concrete and findable indexes. She created new categories and destroyed old ones. She sub-divided existing subjects into new, more precise ones. She began cross-linking results so that they could live within multiple categories. Within a few months she had overhauled the site with a fresh hierarchy.

That hierarchical ontology, however, was merely a guideline. The strength of Yahoo!’s expansion lay in the 50 or so content managers she had hired in the meantime. They were known as surfers. Their job was to surf the web — and organize it.

Each surfer was coached in the methodology of Yahoo! but was left with a surprising amount of editorial freedom. They cultivated the directory with their own interests, meticulously deliberating over websites and where they belonged. Each decision could be strenuous, and there were missteps and incorrectly categorized items along the way. But by allowing individual personality to dictate hierarchical choices, Yahoo! retained its voice.

They gathered as many sites as they could, adding hundreds each day. Yahoo! surfers did not reveal everything on the web to their site’s visitors. They showed them what was cool. And that meant everything to users grasping for the very first time what the web could do.


At the end of 1995, the Yahoo! staff was watching their traffic closely. Huddled around consoles, employees would check their logs again and again, looking for a drop in visitors. Yahoo! had been the destination for the “Internet Directory” button on Netscape for years. It had been the source of their growth and traffic. Netscape had made the decision, at the last minute (and seemingly at random), to drop Yahoo!, replacing them with the new kids on the block, Excite.com. Best case scenario: a manageable drop. Worst case: the demise of Yahoo!.

But the drop never came. A day went by, and then another. And then a week. And then a few weeks. And Yahoo! remained the most popular website. Tim Brady, one of Yahoo!’s first employees, describes the moment with earnest surprise. “It was like the floor was pulled out in a matter of two days, and we were still standing. We were looking around, waiting for things to collapse in a lot of ways. And we were just like, I guess we’re on our own now.”

Netscape wouldn’t keep their directory button exclusive for long. By 1996, they would begin allowing other search engines to be listed on their browser’s “search” feature. A user could click a button and a drop-down of options would appear, for a fee. Yahoo! bought themselves back in to the drop-down. They were joined by four other search engines, Lycos, InfoSeek, Excite, and AltaVista.

By that time, Yahoo! was the unrivaled leader. It had transformed its first mover advantage into a new strategy, one bolstered by a successful IPO and an influx of new investment. Yahoo! wanted to be much more than a simple search engine. Their site’s transformation would eventually be called a portal. It was a central location for every possible need on the web. Through a number of product expansions and aggressive acquisitions, Yahoo! released a new suite of branded digital products. Need to send an email? Try Yahoo! Mail. Looking to create a website? There’s Yahoo! GeoCities. Want to track your schedule? Use Yahoo! Calendar. And on and on the list went.

Yahoo! in 1996

Competitors rushed to fill the vacuum of the #2 slot. In April of 1996, Yahoo!, Lycos, and Excite all went public to soaring stock prices. Infoseek had their initial offering only a few months later. Big deals collided with bold blueprints for the future. Excite began positioning itself as a more vibrant alternative to Yahoo! with more accurate search results from a larger slice of the web. Lycos, meanwhile, all but abandoned the search engine that had brought them initial success to chase after the portal-based game plan that had been a windfall for Yahoo!.

The media dubbed the competition the “portal wars,” a fleeting moment in web history when millions of dollars poured into a single strategy. To be the biggest, best, centralized portal for web surfers. Any service that offered users a destination on the web was thrown into the arena. Nothing short of the future of the web (and a billion dollar advertising industry) was at stake.

In some ways, though, the portal wars were over before they started. When Excite announced a gigantic merger with @Home, an Internet Service Provider, to combine their services, not everyone thought it was a wise move. “AOL and Yahoo! were already in the lead,” one investor and cable industry veteran noted, “and there was no room for a number three portal.” AOL had just enough muscle and influence to elbow their way into the #2 slot, nipping at the heels of Yahoo!. Everyone else would have to go toe-to-toe with Goliath. None were ever able to pull it off.

Battling their way to market dominance, most search engines had simply lost track of search. Buried somewhere next to your email and stock ticker and sports feed was, in most cases, a second-rate search engine you could use to find things — only not often and not well. That’s why it was so refreshing when another search engine out of Stanford launched with just a single search box and two buttons, its bright and multicolored logo plastered across the top.


A few short years after it launched, Google was on the shortlist of most popular sites. In an interview with PBS Newshour in 2002, co-founder Larry Page described their long-term vision. “And, actually, the ultimate search engine, which would understand, you know, exactly what you wanted when you typed in a query, and it would give you the exact right thing back, in computer science we call that artificial intelligence.”

Google could have started anywhere. It could have started with anything. One employee recalls an early conversation with the site’s founders where he was told “we are not really interested in search. We are making an A.I.” Larry Page and Sergey Brin, the creators of Google, were not trying to create the web’s greatest search engine. They were trying to create the web’s most intelligent website. Search was only their most logical starting point.

Imprecise and clumsy, the spider-based search engines of 1996 faced an uphill battle. AltaVista had proved that the entirety of the web, tens of millions of webpages, could be indexed. But unless you knew your way around a few boolean logic commands, it was hard to get the computer to return the right results. The robots were not yet ready to infer, in Page’s words, “exactly what you wanted.”

Yahoo! had filled in these cracks of technology with their surfers. The surfers were able to course-correct the computers, designing their directory piece by piece rather than relying on an algorithm. Yahoo! became an arbiter of a certain kind of online chic; tastemakers reimagined for the information age. The surfers of Yahoo! set trends that would last for years. Your site would live or die by their hand. Machines couldn’t do that work on their own. If you wanted your machines to be intelligent, you needed people to guide them.

Page and Brin disagreed. They believed that computers could handle the problem just fine. And they aimed to prove it.

That unflappable confidence would come to define Google far more than their “don’t be evil” motto. In the beginning, their laser focus on designing a different future for the web would leave them blind to the day-to-day grind of the present. On not one, but two occasions, checks made out to the company for hundreds of thousands of dollars were left in desk drawers or car trunks until somebody finally made the time to deposit them. And they often did things differently. Google’s offices, for instance, were built to simulate a college dorm, an environment the founders felt was most conducive to big ideas.

Google would eventually build a literal empire on top of a sophisticated, world-class infrastructure of their own design, fueled by the most elaborate and complex (and arguably invasive) advertising mechanism ever built. There are few companies that loom as large as Google. This one, like others, started at Stanford.


Even among the most renowned artificial intelligence experts, Terry Winograd, a computer scientist and Stanford professor, stands out in the crowd. He was also Larry Page’s advisor and mentor when Page was a graduate student in the computer science department. Winograd has often recalled the unorthodox proposals he would receive from Page for his thesis project, some of which involved “space tethers or solar kites.” “It was science fiction more than computer science,” he would later remark.

But for all of his fanciful flights of imagination, Page always returned to the World Wide Web. He found its hyperlink structure mesmerizing. Its one-way links — a crucial ingredient in the web’s success — had led to a colossal proliferation of new websites. In 1996, when Page first began looking at the web, there were tens of thousands of sites being added every week. The master stroke of the web was to enable links that only traveled in one direction. That allowed the web to be decentralized, but without a central database tracking links, it was nearly impossible to collect a list of all of the sites that linked to a particular webpage. Page wanted to build a graph of who was linking to who; an index he could use to cross-reference related websites.

Page understood that the hyperlink was a digital analog to academic citations. A key indicator of the value of a particular academic paper is the number of times it has been cited. If a paper is cited often (by other high-quality papers), it is easier to vouch for its reliability. The web works the same way. The more often your site is linked to (what’s known as a backlink), the more dependable and accurate it is likely to be.

Theoretically, you can determine the value of a website by adding up all of the other websites that link to it. That’s only one layer though. If 100 sites link back to you, but each of them has only ever been linked to one time, that’s far less valuable than if five sites that each have been linked to 100 times link back to you. So it’s not simply how many links you have, but the quality of those links. If you take both of those dimensions and aggregate sites using backlinks as a criterion, you can very quickly start to assemble a list of sites ordered by quality.

John Battelle describes the technical challenge facing Page in his own retelling of the Google story, The Search.

Page realized that a raw count of links to a page would be a useful guide to that page’s rank. He also saw that each link needed its own ranking, based on the link count of its originating page. But such an approach creates a difficult and recursive mathematical challenge — you not only have to count a particular page’s links, you also have to count the links attached to the links. The math gets complicated rather quickly.

Fortunately, Page already knew a math prodigy. Sergey Brin had proven his brilliance to the world a number of times before he began a doctoral program in the Stanford computer science department. Brin and Page had crossed paths on several occasions, a relationship that began on rocky ground but grew towards mutual respect. The mathematical puzzle at the center of Page’s idea was far too enticing for Brin to pass up.

He got to work on a solution. “Basically we convert the entire Web into a big equation, with several hundred million variables,” he would later explain, “which are the page ranks of all the Web pages, and billions of terms, which are the links. And we’re able to solve that equation.” Scott Hassan, the seldom talked about third co-founder of Google who developed their first web crawler, summed it up a bit more concisely, describing Google’s algorithm as an attempt to “surf the web backward!”

The result was PageRank — as in Larry Page, not webpage. Brin, Page, and Hassan developed an algorithm that could trace the backlinks of a site to determine the quality of a particular webpage. The higher the value of a site’s backlinks, the higher up the rankings it climbed. They had discovered what so many others had missed. If you trained a machine on the right source — backlinks — you could get remarkable results.
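
To get a feel for the recursive math Battelle describes, here is a minimal sketch of the iterative idea behind PageRank, written in TypeScript. To be clear, this is not Google’s production algorithm; the tiny link graph, the damping factor, and the iteration count below are all illustrative assumptions.

type LinkGraph = { [page: string]: string[] }; // page -> pages it links out to

function pageRank(graph: LinkGraph, damping = 0.85, iterations = 20): { [page: string]: number } {
  const pages = Object.keys(graph);
  const n = pages.length;

  // Every page starts with an equal share of rank.
  let ranks: { [page: string]: number } = {};
  for (const p of pages) ranks[p] = 1 / n;

  for (let i = 0; i < iterations; i++) {
    // Each page keeps a small baseline of rank...
    const next: { [page: string]: number } = {};
    for (const p of pages) next[p] = (1 - damping) / n;

    // ...and hands the rest forward, split evenly among the pages it links to.
    for (const p of pages) {
      const outLinks = graph[p];
      if (outLinks.length === 0) continue; // dangling pages simply leak rank in this sketch
      const share = (damping * ranks[p]) / outLinks.length;
      for (const target of outLinks) {
        if (target in next) next[target] += share;
      }
    }
    ranks = next;
  }

  return ranks;
}

// Tiny made-up graph: "a" is linked to by both "b" and "c", so it floats to the top.
const ranks = pageRank({ a: ["b"], b: ["a"], c: ["a", "b"] });
console.log(Object.entries(ranks).sort((x, y) => y[1] - x[1]));

Run enough passes and the scores settle, with pages that collect links from other well-linked pages floating to the top. That is Hassan’s “surf the web backward” trick in miniature.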

It was only when they began matching their rankings to search queries that they realized PageRank fit best in a search engine. They called their search engine Google. It launched on Stanford’s internet connection in August of 1996.

Google in 1998

Google solved the relevancy problem that had plagued online search since its earliest days. Crawlers like Lycos, AltaVista and Excite were able to provide a list of webpages that matched a particular search. They just weren’t able to sort them right, so you had to go digging to find the result you wanted. Google’s rankings were immediately relevant. The first page of your search usually had what you needed. They were so confident in their results they added an “I’m Feeling Lucky” button which took users directly to the first result for their search.

Google’s growth in their early days was not unlike Yahoo!’s in theirs. They spread through word of mouth, from friends to friends of friends. By 1997, they had grown big enough to put a strain on the Stanford network, something Yang and Filo had done only a couple of years earlier. Stanford once again recognized the possibility. It did not push Google off its servers. Instead, Stanford’s advisors pushed Page and Brin in a commercial direction.

Initially, the founders sought to sell or license their algorithm to other search engines. They took meetings with Yahoo!, Infoseek, and Excite. No one could see the value; those companies were focused on portals. In a move that would soon sound absurd, each of them passed up the opportunity to buy Google for a million dollars or less, and Page and Brin could not find a partner that recognized their vision.

One Stanford faculty member was able to connect them with a few investors, including Jeff Bezos and David Cheriton (which got them those first few checks that sat in a desk drawer for weeks). They formally incorporated in September of 1998, moving into a friend’s garage and bringing a few early employees along, including symbolic systems alumna Marissa Mayer.

Larry Page (left) and Sergey Brin (right) started Google in a friend’s garage.

Even backed by a million dollar investment, the Google founders maintained a philosophy of frugality, simplicity, and swiftness. Despite occasional urging from their investors, they resisted the portal strategy and remained focused on search. They continued tweaking their algorithm and working on the accuracy of their results. They focused on their machines. They wanted to take the words that someone searched for and turn them into something actually meaningful. If you weren’t able to find the thing you were looking for in the top three results, Google had failed.

Google was followed by a cloud of hype and positive buzz in the press. Writing in Newsweek, Steven Levy described Google as a “high-tech version of the Oracle of Delphi, positioning everyone a mouse click away from the answers to the most arcane questions — and delivering simple answers so efficiently that the process becomes addictive.” It was around this time that “googling” — a verb form of the site synonymous with search — entered the common vernacular. The portal wars were still raging, but Google was poking its head up as a calm, precise alternative to the noise.

At the end of 1998, they were serving up ten thousand searches a day. A year later, that would jump to seven million a day. But quietly, behind the scenes, they began assembling the pieces of an empire.

As the web grew, technologists and journalists predicted the end of Google; they would never be able to keep up. But they did, outlasting a dying roster of competitors. In 2001, Excite went bankrupt, Lycos closed down, and Disney suspended Infoseek. Google climbed up and replaced them. It wouldn’t be until 2006 that Google would finally overtake Yahoo! as the number one website. But by then, the company would transform into something else entirely.

After securing another round of investment in 1999, Google moved into their new headquarters and brought on an army of new employees. The list of fresh recruits included former AltaVista engineers and leading artificial intelligence expert Peter Norvig. Google put an unprecedented focus on advancements in technology. Better servers. Faster spiders. Bigger indexes. The engineers inside Google invented a web infrastructure that had, up to that point, been only theoretical.

They trained their machines on new things and applied them to new products. But regardless of the application, whether translation or email or pay-per-click advertising, they all rested on the same premise. Machines can augment and re-imagine human intelligence, and they can do it at limitless scale. Google took the value proposition of artificial intelligence and brought it into the mainstream.

In 2001, Page and Brin brought in Silicon Valley veteran Eric Schmidt to run things as their CEO, a role he would occupy for a decade. He would oversee the company during its time of greatest growth and innovation. Google employee #4 Heather Cairns recalls his first days on the job. “He did this sort of public address with the company and he said, ‘I want you to know who your real competition is.’ He said, ‘It’s Microsoft.’ And everyone went, What?”

Bill Gates would later say, “In the search engine business, Google blew away the early innovators, just blew them away.” There would come a time when Google and Microsoft would come face to face. Eric Schmidt was correct about where Google was going. But it would take years for Microsoft to recognize Google as a threat. In the second half of the 1990s, they were too busy looking in their rearview mirror at another Silicon Valley upstart that had swept the digital world. Microsoft’s coming war with Netscape would subsume the web for over half a decade.



What is the Value of Browser Diversity?

September 15th, 2020 No comments

In 2018, Rachel Nabors made the point that browser diversity is similar to biological ecosystem diversity. There are literal advantages to more diversity. That article was written before Microsoft shut down Edge’s own rendering engine in favor of Chromium, and now the big shakeups at Mozilla have the topic of browser diversity on people’s minds again.

I really like Dave’s take on the matter. The diversity of browser engines makes web tech slow. Frustratingly slow, to many, but that slowness can bring value.

There’s a lot of value in slow thinking. You use the non-lizard side of your brain. You make more deliberate decisions. You prioritize design over instant gratification. You can check your gut instincts and validate your hypothesis before incurring mountains of technical debt.

I’d bet you a dollar that the less engines we have, the faster things get. Fast can be satisfying in the moment, but doesn’t make for the best brisket.

If we do see a major reduction in browser diversity, I think we lose the intentional slowness and the cooperation mechanisms we have in place. Who knows what will happen, but my hope is that just like iron can sharpen iron, maybe chromium can sharpen chromium.




Virtual Event Registrations with Wufoo Forms

September 15th, 2020 No comments

(This is a sponsored post.)

We’ve seen many events shift from in-person to online this year. That may have required a huge change to how you collect attendee registrations in the past, but with a paid Wufoo account and Zoom — along with a sprinkle of Zapier — it’s easier than ever to go virtual.

First, set up a Zoom call

It doesn’t have to start right now but can be scheduled in advance so we have something to connect to. In Zoom, that’s just a click of the giant “Schedule” button on the welcome screen once the app has been launched.

“New Meeting” will launch something right away, whereas “Schedule” will set a call up in advance to start at a future date and time.

Next, we need a registration form

We’re talking about Wufoo’s bread and butter: making online forms! We can create one from scratch, of course, but Wufoo makes it even easier with a set of pre-made options that are specifically designed for registration.

Any of these can be used as-is, or as a starting point to make your own.

Whether you start with a clean slate or a template, the key things you’re going to want to collect are:

  • Name
  • Email address

Seriously, that’s it. Anything else is icing on the cake that can be used to collect additional information about attendees, like their age, gender, shirt size, allergies, or whatever else you think will be helpful to make a better event.

Is this a paid event?

Many events are! If you want to charge a fee for the event, Wufoo integrates with Stripe, Square, PayPal and a slew of other payment gateways that make collecting payments rather trivial. Plus, transactions are protected by 256-bit SSL encryption that’s super secure and PCI-compliant.

Purchasing a single ticket is pretty straightforward, but let’s say you want to allow folks to purchase multiple tickets at a time or have multiple tiers of ticket pricing. Not a problem at all in Wufoo! For example, it’s possible to set prices by the answer provided in a field.

Now, when someone selects an option in the “Number of Tickets” field, a price will be calculated.

Is there a maximum number of “seats” available?

You may be cool with an unlimited number of attendees. But if you need to limit the head count, check out Wufoo’s Max Quantity feature, which is like creating a pool of tickets that each registration subtracts from. This is especially useful to create a more “intimate” presentation for, say, workshops or group activities.

The “Max Quantities” feature can restrict the number of people who can register for a specific workshop.

Connect to Zoom

We’re using Zoom in this example, but Wufoo is capable of connecting to other video services, including join.me, GoToMeeting, and Cisco Webex.

Wufoo’s integrations come by way of Zapier. If you’re new to it, Zapier is this thing that basically connects apps together, establishing communication between their APIs so that they interact with each other when something happens. In this case, when someone signs up for the virtual event, we want to add them as a guest on the Zoom call.

So, go into your Zapier account (or set one up for free). From there, we’ll create a new “zap,” which is Zapier’s slang for an app connection. That means we select Wufoo as the first app we want to use and Zoom as the app we want to connect to.

Once the apps have been chosen, Zapier provides options for what to “watch for” in Wufoo (a form submission, or “New Entry” in this case) and actions we want to take place in Zoom when that trigger happens (“Create Webinar Registrant” in this case).

Click the “Use Zap” button and Zapier will walk through the rest of the steps, including which form to use in this integration and which scheduled Zoom webinar the new registrants should be added to.

Watch the registrations roll in!

Well, yes. We are technically done at this point, but we’ll want to do some housecleaning before we can actually start collecting registrations:

  • Finish designing the form. Wufoo has a lot of nice design options, and even takes custom CSS to fine-tune the way things look.
  • Embed the form. The form needs to go somewhere if we want folks to use it. A Wufoo form can be embedded just about anywhere (hey, we use one on our own Contact page). Or, simply link it up and use the public URL Wufoo generates for the form.
  • Customize the receipt email. Once a transaction is made, the user will get a receipt emailed to their inbox. It’s a good idea to give it a little love so it’s personalized.
  • Create a reminder email. Sending a reminder a few days before the event is a nice way to give folks a heads up that the event is coming up. Wufoo also integrates with both Mailchimp and Campaign Monitor, both of which can be used in a zap that adds attendees to an email list and sends triggered and automated messages.

There you go, a registration flow for a virtual event that is powerful and doesn’t require any code! If you don’t have a Wufoo account already, get one now — it’s free and worth exploring all the interesting things it can do.




Form design

September 15th, 2020 No comments

A very digestible guide from Geri Reid on building forms. Not the code, but the design and UX principles that should guide the code.

Working on a design system for a bank has taught [me] a lot about forms. I’ve watched testing in our labs. I’ve worked alongside experts from specialist accessibility organisations. I’ve seen forms tested by disabled people and users of assistive technology. I’ve also read a lot of research.

From all this learning I’ve formed my own forms best-practice guidelines.

I always think about one code-related thing when it comes to general form advice: all inputs need an attached label.

<label for="name">Name:</label>
<input type="text" id="name" name="name">

<!-- or -->

<label>
  Name:
  <input type="text" name="name">
</label>

It’s HTML 101 stuff, but so many forms fail to do it. I once heard a story about blind college-bound high school students who were unable to apply to college specifically because they couldn’t figure out what the inputs wanted on a form. They started second-guessing whether they could do college on their own after that experience.

You know how The Onion prints the article “‘No Way To Prevent This,’ Says Only Nation Where This Regularly Happens” literally every single time there is a mass shooting? I feel like someone should make a website that publishes an article pointing to every single website that fails this one test with a headline like “‘No Way To Prevent This’, Says Website Where The Fix Would Be Typing A Handful Of Characters.”
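
If you ever want to catch that failure automatically, the check is small enough to paste into the browser console. Here is a rough sketch in TypeScript; the selector list and label heuristics are my own assumptions, and it is no substitute for a real accessibility audit, but it flags form controls that have no associated label:

// Rough sketch: flag form controls on the current page that have no associated label.
const fields = Array.from(
  document.querySelectorAll<HTMLInputElement | HTMLSelectElement | HTMLTextAreaElement>(
    "input, select, textarea"
  )
);

const unlabeled = fields.filter((field) => {
  if (field.type === "hidden") return false; // nothing visible to label
  const wrapped = field.closest("label") !== null; // a <label> wraps the control
  const referenced =
    field.id !== "" &&
    document.querySelector(`label[for="${field.id}"]`) !== null; // a <label for="..."> points at it
  const aria =
    field.hasAttribute("aria-label") || field.hasAttribute("aria-labelledby");
  return !(wrapped || referenced || aria);
});

console.log(`${unlabeled.length} unlabeled field(s)`, unlabeled);

It won’t catch every labeling mistake, but it catches the big one this post is about.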


