CSS Meditation #4: Select, style, adjust. Select, style, adjust. Select, sty…
originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
Prior to the World Wide Web, the act of writing remained consistent for centuries. Words were put on paper, and occasionally, people would read them. The tools might change — quills, printing presses, typewriters, pens, what have you — and an adventurous author might throw in imagery to complement their copy.
We all know that the web shook things up. With its arrival, writing could become interactive and dynamic. As web development progressed, the creative possibilities of digital content grew — and continue to grow — exponentially. The line between web writing and web technologies is blurry these days, and by and large, I think that’s a good thing, though it brings its own challenges. As a sometimes-engineer-sometimes-journalist, I straddle those worlds more than most and have grown to view the overlap as the future.
Writing for the web is different from traditional forms of writing. It is not a one-size-fits-all process. I’d like to share the benefits of writing content in digital formats like MDX using a personal project of mine as an example. And, by the end, my hope is to convince you of the greater writing benefits of MDX over more traditional formats.
A Little About Markdown
At its most basic, MDX is Markdown with components in it. For those not in the know, Markdown is a lightweight markup language created by John Gruber in 2003, and it’s everywhere today. GitHub, Trello, Discord — all sorts of sites and services use it. It’s especially popular for authoring blog posts, which makes sense as blogging is very much the digital equivalent of journaling. The syntax doesn’t “get in the way,” and many content management systems support it.
Markdown’s goal is an “easy-to-read and easy-to-write plain text format” that can readily be converted into XHTML/HTML if needed. Since its inception, Markdown was supposed to facilitate a writing workflow that integrated the physical act of writing with digital publishing.
We’ll get to actual examples later, but for the sake of explanation, compare a block of text written in HTML to the same text written in Markdown.
HTML is a pretty legible format as it is:
<h2>Post Title</h2>
<p>This is an example block of text written in HTML. We can link things up like this, or format the code with <strong>bolding</strong> and <em>italics</em>. We can also make lists of items:</p>
<ul>
<li>Like this item</li>
<li>Or this one</li>
<li>Perhaps a third?</li>
</ul>
<img src="image.avif" alt="And who doesn't enjoy an image every now and then?">
But Markdown is somehow even less invasive:
## Post Title
This is an example block of text written in Markdown. We can link things up like this or format the code with **bolding** and *italics*. We can also make lists of items:
- Like this item
- Or this one
- Perhaps a third?
I’ve become a Markdown disciple since I first learned to code. Its clean and relatively simple syntax and wide compatibilities make it no wonder that Markdown is as pervasive today as it is. Having structural semantics akin to HTML while preserving the flow of plain text writing is a good place to be.
However, it could be accused of being a bit too clean at times. If you want to communicate with words and images, you’re golden, but if you want to jazz things up, you’ll find yourself looking further afield for other options.
Gruber set out to create a “format for writing for the web,” and given its ongoing popularity, you have to say he succeeded, yet the web 20 years ago is a long way away from what it is today.
This is the all-important context for what I want to discuss about MDX because MDX is an offshoot of Markdown, only more capable of supporting richer forms of multimedia — and even user interaction. But before we get into that, we should also discuss the concept of web components because that’s the second significant piece that MDX brings to the table.
A Little About Components
The move towards richer multimedia websites and apps has led to a thriving ecosystem of web development frameworks and libraries, including React, Vue, Svelte, and Astro, to name a few. The idea that we can have reusable components that are not only interactive but also respond to each other has driven this growth and continues to push on evolving web platform features like web components.
MDX is like a bridge that connects Markdown with modern web tooling. Simply put, MDX weds Markdown’s simplicity with the creative possibilities of modern web frameworks.
By leaning into the overlaps rather than trying to abstract them away at all costs, we find untold potential for beautiful digital content.
A Case Study
My own experience with MDX took shape in a side project of mine: teeline.online. To cut a long story short, before I was a software engineer, I was a journalist, and part of my training involved learning a type of shorthand called Teeline. What it boils down to is ripping out as many superfluous letters as possible — I like to call this process “disemvowelment” — then using Teeline’s alphabet to write the remaining content. This has allowed people like me to write lots of words very quickly.
During my studies, I found online learning resources lacking, so as my engineering skills improved, I started working on the kind of site I’d have used when I was a student if it was available. Hence, teeline.online.
I built the teeline.online site with the Svelte framework for its components. The site’s centerpiece is a dataset of shorthand characters and combinations with which hundreds of outlines can be rendered, combined, and animated as SVG paths.
Likewise, Teeline’s “disemvowelment” script could be wired into a single component that I could then use as many times as I like.
Then, of course, as is only natural when working with components, I could combine them to show the Teeline evolution that converts longhand words into shorthand outlines.
The Markdown, meanwhile, looks as simple as this:
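A minimal sketch of what such an MDsveX file might look like (the headings and copy here are hypothetical; WordToOutline is one of the site’s Svelte components, mentioned again below):

```markdown
## The Teeline Alphabet

Teeline strips words down to their essential letters, then writes
what's left using its own alphabet.

<WordToOutline word="journalism" />

- Write the outline in one continuous stroke
- Drop superfluous vowels as you go
```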
It’s not exactly the sort of complex codebase you might expect for an app. Meanwhile, the files themselves can sit in a nice, tidy directory of their own:
The syllabus is neatly filed away in its own folder. With a bit of metadata sprinkled in, I have everything I need to render an entire section of the site using routing. The setup feels like a fluid medium between worlds. If you want to write with words and pictures, you can. If an idea comes to mind for a component that would better express what you’re going for, you can go make it and drop it in.
In fairness, a “WordToOutline” component like this might not mean much to Teeline newcomers, though with such a clear connection between the Markdown and the rendered pages, it’s not much of a stretch to work out what it is. And, of course, there’s always the likes of services like Storybook that can be used to organize component libraries as they grow.
The raw form of multimedia content can be pretty unsightly — something that needs to be kept at arm’s length by content management systems. With MDX — and its ilk — the content feels rather friendly and legible.
I think you can start to see some of the benefits of an MDX setup like this. There are two key benefits in particular that I think are worth calling out.
First and foremost, MDX doesn’t disrupt the writing and editorial flow of working with content. When we’re working with traditional code languages, even HTML, the format is cluttered with things like opening and closing tags. And it’s even more convoluted if we need the added complexity of embedding components in the content.
MDX (and Markdown, for that matter) is much less verbose. Content is a first-class citizen that takes up way less space than typical markup, making it clear and legible. And where we need the complex affordance of components, those can be dropped in without disrupting that nice editorial experience.
Another key benefit of using MDX is reusability. If, for example, I want to display the same information as images instead, each image would have to be bespoke. But we all know how inefficient it is to maintain content in raster images — it requires making edits in a completely different application, which is highly inconvenient. With an old-school approach, if I update the design of the site, I’m left having to create dozens of images in the new style.
With MDX (or an equivalent like MDsveX), I only need to make the change once, and it updates everywhere. Having done the leg work of building reusable components, I can weave them throughout the syllabus as I see fit, safe in the knowledge that updates will roll out across the board — and do it without affecting the editorial experience whatsoever.
Consider the time it would take to create images or videos representing the same thing. Fixed assets like images become a form of technical (or perhaps editorial) debt that adds up over time, while a multimedia approach that leans into components proves faster and more flexible than vanilla methods.
I just made the point that working with reusable components in MDX allows Markdown content to become more robust without affecting the content’s legibility as we author it. Using Svelte’s version of MDX, MDsveX, I was able to combine the clean, readable conventions of Markdown with the rich, interactive potential of components.
It’s only right that all my gushing about MDX and its benefits be tempered with a reality check or two. Like anything else, MDX has its limitations, and your mileage with it will vary.
That said, I believe those limitations are likely to show up when MDX is simply not the best choice for a particular project. There’s a sweet spot that MDX fills, and it’s when we need to sprinkle additional web functionality into the content. We get the best of two worlds: minimal markup and modern web features.
But if components aren’t needed, MDX is overkill; all you need then is a clean way to write content that ports nicely into HTML to be consumed by whatever app or platform displays it on the web.
Without components, MDX is akin to caring for a skinned elbow with a cast; it’s way more than what’s needed in that situation, and the returns you get from Markdown’s legibility will diminish.
Similarly, if your technical needs go beyond components, you may be looking at a more complex architecture than what MDX can support, and you would be best leaning into what works best for content in the particular framework or stack you’re using.
Code doesn’t age as well as words or images do. An MDX-esque approach does sign you up for the maintenance work of dependency updates, refactoring, and — god forbid — framework migrations. I haven’t had to face the last of those realities yet, though I’d say the first two are well worth it. Indeed, they’re good habits to keep.
Writing with MDX continues to be a learning experience for me, but it’s already made a positive impact on my editorial work.
Specifically, I’ve found that MDX improves the quality of my writing. I think more laterally about how to convey ideas.
Is what I’m saying best conveyed in words, an image, or a data visualization? Perhaps an interactive game?
There is way more potential to enhance my words with componentry than I would get with Markdown alone, opening more avenues for what I can say and how I say it.
Of course, those components do not come for free. MDX does sign you up to build those, regardless of whether you have a set of predefined components included in your framework. At the same time, I’d argue that the opportunities MDX opens up for writing greatly outweigh having to build or maintain a few components.
If MDX had been around in the age of Leonardo da Vinci, perhaps he would have reached for it in his journals. I know I’m taking a great leap of assumption here, but the complexity of what he was writing and trying to describe in technical terms with illustrations would have benefited greatly from MDX, from interactive demos of his ideas to a better writing experience overall.
Multimedia Writing
In many respects, MDX’s rich, varied way of approaching content is something that Markdown — and writing for the web in general — encourages already. We don’t think only in terms of words but of links, images, and semantic structure. MDX and its equivalents merely take the lid off the cookie jar so we can enhance our work.
Wouldn’t it be nice if… is a redundant turn of phrase on the web. There may be technical hurdles — or, in my case, skill and knowledge hurdles — but it’s a buzz to think about ways in which your thoughts can best manifest on screen.
At the same time, the simplicity of Markdown is so unintrusive. If someone wants to write content formatted in vanilla Markdown, it’s totally possible to do that without trading up to MDX.
Just having the possibility of bespoke multimedia content is enough to change the creative process. It leaves you using words because you want to, not because you have to.
Why describe the solar system when you can render an explorable one? Why have a picture of a proposed skyscraper when you can display a 3D model? Writing with MDX (or, more accurately, MDsveX) has changed my entire thought process. Potential answers to the question, How do I best get this across?, become more expansive.
Good things happen when worlds collide. New possibilities emerge when seemingly disparate things come together. Many content management systems shield writers — and writing — from code. To my mind, this is like shielding painters from wider color palettes, chefs from exotic ingredients, or sculptors from different types of tools.
Leaning into the overlap between writing and coding gets us closer to one of the web’s great joys: if you can imagine it, you can probably do it.
A couple of years ago, four JavaScript APIs landed at the bottom of awareness in the State of JavaScript survey. I took an interest in those APIs because they have so much potential to be useful but don’t get the credit they deserve. Even after a quick search, I was amazed at how many new APIs have been added to the web platform that aren’t getting their dues, lacking both awareness and broad browser support.
That situation can be a “catch-22”:
An API is interesting but lacks awareness due to incomplete support, and there is no immediate need to support it due to low awareness.
Most of these APIs are designed to power progressive web apps (PWA) and close the gap between web and native apps. Bear in mind that creating a PWA involves more than just adding a manifest file. Sure, it’s a PWA by definition, but it functions like a bookmark on your home screen in practice. In reality, we need several APIs to achieve a fully native app experience on the web. And the four APIs I’d like to shed light on are part of that PWA puzzle that brings to the web what we once thought was only possible in native apps.
You can see all these APIs in action in this demo as we go along.
1. Screen Orientation API
The Screen Orientation API can be used to sniff out the device’s current orientation. Once we know whether a user is browsing in a portrait or landscape orientation, we can use it to enhance the UX for mobile devices by changing the UI accordingly. We can also use it to lock the screen in a certain position, which is useful for displaying videos and other full-screen elements that benefit from a wider viewport.
Using the global screen object, you can access various properties the screen uses to render a page, including the screen.orientation object. It has two properties:

- type: The current screen orientation. It can be "portrait-primary", "portrait-secondary", "landscape-primary", or "landscape-secondary".
- angle: The current screen orientation angle. It can be any number from 0 to 360 degrees, but it’s normally set in multiples of 90 degrees (e.g., 0, 90, 180, or 270).

On mobile devices, if the angle is 0 degrees, the type is most often going to evaluate to "portrait" (vertical), but on desktop devices, it is typically "landscape" (horizontal). This makes the type property precise for knowing a device’s true position.

The screen.orientation object also has two methods:

- .lock(): This is an async method that takes a type value as an argument to lock the screen.
- .unlock(): This method unlocks the screen to its default orientation.

And lastly, screen.orientation fires an "orientationchange" event to let us know when the orientation has changed.
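To make the relationship between angle and type concrete, here is a small, hypothetical helper that mirrors the mapping described above. It assumes you know which orientation the device treats as its natural ("primary") one, which is exactly the part that varies between devices and why screen.orientation.type is the real source of truth:

```javascript
// Hypothetical helper: derive an orientation type string from an angle,
// assuming the device's natural ("primary") orientation is known.
function orientationTypeFromAngle(angle, primary = "portrait") {
  const secondary = primary === "portrait" ? "landscape" : "portrait";
  // Angles of 0 and 180 line up with the natural axis; 90 and 270 are rotated.
  const base = angle % 180 === 0 ? primary : secondary;
  const variant = angle === 0 || angle === 90 ? "primary" : "secondary";
  return `${base}-${variant}`;
}

console.log(orientationTypeFromAngle(0));   // "portrait-primary"
console.log(orientationTypeFromAngle(90));  // "landscape-primary"
console.log(orientationTypeFromAngle(180)); // "portrait-secondary"
```

Note that many tablets treat landscape as their natural orientation, so the same angle can map to different types on different hardware.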
Let’s code a short demo using the Screen Orientation API to know the device’s orientation and lock it in its current position.
This can be our HTML boilerplate:
<main>
<p>
Orientation Type: <span class="orientation-type"></span>
<br />
Orientation Angle: <span class="orientation-angle"></span>
</p>
<button type="button" class="lock-button">Lock Screen</button>
<button type="button" class="unlock-button">Unlock Screen</button>
<button type="button" class="fullscreen-button">Go Full Screen</button>
</main>
On the JavaScript side, we inject the screen orientation type and angle properties into our HTML.
let currentOrientationType = document.querySelector(".orientation-type");
let currentOrientationAngle = document.querySelector(".orientation-angle");
currentOrientationType.textContent = screen.orientation.type;
currentOrientationAngle.textContent = screen.orientation.angle;
Now, we can see the device’s orientation and angle properties. On my laptop, they are "landscape-primary" and 0°.
If we listen to the window’s orientationchange event, we can see how the values are updated each time the screen rotates.
window.addEventListener("orientationchange", () => {
currentOrientationType.textContent = screen.orientation.type;
currentOrientationAngle.textContent = screen.orientation.angle;
});
To lock the screen, we need to first be in full-screen mode, so we will use another extremely useful feature: the Fullscreen API. Nobody wants a webpage to pop into full-screen mode without their consent, so we need transient activation (i.e., a user click) from a DOM element to work.
The Fullscreen API has two methods:

- Document.exitFullscreen() is used from the global document object.
- Element.requestFullscreen() makes the specified element and its descendants go full-screen.

We want the entire page to be full-screen, so we invoke the method from the root element at the document.documentElement object:
const fullscreenButton = document.querySelector(".fullscreen-button");
fullscreenButton.addEventListener("click", async () => {
// If it is already in full-screen, exit to normal view
if (document.fullscreenElement) {
await document.exitFullscreen();
} else {
await document.documentElement.requestFullscreen();
}
});
Next, we can lock the screen in its current orientation:
const lockButton = document.querySelector(".lock-button");
lockButton.addEventListener("click", async () => {
try {
await screen.orientation.lock(screen.orientation.type);
} catch (error) {
console.error(error);
}
});
And do the opposite with the unlock button:
const unlockButton = document.querySelector(".unlock-button");
unlockButton.addEventListener("click", () => {
screen.orientation.unlock();
});
Yes! We can indeed check page orientation via the orientation media feature in a CSS media query. However, media queries compute the current orientation by checking whether the width is bigger than the height (landscape) or smaller (portrait). By contrast, the Screen Orientation API checks the screen rendering the page regardless of the viewport dimensions, making it resistant to inconsistencies that may crop up with page resizing.
You may have noticed how PWAs like Instagram and X force the screen to be in portrait mode even when the native system orientation is unlocked. It is important to note that this behavior isn’t achieved through the Screen Orientation API, but by setting the orientation property in the manifest.json file to the desired orientation type.
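For illustration, a minimal manifest.json sketch that locks a PWA to portrait might look like this (the name and display values here are placeholders):

```json
{
  "name": "Example PWA",
  "display": "standalone",
  "orientation": "portrait"
}
```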
2. Device Orientation API
Another API I’d like to poke at is the Device Orientation API. It provides access to a device’s gyroscope sensors to read the device’s orientation in space, something used all the time in mobile apps, mainly games. The API makes this happen with a deviceorientation event that triggers each time the device moves. It has the following properties:

- event.alpha: Orientation along the Z-axis, ranging from 0 to 360 degrees.
- event.beta: Orientation along the X-axis, ranging from -180 to 180 degrees.
- event.gamma: Orientation along the Y-axis, ranging from -90 to 90 degrees.

In this case, we will make a 3D cube with CSS that can be rotated with your device! The full instructions I used to make the initial CSS cube are credited to David DeSandro and can be found in his introduction to 3D transforms.
To rotate the cube, we change its CSS transform property according to the device orientation data:
const currentAlpha = document.querySelector(".currentAlpha");
const currentBeta = document.querySelector(".currentBeta");
const currentGamma = document.querySelector(".currentGamma");
const cube = document.querySelector(".cube");
window.addEventListener("deviceorientation", (event) => {
currentAlpha.textContent = event.alpha;
currentBeta.textContent = event.beta;
currentGamma.textContent = event.gamma;
cube.style.transform = `rotateX(${event.beta}deg) rotateY(${event.gamma}deg) rotateZ(${event.alpha}deg)`;
});
This is the result:
3. Vibration API
Let’s turn our attention to the Vibration API, which, unsurprisingly, allows access to a device’s vibrating mechanism. This comes in handy when we need to alert users with in-app notifications, like when a process is finished or a message is received. That said, we have to use it sparingly; no one wants their phone blowing up with notifications.
There’s just one method that the Vibration API gives us, and it’s all we need: navigator.vibrate().

vibrate() is available globally from the navigator object and takes an argument for how long a vibration lasts in milliseconds. The argument can be either a number or an array of numbers representing a pattern of vibrations and pauses.

navigator.vibrate(200); // vibrate 200ms
navigator.vibrate([200, 100, 200]); // vibrate 200ms, pause 100ms, vibrate 200ms
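Hand-writing longer patterns gets tedious, so here is a small, hypothetical helper that builds an alternating vibrate/pause pattern array of the kind navigator.vibrate() accepts:

```javascript
// Hypothetical helper: build a [vibrate, pause, vibrate, ...] pattern array
// from a pulse length, a pause length, and a pulse count.
function pulsePattern(pulseMs, pauseMs, count) {
  const pattern = [];
  for (let i = 0; i < count; i++) {
    if (i > 0) pattern.push(pauseMs); // insert a pause between pulses
    pattern.push(pulseMs);
  }
  return pattern;
}

console.log(pulsePattern(200, 100, 3)); // [ 200, 100, 200, 100, 200 ]
// In the browser: navigator.vibrate(pulsePattern(200, 100, 3));
```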
Let’s make a quick demo where the user inputs how many milliseconds they want their device to vibrate and buttons to start and stop the vibration, starting with the markup:
<main>
<form>
<label for="milliseconds-input">Milliseconds:</label>
<input type="number" id="milliseconds-input" value="0" />
</form>
<button class="vibrate-button">Vibrate</button>
<button class="stop-vibrate-button">Stop</button>
</main>
We’ll add an event listener for a click and invoke the vibrate() method:
const vibrateButton = document.querySelector(".vibrate-button");
const millisecondsInput = document.querySelector("#milliseconds-input");
vibrateButton.addEventListener("click", () => {
navigator.vibrate(millisecondsInput.value);
});
To stop vibrating, we override the current vibration with a zero-millisecond vibration.
const stopVibrateButton = document.querySelector(".stop-vibrate-button");
stopVibrateButton.addEventListener("click", () => {
navigator.vibrate(0);
});
4. Contact Picker API
It used to be that only native apps could connect to a device’s contacts. But now we have the fourth and final API I want to look at: the Contact Picker API.
The API grants web apps access to the device’s contact lists. Specifically, we get the contacts.select() async method available through the navigator object, which takes the following two arguments:

- properties: This is an array containing the information we want to fetch from a contact card, e.g., "name", "address", "email", "tel", and "icon".
- options: This is an object that can only contain the multiple boolean property to define whether or not the user can select one or multiple contacts at a time.

I’m afraid that browser support is next to zilch on this one, limited to Chrome Android, Samsung Internet, and Android’s native web browser at the time I’m writing this.
We will make another demo to select and display the user’s contacts on the page. Again, starting with the HTML:
<main>
<button class="get-contacts">Get Contacts</button>
<p>Contacts:</p>
<ul class="contact-list">
<!-- We’ll inject a list of contacts -->
</ul>
</main>
Then, in JavaScript, we first construct our elements from the DOM and choose which properties we want to pick from the contacts.
const getContactsButton = document.querySelector(".get-contacts");
const contactList = document.querySelector(".contact-list");
const props = ["name", "tel", "icon"];
const options = {multiple: true};
Now, we asynchronously pick the contacts when the user clicks the getContactsButton.
const getContacts = async () => {
try {
const contacts = await navigator.contacts.select(props, options);
} catch (error) {
console.error(error);
}
};
getContactsButton.addEventListener("click", getContacts);
Using DOM manipulation, we can then append a list item, along with an icon, to the contactList element for each contact.
const appendContacts = (contacts) => {
contacts.forEach(({name, tel, icon}) => {
const contactElement = document.createElement("li");
contactElement.innerText = `${name}: ${tel}`;
contactList.appendChild(contactElement);
});
};
const getContacts = async () => {
try {
const contacts = await navigator.contacts.select(props, options);
appendContacts(contacts);
} catch (error) {
console.error(error);
}
};
getContactsButton.addEventListener("click", getContacts);
Appending an image is a little tricky since we will need to convert it into a URL and append it for each item in the list.
const getIcon = (icon) => {
if (icon.length > 0) {
const imageUrl = URL.createObjectURL(icon[0]);
const imageElement = document.createElement("img");
imageElement.src = imageUrl;
return imageElement;
}
};
const appendContacts = (contacts) => {
contacts.forEach(({name, tel, icon}) => {
const contactElement = document.createElement("li");
contactElement.innerText = `${name}: ${tel}`;
contactList.appendChild(contactElement);
// getIcon() returns undefined when the contact has no icon
const imageElement = getIcon(icon);
if (imageElement) contactElement.appendChild(imageElement);
});
};
const getContacts = async () => {
try {
const contacts = await navigator.contacts.select(props, options);
appendContacts(contacts);
} catch (error) {
console.error(error);
}
};
getContactsButton.addEventListener("click", getContacts);
And here’s the outcome:
Note: The Contact Picker API will only work if the context is secure, i.e., the page is served over https:// or wss:// URLs.
Conclusion
There we go: four web APIs that I believe would empower us to build more useful and robust PWAs, but that have slipped under the radar for many of us. This is, of course, due to inconsistent browser support, and I hope this article brings these APIs some awareness so we have a better chance of seeing them in future browser updates.
Aren’t they interesting? We saw how much control we have over the orientation of a device and its screen, as well as the level of access we get to a device’s hardware features (i.e., vibration) and to information from other apps that we can use in our own UI.
But as I said much earlier, there’s a sort of infinite loop where a lack of awareness begets a lack of browser support. So, while the four APIs we covered are super interesting, your mileage will inevitably vary when it comes to using them in a production environment. Please tread cautiously and refer to Caniuse for the latest support information, or check for your own devices using WebAPI Check.
Welcome to our roundup of the best new fonts we’ve found online in the last month. This month, there are notably fewer revivals and serifs and a lot more chunky sans serifs than usual. Enjoy!
If you ask anyone working in the design or printing sector about the use of recycled paper, you will most likely receive a response along these lines: printing on recycled paper is beneficial to the environment. Although that is undeniably true, the discourse rarely moves beyond this point to discuss the myriad ways in which using recycled paper in commercial printing benefits everyone involved.
If you are interested in reducing your overall carbon footprint or aligning yourself more closely with the interests and values of your consumers, selecting recycled business cards for commercial print projects such as direct mail, catalogs, brochures, sales and marketing collateral, and other similar projects offers a range of benefits.
Let’s take a quick look at five benefits of choosing recycled paper for commercial printing to help you better understand how and why it is important.
Did you know that the Environmental Protection Agency (EPA) identifies landfills as the single largest source of methane emissions into the atmosphere and that the breakdown of paper is the major source of methane produced by landfills?
Because methane can trap more than twenty times the amount of heat that carbon dioxide does, it is one of the primary contributors to climate change. Therefore, it is essential to take measures to reduce methane emissions to ensure the health of the ecosystem.
However, selecting a recycled business card helps slow the rate at which landfills are filling up, which in turn lessens the quantity of damaging greenhouse gases they generate. About eighty percent of the paper we throw away ends up in a landfill without ever being recycled.
In addition, this highlights the significance of recycling used paper and paper goods rather than simply disposing of them in the garbage can closest to you.
When recycled paper is compared to virgin paper, it typically has a higher opacity, which opens up some fascinating opportunities for cost savings and more room for creative expression. Put simply, the opacity of a business card is its ability to block the passage of light from one side to the other, and it is essential to consider this ability when printing projects such as books or pamphlets.
Since recycled business cards have a higher opacity than virgin paper, it is possible to print on a lighter paper stock without compromising the quality of the print. Both advantages are present here: you save on paper weight while preserving print quality.
Additionally, recycled fiber’s higher opacity makes recycled cards a more versatile option for designers and printers.
You might believe that the recycling process requires more resources than the production of conventional cards due to operations such as de-inking, shredding, and pulping, yet the manufacture of recycled paper uses around 26% less energy than the creation of virgin fiber.
Selecting recycled business cards results in about forty percent less wastewater being produced compared to virgin paper. This helps alleviate the strain placed on water treatment facilities and decreases the environmental effects caused by the transportation and disposal of wastewater.
However, the trade-off is that recycled business cards can be a bit more expensive than conventional cards. That’s why partnering with a supplier that only harvests from sustainably managed forests—one that plants new trees to replace the ones that are harvested for card production—can be a happy medium between controlling costs and making responsible choices for the long-term health of our environment.
According to a recent article published in Forbes, an increasing number of consumer demographics are placing sustainability and environmental responsibility at the top of their priority list when it comes to purchasing items or connecting with brands.
Using recycled business cards for print projects can be a powerful differentiator and strong selling point, and partnering with a paper provider that recognizes and values this shift in consumer sentiment is essential to align with what consumers want. As consumers grow more environmentally conscious, recycled paper becomes an increasingly natural choice for print projects.
Using recycled paper in commercial printing helps minimize the number of trees that are cut down, which in turn contributes to the preservation of our forests. That much is the obvious explanation.
However, there is more to it than that. A healthy forest system that is not excessively harvested to produce paper results in less soil erosion, the preservation of biodiversity, the maintenance of habitats for wildlife, and a reduction in the amount of greenhouse gasses released into the environment during the harvesting process.
Additionally, recycled business cards created from post-consumer material can be reused numerous times, which increases the environmental value of selecting a recycled fiber and reduces the impact of forest degradation.
In this regard, purchasing business cards produced from trees that originate from a managed forest can also reduce forest degradation and preserve a vibrant and healthy environment. Working with a card provider that sources its products from managed forests not only helps to create employment opportunities but also promotes the economies of the communities in which managed forests are located.
Featured image by rivage on Unsplash
The post How Can Recycled Business Cards Boost Your Brand In 2024? appeared first on noupe.
AI has influenced all industries, and the creative realm is no exception, including writing. While AI tools promise efficiency and a break from the drudgery of repetitive tasks, they also bring up a big question: Are we sacrificing creativity for convenience?
It’s essential to explore how AI tools might reshape the writing craft, and not always for the better. There’s something special about a piece crafted by a human essay writer that AI can’t replicate. For students seeking help with their essays, a human writer’s personal touch, nuanced understanding, and creative flair are irreplaceable. AI might be able to generate a complete essay on a given topic rapidly, but can it engage a reader’s emotions or offer original ideas as effectively? Let’s dive into the heart of this discussion.
You’ve probably noticed how some content nowadays feels a bit… off. That’s because it was crafted by AI, which only repeats what’s already out there. The reliability of information is another concern. A 2023 study revealed that heavy reliance on AI for writing tasks reduces the accuracy of the results by 25.1%. How does AI write? AI tools analyze huge chunks of existing texts to produce content. While this can make writing faster, it also means the content can end up looking and sounding the same. Where’s the fun in reading something that feels like deja vu?
When writers lean too much on AI, they risk dulling their ability to think originally and expressively. For students, this is particularly risky. This is why AI is bad for education. Schools are supposed to be playgrounds for the mind, places where you can experiment with ideas and find your voice. If AI does too much of the work, students might miss out on developing these crucial skills.
AI tools are gaining popularity fast. In early 2023, ChatGPT set a record by reaching over 100 million monthly users in just two months after launch. While it has some undeniable advantages, this surge in the popularity of AI tools has caused certain challenges as well.
AI is all about algorithms, which means it loves patterns. But great writing isn’t just about sticking to patterns—it’s about breaking them sometimes. AI’s tendency to standardize could mean your next essay sounds like everyone else’s. Remember, the most memorable pieces of writing are those that reflect a unique perspective, something that AI just can’t mimic.
And it’s not just your voice that’s at risk. As more people use AI tools, there’s a tendency for all writing to start sounding similar. Think about it: if everyone uses the same tool, doesn’t it make sense that everyone’s output might start to look the same? This doesn’t just stifle individuality; it could flatten the entire landscape of literary styles, turning vibrant variety into boring uniformity.
Here’s a not-so-fun fact: when AI takes over the brainy bits of writing, you might disengage from the process. How does AI affect writers? It’s easy to become passive, watching as the AI assembles your thoughts. This is especially bad news for students, who need to flex their mental muscles by tackling complex writing tasks. Critical thinking is like a muscle—if you don’t use it, you lose it.
When AI scripts the show, the range of voices in literature might begin to narrow. Diversity in writing isn’t just about different themes or genres; it’s about different ways of seeing the world. If AI keeps us locked into a certain way of writing, we might start missing out on those fresh, exciting perspectives that come from real human experiences.
Looking ahead, the impact of AI on professional writing careers looks equally concerning. How is AI affecting education? Will future writers need to conform to AI standards to succeed? If so, we might see a drop in the quality and variety of professional writing as the push for AI efficiency overtakes the need for human creativity and insight.
Let’s take a look at a real example. In some classrooms, AI is bad for education because students use AI to polish their essays. At first glance, these essays look perfect. However, over time, teachers notice something troubling—the students’ work begins to lack depth and originality. They’re not learning to craft arguments or express unique ideas. Instead, they’re just learning to edit what AI produces. It’s a bit like painting by numbers: the end result might look good, but the process isn’t creative.
For learners, the stakes are high. Why is AI bad for students? Learning to write creatively is not just an academic exercise—it’s a way to learn how to think, argue, and persuade. If AI starts doing too much of this work, students could end up with a cookie-cutter education that fails to inspire or challenge them. What’s the point of learning to write if you’re not learning to think?
To keep AI in its rightful place as a tool rather than a replacement, writers need to focus on developing their own skills alongside the technology. Use AI to handle the repetitive parts of writing, like checking grammar, but make sure the ideas and the voice are unmistakably yours.
Schools and colleges have a crucial role to play in combating the negative effects of AI in education. They should encourage curricula that value creativity and individuality over the ability to use tools. Besides, online essay writer services can also help, offering students nuanced writing support. It’s about teaching students how to use technology wisely, enhancing their skills without overshadowing them.
Lastly, never underestimate the importance of the human touch. Writers need to stay in the driver’s seat, using AI as a navigator rather than letting it take the wheel. This means always being ready to question, modify, and ultimately oversee the content that AI helps produce, ensuring that each piece of writing reflects true human thought and creativity.
As we look toward the future, it’s clear that AI will change many aspects of our lives, but here’s the good news: AI will not replace writers. The essence of writing—conveying emotion, capturing the human experience, and sparking imagination—is inherently human. No matter how advanced AI gets, the depth, emotion, and personal touch you bring to your writing are yours alone. So use AI, but make every piece you write your own.
Featured Image by Parker Byrd on Unsplash
The post Creativity Crisis: Why Is AI Bad for Original Thinking in Writing? appeared first on noupe.
The modern digital landscape continues to reshape due to new artificial intelligence technologies. Its usage is already quite common in the user experience: customers interact with chatbots and virtual assistants, receive personalized recommendations, etc. That is possible due to the effective UX design resulting from AI-driven analytics.
Artificial intelligence assists experts during different stages of design thinking. However, 97% of professionals used AI mainly to process information gathered from users.
Below, we look at how AI-driven analytics shape UX design — and where the technology still needs a human hand. Let’s get started!
Users of the digital world utilize various apps, software, and services on a regular basis. Customer satisfaction directly influences the company’s metrics, such as ROI, customer retention, etc. AI-driven analytics can be very helpful in providing evidence-based solutions. However, that requires artificial intelligence to undergo several data-processing stages.
Analytics requires data for processing. Thus, initially collect useful information on users, which falls into different categories:
A company accumulates these volumes of data from many sources, not just mobile apps and websites. Internet of Things devices provide relevant information too.
With enough data on users, artificial intelligence processes it. The goal is to define any patterns, trends, correlations, and anomalies. Such activity can show specific behavioral tendencies that are common within the audience.
These are the insights that UX designers can use. They show what actions users perform the most and in what way. Meanwhile, experts can improve the existing user interface to deliver a better experience.
To enhance this process, UX designers often collaborate with experts in LLM data analytics to interpret complex user behaviors and interactions. Incorporating data analytics allows for a more sophisticated analysis of large datasets, leading to more effective and user-centric design improvements.
The audience consists of unique individuals who share some similar features. Their differentiation into separate categories makes it easier to match their needs. Such a task requires lots of processing hours for humans, but not for AI.
As a result, designers can bring new features and interface solutions to smartphone users. Meanwhile, desktop users’ issues won’t be missed either, and experts can approach and solve them in a tailored manner.
Predicting a user’s behavior requires taking into account multiple parameters. That is what artificial intelligence can successfully deal with. Through data analysis, it develops predictive models that may forecast the way users will interact. Such insights are useful to designers as they can:
A/B testing is a common practice that allows comparing one UX design with another. Quite often, this is a long-term process that helps to understand user behavior better. AI optimization of testing saves company resources, allowing designers to focus on improving the user experience itself.
As artificial intelligence never sleeps, it can evaluate incoming data in real-time. That greatly benefits designers of UX in multiple ways:
Artificial intelligence greatly boosts the interaction between humans and computers. Natural language processing involves comprehension of written, spoken, and even sign languages. AI understands not just the meaning of words, but also their style, context, and emotions. Such data allows designers to reproduce human-like communication via virtual assistants and chatbots. As a result, users obtain an elevated experience with a personalized approach.
Experts come up with UX designs that are effective and convenient to use. Meanwhile, AI is capable of interpreting complex data and delivering new solutions that:
Artificial intelligence tackles aspects of user experience that have been less studied before. That results in new approaches to creating top-notch UX design.
AI-powered tools already exist and help with design tasks. They automate various minor processes and steps, making the entire workflow easier. With time, they will become even better at understanding goals and will provide more precise solutions.
Figma, Adobe Firefly and Illustrator, Sketch, Axure RP, and other software offer automated design assistance as built-in features or plugins. Thus, designers can deliver high-quality UX with less effort.
Modern user experience design focuses on the elevation of personalization. An AI-driven approach greatly enhances this process, and it is capable of understanding and covering most audience preferences. That is the result of data processing on user purchasing behavior, browsing history, demographic details, etc.
Besides a satisfactory experience, the personalized design enhances conversion rates, positive reviews, and brand recognition.
Digital products and services always face challenges in remaining accessible to every user. Common interfaces are easy to navigate, but not for individuals with disabilities. Their experience is completely different. Therefore, modern AI-driven UX design has become more inclusive.
Artificial intelligence tools recognize visual and audio content and then interpret it for a user. That leads to the creation of inclusive UX designs that are easy to navigate. They also assist users with visual, auditory, cognitive, or motor impairments to interact with interfaces in the most effective ways:
Most websites, applications, and services utilize a common graphical interface design. However, AI has made it possible to successfully implement voice commands in navigation. This requires processing spoken language and comprehending its meaning correctly, regardless of poor pronunciation, dialects, grammar mistakes, etc.
Machine learning algorithms facilitate the improvement of language recognition accuracy. You can already encounter VUI in smart speakers, IoT devices, automotive systems, and virtual assistants.
To ensure that VUIs are as intuitive and user-friendly as their graphical counterparts, businesses increasingly turn to specialized ui ux design services. These ui ux design services focus on creating seamless, engaging voice interactions that cater to diverse user needs and preferences.
Artificial intelligence successfully offers and implements its solutions to enhance the user experience via innovative designs. Nevertheless, it is still far from being perfect. The use of AI has various concerns and issues that require human intervention.
Teaching AI is a huge challenge that requires significant resources. First, you need enough professionals to provide valuable content for learning. Next, these designers must have some skills and understanding of machine learning. Then, with AI analytics, it is possible to obtain some results.
As for the quality of the final product, it may vary depending on algorithms, learning data, and implementation.
Artificial intelligence is still a new technology for many experts. Making a shift to unknown or poorly understood tools doesn’t provide confidence. It requires time to foster the mindset of collaboration between user experience designers and AI-driven solutions.
Another reason to resist changes is the fear of job displacement. That reduces the willingness among experts to cooperate and teach artificial intelligence how to solve different UX tasks.
Machine learning requires data to learn, which is collected from users. Therefore, companies that develop artificial intelligence solutions store large volumes of information, which requires strong protection. That leads to the lack of trust in privacy and security measures that AI-driven design tools utilize.
AI-driven UX design requires developing a completely different workflow. It requires time for experts to learn how to utilize the tool effectively. Moreover, it may lack compatibility with existing software. As the implementation of AI leads to reduced work efficiency for a while, companies are less interested in such technologies.
Creativity is a strength of the human mind. AI-driven analytics still struggle to produce creative outcomes of enough quality. That is due to the limits of machine learning algorithms. They can absorb professional techniques and methods of UX design, but they cannot come up with original ideas. Therefore, AI requires collaboration with humans to provide decent results.
The training process for AI is very complicated. It requires filtering the incoming information to avoid mimicking of inappropriate human experience. Thus, bias and discriminatory outcomes may occur as a result of artificial intelligence processing. To avoid that, designers need additional effort to teach AI about equity, fairness, diversity, etiquette, etc.
Artificial intelligence continues to evolve and become better. With its bulk analytics, it can highlight patterns in user behavior and address issues appropriately. That is what we humans may not notice. AI-driven user experience design allows experts to meet the needs of the audience, even though there are some challenges. As artificial intelligence will improve significantly in the future, let’s be prepared to use it in our favor.
Featured Image by Pavel Danilyuk on Pexels
The post AI-Driven Analytics for User Experience Design appeared first on noupe.
Container queries are often considered a modern approach to responsive web design, a space where traditional media queries have long been the gold standard. The reason: container queries let us create layouts whose elements respond to, say, the width of their containers rather than the width of the viewport.
```css
.parent {
  container-name: hero-banner;
  container-type: inline-size;
  /* or container: hero-banner / inline-size; */
}

.child {
  display: flex;
  flex-direction: column;
}

/* When the container is greater than 60 characters... */
@container hero-banner (width > 60ch) {
  /* Change the flex direction of the .child element. */
  .child {
    flex-direction: row;
  }
}
```
```css
.cards {
  container-name: card-grid;
  container-type: inline-size;

  /* Shorthand */
  container: card-grid / inline-size;
}
```
This example registers a new container named `card-grid` that can be queried by its `inline-size`, which is a fancy way of saying its “width” when we’re working in a horizontal writing mode. It’s a logical property. Otherwise, “inline” would refer to the container’s “height” in a vertical writing mode.
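Since `inline-size` is a logical mapping, a quick sketch of how the queried axis flips with the container’s writing mode (class names are hypothetical):

```css
/* In a horizontal writing mode, inline-size maps to width... */
.card-grid {
  container: card-grid / inline-size;
}

/* ...but in a vertical writing mode, the inline axis runs vertically,
   so the same query responds to the container's height instead. */
.card-grid.vertical {
  writing-mode: vertical-rl;
}

@container card-grid (inline-size > 40ch) {
  .card {
    display: flex;
  }
}
```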
- The `container-name` property is used to register an element as a container that applies styles to other elements based on the container’s size and styles.
- The `container-type` property is used to register an element as a container that can apply styles to other elements when it meets certain conditions.
- The `container` property is a shorthand that combines the `container-name` and `container-type` properties into a single declaration.
- The `container-name` property is optional. An unnamed container will match any container query that does not target a specific container, meaning it could match multiple conditions.
- The `container-type` property is required if we want to query a container by its `size` or `inline-size`. The `size` refers to the container’s inline or block direction, whichever is larger. The `inline-size` refers to the container’s width in the default horizontal writing mode.
- The `container-type` property’s default value is `normal`. And by “normal,” that means all elements are containers by default, only they are called Style Containers and can only be queried by their applied styles. For example, we can query a container’s `background-color` value and apply styles to other elements when the value is a certain color value. Note that we cannot style the container based on what we query of it — e.g., we can’t change the container’s own `background-color` when it is a certain size — but we can change the `background-color` of any element inside the container. “You cannot style what you query” is a way to think about it.

```css
@container my-container (width > 60ch) {
  article {
    flex-direction: row;
  }
}
```
- The `@container` at-rule informs the browser that we are working with a container query rather than, say, a media query (i.e., `@media`).
- The `my-container` part in there refers to the container’s name, as declared in the container’s `container-name` property.
- The `article` element represents an item in the container, whether it’s a direct child of the container or a deeper descendant. Either way, the element must be in the container and it will get styles applied to it when the queried condition is matched.
- The `width` can be queried when the `container-type` property is set to either `size` or `inline-size`. That’s because `size` can query the element’s `width` or `height`; meanwhile, `inline-size` can only refer to the `width`.
- Besides `width` (i.e., `inline-size`), there’s an element’s `aspect-ratio`, `block-size` (i.e., `height`), and orientation (e.g. `portrait` and `landscape`).
- Conditions can use “greater than” (`>`) and “less than” (`<`), but there is also “equals” (`=`) and combinations of the three, such as “more than or equal to” (`>=`) and “less than or equal to” (`<=`).
- Multiple conditions can be combined with the logical keywords `and`, `or`, and `not`.
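For instance, the comparison operators and logical keywords above can be combined in a single query (the container name and class are hypothetical):

```css
/* Apply grid styles only in containers that are both
   wide enough and in landscape orientation. */
@container sidebar (width >= 30ch) and (orientation: landscape) {
  .widget {
    display: grid;
    grid-template-columns: 1fr 1fr;
  }
}
```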
container-name

```css
container-name: none | <custom-ident>+;
```

- `none`: The element does not have a container name. This is true by default, so you will likely never use this value, as its purpose is purely to set the property’s default behavior.
- `<custom-ident>`: This is the name of the container, which can be anything, except for words that are reserved for other functions, including `default`, `none`, `at`, `no`, and `or`. Note that the names are not wrapped in quotes.

Initial value: `none`. Computed value: `none` or an ordered list of identifiers.

container-type
```css
container-type: normal | size | inline-size;
```

- `normal`: This indicates that the element is a container that can be queried by its styles rather than size. All elements are technically containers by default, so we don’t even need to explicitly assign a `container-type` to define a style container.
- `size`: This is if we want to query a container by its size, whether we’re talking about the inline or block direction.
- `inline-size`: This allows us to query a container by its inline size, which is equivalent to `width` in a standard horizontal writing mode. This is perhaps the most commonly used value, as we can establish responsive designs based on element size rather than the size of the viewport as we would normally do with media queries.

Initial value: `normal`
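As a sketch of the difference between the two size values (selectors are hypothetical): `size` lets us test both axes, but the container then needs its block size constrained, since size containment removes content-based sizing in both directions; `inline-size` only affects the width axis.

```css
/* Query both width and height; give the element a constrained
   height, because size containment stops it from growing with
   its content in either axis. */
.sidebar {
  container-type: size;
  height: 100dvh;
}

/* Query width only; height continues to size as normal. */
.main-content {
  container-type: inline-size;
}

@container (height > 500px) {
  .sidebar-nav {
    position: sticky;
    top: 0;
  }
}
```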
container

```css
container: <'container-name'> [ / <'container-type'> ]?
```

If `<'container-type'>` is omitted, it is reset to its initial value of `normal`, which defines a style container instead of a size container. In other words, all elements are style containers by default, unless we explicitly set the `container-type` property value to either `size` or `inline-size`, which allows us to query a container’s size dimensions.

Initial value: `none / normal`
| Unit | Name | Equivalent to… |
|---|---|---|
| `cqw` | Container query width | 1% of the queried container’s width |
| `cqh` | Container query height | 1% of the queried container’s height |

| Unit | Name | Equivalent to… |
|---|---|---|
| `cqi` | Container query inline size | 1% of the queried container’s inline size, which is its width in a horizontal writing mode. |
| `cqb` | Container query block size | 1% of the queried container’s block size, which is its height in a horizontal writing mode. |

| Unit | Name | Equivalent to… |
|---|---|---|
| `cqmin` | Container query minimum size | The value of `cqi` or `cqb`, whichever is smaller. |
| `cqmax` | Container query maximum size | The value of `cqi` or `cqb`, whichever is larger. |
Container Style Queries are another piece of the CSS Container Queries puzzle. Instead of querying a container by its `size` or `inline-size`, we can query a container’s CSS styles. And when the container’s styles meet the queried condition, we can apply styles to other elements. This is the sort of “conditional” styling we’ve wanted on the web for a long time: if these styles match over here, then apply these other styles over there.
CSS Container Style Queries are only available as an experimental feature in modern web browsers at the time of this writing, and even then, style queries are only capable of evaluating CSS custom properties (i.e., variables). In some browsers, the feature must be enabled through feature flags.
This browser support data is from Caniuse, which has more detail. A number indicates that browser supports the feature at that version and up.
| Chrome | Firefox | IE | Edge | Safari |
|---|---|---|---|---|
| 128 | No | No | 125 | TP |

| Android Chrome | Android Firefox | Android | iOS Safari |
|---|---|---|---|
| 125 | No | 125 | No |
```css
article {
  container-name: card;
}
```
That’s really it! Actually, we don’t even need the `container-name` property unless we need to target it specifically. Otherwise, we can skip registering a container altogether.
And if you’re wondering why there’s no `container-type` declaration, that’s because all elements are already considered containers. It’s a lot like how all elements are `position: static` by default; there’s no need to declare it. The only reason we would declare a `container-type` is if we want a CSS Container Size Query instead of a CSS Container Style Query.
So, really, there is no need to register a container style query because all elements are already style containers right out of the box! The only reason we’d declare `container-name`, then, is simply to help select a specific container by name when writing a style query.
```css
@container style(--bg-color: #000) {
  p { color: #fff; }
}
```
In this example, we’re querying any matching container (because all elements are style containers by default).
Notice how the syntax is a lot like a traditional media query? The biggest difference is that we are writing `@container` instead of `@media`. The other difference is that we’re calling a `style()` function that holds the matching style condition. This way, a style query is differentiated from a size query, although there is no corresponding `size()` function.
In this instance, we’re checking if a certain custom property named `--bg-color` is set to black (`#000`). If the variable’s value matches that condition, then we’re setting the paragraph (`p`) text `color` to white (`#fff`).
```css
.card-wrapper {
  --bg-color: #000;
}

.card {
  @container style(--bg-color: #000) {
    /* Custom CSS */
  }
}
```
```css
@container style(--featured: true) {
  article {
    grid-column: 1 / -1;
  }

  @container style(--theme: dark) {
    article {
      --bg-color: #000;
      --text: #fff;
    }
  }
}
```
CSS Container Queries are defined in the CSS Containment Module Level 3 specification, which is currently in Editor’s Draft status at the time of this writing.
Browser support for CSS Container Size Queries is great. It’s just style queries that are lacking support at the time of this writing.
This browser support data is from Caniuse, which has more detail. A number indicates that browser supports the feature at that version and up.
| Chrome | Firefox | IE | Edge | Safari |
|---|---|---|---|---|
| 106 | 110 | No | 106 | 16.0 |

| Android Chrome | Android Firefox | Android | iOS Safari |
|---|---|---|---|
| 125 | 126 | 125 | 16.0 |
In this example, a “card” component changes its layout based on the amount of available space in its container.
This example is a lot like those little panels for signing up for an email newsletter. Notice how the layout changes three times according to how much available space is in the container. This is what makes CSS Container Queries so powerful: you can quite literally drop this panel into any project and the layout will respond as it should, as it’s based on the space it is in rather than the size of the browser’s viewport.
This component displays a series of “steps” much like a timeline. In wider containers, the stepper displays steps horizontally. But if the container becomes small enough, the stepper shifts things around so that the steps are vertically stacked.
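A minimal sketch of that stepper behavior (class and container names are hypothetical):

```css
.stepper {
  container: stepper / inline-size;
}

.steps {
  display: flex;
  flex-direction: row; /* horizontal steps in wide containers */
}

/* When the container narrows, stack the steps vertically. */
@container stepper (width < 40ch) {
  .steps {
    flex-direction: column;
  }
}
```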
Sometimes we like to decorate buttons with an icon to accentuate the button’s label with a little more meaning and context. And sometimes we don’t know just how wide that button will be in any given context, which makes it tough to know when exactly to hide the icon or re-arrange the button’s styles when space becomes limited. In this example, an icon is displayed at the right edge of the button as long as there’s room to fit it beside the button label. If room runs out, the button becomes a square tile that stacks the icon above the label. Notice how the `border-radius` is set in container query units, `4cqi`, which is equal to 4% of the container’s inline size (i.e., width) and results in rounder edges as the button grows in size.
Pagination is a great example of a component that benefits from CSS Container Queries because, depending on the amount of space we have, we can choose to display links to individual pages, or hide them in favor of only two buttons, one to paginate to older content and one to paginate to newer content.
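One way to sketch that pagination pattern (class and container names are hypothetical):

```css
.pagination {
  container: pagination / inline-size;
}

/* In narrow containers, hide the numbered page links and
   keep only the older/newer buttons. */
@container pagination (width < 480px) {
  .page-number {
    display: none;
  }

  .page-prev,
  .page-next {
    display: inline-flex;
  }
}
```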
CSS Container Queries originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
Arranging content in an easily accessible way is the backbone of any user-friendly website. A good website will present that information well while conveying a coherent brand identity. A great site will go one step further to create an emotional response in the user.