CSS Meditation #7: Nobody is perf-ect.
originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
CSS Meditation #6: The color space is always calc(rgb(0 255 0)+er) on the other side of the fence.
CSS Meditation #5: :where(:is(.my-mind))
CSS Meditation #4: Select, style, adjust. Select, style, adjust. Select, sty…
Add some summer sizzle to your design projects with trendy website elements. Learn what’s trending and how to use these styles.
Prior to the World Wide Web, the act of writing remained consistent for centuries. Words were put on paper, and occasionally, people would read them. The tools might change — quills, printing presses, typewriters, pens, what have you — and an adventurous author might throw in imagery to complement their copy.
We all know that the web shook things up. With its arrival, writing could become interactive and dynamic. As web development progressed, the creative possibilities of digital content grew — and continue to grow — exponentially. The line between web writing and web technologies is blurry these days, and by and large, I think that’s a good thing, though it brings its own challenges. As a sometimes-engineer-sometimes-journalist, I straddle those worlds more than most and have grown to view the overlap as the future.
Writing for the web is different from traditional forms of writing. It is not a one-size-fits-all process. I’d like to share the benefits of writing content in digital formats like MDX using a personal project of mine as an example. And, by the end, my hope is to convince you of the greater writing benefits of MDX over more traditional formats.
A Little About Markdown
At its most basic, MDX is Markdown with components in it. For those not in the know, Markdown is a lightweight markup language created by John Gruber in 2003, and it’s everywhere today. GitHub, Trello, Discord — all sorts of sites and services use it. It’s especially popular for authoring blog posts, which makes sense as blogging is very much the digital equivalent of journaling. The syntax doesn’t “get in the way,” and many content management systems support it.
Markdown’s goal is an “easy-to-read and easy-to-write plain text format” that can readily be converted into XHTML/HTML if needed. Since its inception, Markdown was supposed to facilitate a writing workflow that integrated the physical act of writing with digital publishing.
We’ll get to actual examples later, but for the sake of explanation, compare a block of text written in HTML to the same text written in Markdown.
HTML is a pretty legible format as it is:
<h2>Post Title</h2>
<p>This is an example block of text written in HTML. We can link things up like this, or format the code with <strong>bolding</strong> and <em>italics</em>. We can also make lists of items:</p>
<ul>
<li>Like this item</li>
<li>Or this one</li>
<li>Perhaps a third?</li>
</ul>
<img src="image.avif" alt="And who doesn't enjoy an image every now and then?">
But Markdown is somehow even less invasive:
## Post Title
This is an example block of text written in HTML. We can link things up like this or format the code with **bolding** and *italics*. We can also make lists of items:
- Like this item
- Or this one
- Perhaps a third?
I’ve become a Markdown disciple since I first learned to code. Its clean and relatively simple syntax and wide compatibilities make it no wonder that Markdown is as pervasive today as it is. Having structural semantics akin to HTML while preserving the flow of plain text writing is a good place to be.
However, it could be accused of being a bit too clean at times. If you want to communicate with words and images, you’re golden, but if you want to jazz things up, you’ll find yourself looking further afield for other options.
Gruber set out to create a “format for writing for the web,” and given its ongoing popularity, you have to say he succeeded, yet the web 20 years ago is a long way away from what it is today.
This is the all-important context for what I want to discuss about MDX because MDX is an offshoot of Markdown, only more capable of supporting richer forms of multimedia — and even user interaction. But before we get into that, we should also discuss the concept of web components because that’s the second significant piece that MDX brings to the table.
A Little About Components
The move towards richer multimedia websites and apps has led to a thriving ecosystem of web development frameworks and libraries, including React, Vue, Svelte, and Astro, to name a few. The idea that we can have reusable components that are not only interactive but also respond to each other has driven this growth and continues to push on evolving web platform features like web components.
MDX is like a bridge that connects Markdown with modern web tooling. Simply put, MDX weds Markdown’s simplicity with the creative possibilities of modern web frameworks.
By leaning into the overlaps rather than trying to abstract them away at all costs, we find untold potential for beautiful digital content.
A Case Study
My own experience with MDX took shape in a side project of mine: teeline.online. To cut a long story short, before I was a software engineer, I was a journalist, and part of my training involved learning a type of shorthand called Teeline. What it boils down to is ripping out as many superfluous letters as possible — I like to call this process “disemvowelment” — then using Teeline’s alphabet to write the remaining content. This has allowed people like me to write lots of words very quickly.
During my studies, I found online learning resources lacking, so as my engineering skills improved, I started working on the kind of site I’d have used when I was a student if it was available. Hence, teeline.online.
I built the teeline.online site with the Svelte framework for its components. The site’s centerpiece is a dataset of shorthand characters and combinations with which hundreds of outlines can be rendered, combined, and animated as SVG paths.
Likewise, Teeline’s “disemvowelment” script could be wired into a single component that I could then use as many times as I like.
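As a rough illustration of what that single component wraps, the core logic might look something like this naive sketch. The function name and the exact vowel-stripping rules here are mine, not the site’s; real Teeline drops letters more selectively:

```javascript
// Naive "disemvowelment" sketch: keep each word's first and last letters,
// strip vowels from the interior. Real Teeline rules are more nuanced.
function disemvowel(text) {
  return text
    .split(/\s+/)
    .map((word) => {
      if (word.length <= 2) return word;
      const interior = word.slice(1, -1).replace(/[aeiou]/gi, "");
      return word[0] + interior + word[word.length - 1];
    })
    .join(" ");
}

console.log(disemvowel("shorthand writing")); // "shrthnd wrtng"
```

Wrapped in a component, that one function can render the compressed form of any word it is handed.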
Then, of course, as is only natural when working with components, I could combine them to show the Teeline evolution that converts longhand words into shorthand outlines.
The Markdown, meanwhile, looks as simple as this:
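Here is a sketch of what a syllabus page looks like as an MDsveX file. The frontmatter fields, import path, and component props are illustrative rather than the project’s exact code, though the WordToOutline component matches the one discussed below:

```mdx
---
title: The Teeline Alphabet
order: 2
---

<script>
  import WordToOutline from "$lib/components/WordToOutline.svelte";
</script>

## The Teeline Alphabet

Teeline strips out superfluous letters, then writes what is left
with its own alphabet.

<WordToOutline word="shorthand" animate />
```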
It’s not exactly the sort of complex codebase you might expect for an app. Meanwhile, the files themselves can sit in a nice, tidy directory of their own:
The syllabus is neatly filed away in its own folder. With a bit of metadata sprinkled in, I have everything I need to render an entire section of the site using routing. The setup feels like a fluid medium between worlds. If you want to write with words and pictures, you can. If an idea comes to mind for a component that would better express what you’re going for, you can go make it and drop it in.
In fairness, a “WordToOutline” component like this might not mean much to Teeline newcomers, though with such a clear connection between the Markdown and the rendered pages, it’s not much of a stretch to work out what it is. And, of course, there are always services like Storybook that can be used to organize component libraries as they grow.
The raw form of multimedia content can be pretty unsightly — something that needs to be kept at arm’s length by content management systems. With MDX — and its ilk — the content feels rather friendly and legible.
I think you can start to see some of the benefits of an MDX setup like this. There are two key benefits in particular that I think are worth calling out.
First and foremost, MDX doesn’t disrupt the writing and editorial flow of working with content. When we’re working with traditional code languages, even HTML, the format is cluttered with things like opening and closing tags. And it’s even more convoluted if we need the added complexity of embedding components in the content.
MDX (and Markdown, for that matter) is much less verbose. Content is a first-class citizen that takes up way less space than typical markup, making it clear and legible. And where we need the complex affordance of components, those can be dropped in without disrupting that nice editorial experience.
Another key benefit of using MDX is reusability. If, for example, I want to display the same information as images instead, each image would have to be bespoke. But we all know how inefficient it is to maintain content in raster images — it requires making edits in a completely different application, which is highly inconvenient. With an old-school approach, if I update the design of the site, I’m left having to create dozens of images in the new style.
With MDX (or an equivalent like MDsveX), I only need to make the change once, and it updates everywhere. Having done the leg work of building reusable components, I can weave them throughout the syllabus as I see fit, safe in the knowledge that updates will roll out across the board — and do it without affecting the editorial experience whatsoever.
Consider the time it would take to create images or videos representing the same thing. Using fixed assets like images becomes a form of technical — or perhaps editorial — debt that adds up over time, while a multimedia approach that leans into components proves faster and more flexible than vanilla methods.
I just made the point that working with reusable components in MDX allows Markdown content to become more robust without affecting the content’s legibility as we author it. Using Svelte’s version of MDX, MDsveX, I was able to combine the clean, readable conventions of Markdown with the rich, interactive potential of components.
It’s only right that all my gushing about MDX and its benefits be tempered with a reality check or two. Like anything else, MDX has its limitations, and your mileage with it will vary.
That said, I believe those limitations are likely to show up when MDX is not the best choice for a particular project. There’s a sweet spot that MDX fills: when we need to sprinkle additional web functionality into the content. We get the best of both worlds: minimal markup and modern web features.
But if components aren’t needed, MDX is overkill when all you need is a clean way to write content that ports nicely into HTML to be consumed by whatever app or platform you use to display it on the web.
Without components, MDX is akin to caring for a skinned elbow with a cast; it’s way more than what’s needed in that situation, and the returns you get from Markdown’s legibility will diminish.
Similarly, if your technical needs go beyond components, you may be looking at a more complex architecture than what MDX can support, and you would be best leaning into what works best for content in the particular framework or stack you’re using.
Code doesn’t age as well as words or images do. An MDX-esque approach does sign you up for the maintenance work of dependency updates, refactoring, and — god forbid — framework migrations. I haven’t had to face the last of those realities yet, though I’d say the first two are well worth it. Indeed, they’re good habits to keep.
Writing with MDX continues to be a learning experience for me, but it’s already made a positive impact on my editorial work.
Specifically, I’ve found that MDX improves the quality of my writing. I think more laterally about how to convey ideas.
Is what I’m saying best conveyed in words, an image, or a data visualization? Perhaps an interactive game?
There is way more potential to enhance my words with componentry than I would get with Markdown alone, opening more avenues for what I can say and how I say it.
Of course, those components do not come for free. MDX does sign you up to build those, regardless of whether you have a set of predefined components included in your framework. At the same time, I’d argue that the opportunities MDX opens up for writing greatly outweigh having to build or maintain a few components.
If MDX had been around in the age of Leonardo da Vinci, perhaps he would have reached for it in his journals. I know I’m taking a great leap of assumption here, but the complexity of what he was writing and trying to describe in technical terms with illustrations would have benefited greatly from MDX, for everything from interactive demos of his ideas to a better writing experience overall.
Multimedia Writing
In many respects, MDX’s rich, varied way of approaching content is something that Markdown — and writing for the web in general — encourages already. We don’t think only in terms of words but of links, images, and semantic structure. MDX and its equivalents merely take the lid off the cookie jar so we can enhance our work.
Wouldn’t it be nice if… is a redundant turn of phrase on the web. There may be technical hurdles — or, in my case, skill and knowledge hurdles — but it’s a buzz to think about ways in which your thoughts can best manifest on screen.
At the same time, the simplicity of Markdown is so unintrusive. If someone wants to write content formatted in vanilla Markdown, it’s totally possible to do that without trading up to MDX.
Just having the possibility of bespoke multimedia content is enough to change the creative process. It leaves you using words because you want to, not because you have to.
Why describe the solar system when you can render an explorable one? Why have a picture of a proposed skyscraper when you can display a 3D model? Writing with MDX (or, more accurately, MDsveX) has changed my entire thought process. Potential answers to the question, How do I best get this across?, become more expansive.
Good things happen when worlds collide. New possibilities emerge when seemingly disparate things come together. Many content management systems shield writers — and writing — from code. To my mind, this is like shielding painters from wider color palettes, chefs from exotic ingredients, or sculptors from different types of tools.
Leaning into the overlap between writing and coding gets us closer to one of the web’s great joys: if you can imagine it, you can probably do it.
A couple of years ago, four JavaScript APIs landed at the bottom of awareness in the State of JavaScript survey. I took an interest in those APIs because they have so much potential to be useful but don’t get the credit they deserve. Even after a quick search, I was amazed at how many new web APIs have been added to the web platform that aren’t getting their dues, suffering from a lack of awareness and spotty browser support.
That situation can be a “catch-22”:
An API is interesting but lacks awareness due to incomplete support, and there is no immediate need to support it due to low awareness.
Most of these APIs are designed to power progressive web apps (PWA) and close the gap between web and native apps. Bear in mind that creating a PWA involves more than just adding a manifest file. Sure, it’s a PWA by definition, but it functions like a bookmark on your home screen in practice. In reality, we need several APIs to achieve a fully native app experience on the web. And the four APIs I’d like to shed light on are part of that PWA puzzle that brings to the web what we once thought was only possible in native apps.
You can see all these APIs in action in this demo as we go along.
1. Screen Orientation API
The Screen Orientation API can be used to sniff out the device’s current orientation. Once we know whether a user is browsing in a portrait or landscape orientation, we can use it to enhance the UX for mobile devices by changing the UI accordingly. We can also use it to lock the screen in a certain position, which is useful for displaying videos and other full-screen elements that benefit from a wider viewport.
Using the global screen object, you can access various properties the screen uses to render a page, including the screen.orientation object. It has two properties:

- type: The current screen orientation. It can be "portrait-primary", "portrait-secondary", "landscape-primary", or "landscape-secondary".
- angle: The current screen orientation angle. It can be any number from 0 to 360 degrees, but it’s normally set in multiples of 90 degrees (e.g., 0, 90, 180, or 270).

On mobile devices, if the angle is 0 degrees, the type is most often going to evaluate to "portrait" (vertical), but on desktop devices, it is typically "landscape" (horizontal). This makes the type property precise for knowing a device’s true position.

The screen.orientation object also has two methods:

- .lock(): An async method that takes a type value as an argument to lock the screen.
- .unlock(): Unlocks the screen to its default orientation.

And lastly, screen.orientation fires an "orientationchange" event to let us know when the orientation has changed.
Let’s code a short demo using the Screen Orientation API to know the device’s orientation and lock it in its current position.
This can be our HTML boilerplate:
<main>
<p>
Orientation Type: <span class="orientation-type"></span>
<br />
Orientation Angle: <span class="orientation-angle"></span>
</p>
<button type="button" class="lock-button">Lock Screen</button>
<button type="button" class="unlock-button">Unlock Screen</button>
<button type="button" class="fullscreen-button">Go Full Screen</button>
</main>
On the JavaScript side, we inject the screen orientation type and angle properties into our HTML.
let currentOrientationType = document.querySelector(".orientation-type");
let currentOrientationAngle = document.querySelector(".orientation-angle");
currentOrientationType.textContent = screen.orientation.type;
currentOrientationAngle.textContent = screen.orientation.angle;
Now, we can see the device’s orientation and angle properties. On my laptop, they are "landscape-primary" and 0°.
If we listen to the window’s orientationchange event, we can see how the values are updated each time the screen rotates.
window.addEventListener("orientationchange", () => {
currentOrientationType.textContent = screen.orientation.type;
currentOrientationAngle.textContent = screen.orientation.angle;
});
To lock the screen, we need to first be in full-screen mode, so we will use another extremely useful feature: the Fullscreen API. Nobody wants a webpage to pop into full-screen mode without their consent, so we need transient activation (i.e., a user click on a DOM element) for it to work.
The Fullscreen API has two methods:

- Document.exitFullscreen(): Exits full-screen mode; it is invoked from the global document object.
- Element.requestFullscreen(): Makes the specified element and its descendants go full-screen.

We want the entire page to be full-screen, so we invoke the method from the root element at the document.documentElement object:
const fullscreenButton = document.querySelector(".fullscreen-button");
fullscreenButton.addEventListener("click", async () => {
// If it is already in full-screen, exit to normal view
if (document.fullscreenElement) {
await document.exitFullscreen();
} else {
await document.documentElement.requestFullscreen();
}
});
Next, we can lock the screen in its current orientation:
const lockButton = document.querySelector(".lock-button");
lockButton.addEventListener("click", async () => {
try {
await screen.orientation.lock(screen.orientation.type);
} catch (error) {
console.error(error);
}
});
And do the opposite with the unlock button:
const unlockButton = document.querySelector(".unlock-button");
unlockButton.addEventListener("click", () => {
screen.orientation.unlock();
});
Yes! We can indeed check page orientation via the orientation media feature in a CSS media query. However, media queries compute the current orientation by checking whether the width is bigger than the height (landscape) or smaller (portrait). By contrast, the Screen Orientation API checks the screen rendering the page regardless of the viewport dimensions, making it resistant to inconsistencies that may crop up with page resizing.
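For a concrete comparison, the CSS version keys off the viewport rather than the hardware; the selector here is illustrative:

```css
/* Matches whenever the viewport is taller than it is wide,
   even in a desktop window resized to a narrow shape */
@media (orientation: portrait) {
  .video-player {
    width: 100%;
  }
}
```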
You may have noticed how PWAs like Instagram and X force the screen to be in portrait mode even when the native system orientation is unlocked. It is important to note that this behavior isn’t achieved through the Screen Orientation API, but by setting the orientation property in the manifest.json file to the desired orientation type.
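A minimal sketch of that manifest-level lock might look like this (the other fields are illustrative placeholders):

```json
{
  "name": "My PWA",
  "display": "standalone",
  "orientation": "portrait"
}
```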
2. Device Orientation API
Another API I’d like to poke at is the Device Orientation API. It provides access to a device’s gyroscope sensors to read the device’s orientation in space; something used all the time in mobile apps, mainly games. The API makes this happen with a deviceorientation event that triggers each time the device moves. It has the following properties:

- event.alpha: Orientation along the Z-axis, ranging from 0 to 360 degrees.
- event.beta: Orientation along the X-axis, ranging from -180 to 180 degrees.
- event.gamma: Orientation along the Y-axis, ranging from -90 to 90 degrees.

In this case, we will make a 3D cube with CSS that can be rotated with your device! The full instructions I used to make the initial CSS cube are credited to David DeSandro and can be found in his introduction to 3D transforms.
To rotate the cube, we change its CSS transform properties according to the device orientation data:
const currentAlpha = document.querySelector(".currentAlpha");
const currentBeta = document.querySelector(".currentBeta");
const currentGamma = document.querySelector(".currentGamma");
const cube = document.querySelector(".cube");
window.addEventListener("deviceorientation", (event) => {
currentAlpha.textContent = event.alpha;
currentBeta.textContent = event.beta;
currentGamma.textContent = event.gamma;
cube.style.transform = `rotateX(${event.beta}deg) rotateY(${event.gamma}deg) rotateZ(${event.alpha}deg)`;
});
This is the result:
3. Vibration API
Let’s turn our attention to the Vibration API, which, unsurprisingly, allows access to a device’s vibrating mechanism. This comes in handy when we need to alert users with in-app notifications, like when a process is finished or a message is received. That said, we have to use it sparingly; no one wants their phone blowing up with notifications.
There’s just one method that the Vibration API gives us, and it’s all we need: navigator.vibrate().

vibrate() is available globally from the navigator object and takes an argument for how long a vibration lasts in milliseconds. It can be either a number or an array of numbers representing a pattern of vibrations and pauses.
navigator.vibrate(200); // vibrate 200ms
navigator.vibrate([200, 100, 200]); // vibrate 200ms, wait 100, and vibrate 200ms.
Let’s make a quick demo where the user inputs how many milliseconds they want their device to vibrate and buttons to start and stop the vibration, starting with the markup:
<main>
<form>
<label for="milliseconds-input">Milliseconds:</label>
<input type="number" id="milliseconds-input" value="0" />
</form>
<button class="vibrate-button">Vibrate</button>
<button class="stop-vibrate-button">Stop</button>
</main>
We’ll add an event listener for a click and invoke the vibrate() method:
const vibrateButton = document.querySelector(".vibrate-button");
const millisecondsInput = document.querySelector("#milliseconds-input");
vibrateButton.addEventListener("click", () => {
navigator.vibrate(millisecondsInput.value);
});
To stop vibrating, we override the current vibration with a zero-millisecond vibration.
const stopVibrateButton = document.querySelector(".stop-vibrate-button");
stopVibrateButton.addEventListener("click", () => {
navigator.vibrate(0);
});
4. Contact Picker API
It used to be that only native apps could connect to a device’s “contacts”. But now we have the fourth and final API I want to look at: the Contact Picker API.
The API grants web apps access to the device’s contact list. Specifically, we get the contacts.select() async method available through the navigator object, which takes the following two arguments:

- properties: An array containing the information we want to fetch from a contact card, e.g., "name", "address", "email", "tel", and "icon".
- options: An object that can only contain the multiple boolean property to define whether or not the user can select one or multiple contacts at a time.

I’m afraid that browser support is next to zilch on this one, limited to Chrome Android, Samsung Internet, and Android’s native web browser at the time I’m writing this.
We will make another demo to select and display the user’s contacts on the page. Again, starting with the HTML:
<main>
<button class="get-contacts">Get Contacts</button>
<p>Contacts:</p>
<ul class="contact-list">
<!-- We’ll inject a list of contacts -->
</ul>
</main>
Then, in JavaScript, we first construct our elements from the DOM and choose which properties we want to pick from the contacts.
const getContactsButton = document.querySelector(".get-contacts");
const contactList = document.querySelector(".contact-list");
const props = ["name", "tel", "icon"];
const options = {multiple: true};
Now, we asynchronously pick the contacts when the user clicks the getContactsButton.
const getContacts = async () => {
try {
const contacts = await navigator.contacts.select(props, options);
} catch (error) {
console.error(error);
}
};
getContactsButton.addEventListener("click", getContacts);
Using DOM manipulation, we can then append a list item for each contact, along with its icon, to the contactList element.
const appendContacts = (contacts) => {
contacts.forEach(({name, tel, icon}) => {
const contactElement = document.createElement("li");
contactElement.innerText = `${name}: ${tel}`;
contactList.appendChild(contactElement);
});
};
const getContacts = async () => {
try {
const contacts = await navigator.contacts.select(props, options);
appendContacts(contacts);
} catch (error) {
console.error(error);
}
};
getContactsButton.addEventListener("click", getContacts);
Appending an image is a little tricky since we will need to convert it into a URL and append it for each item in the list.
const getIcon = (icon) => {
if (icon.length > 0) {
const imageUrl = URL.createObjectURL(icon[0]);
const imageElement = document.createElement("img");
imageElement.src = imageUrl;
return imageElement;
}
};
const appendContacts = (contacts) => {
contacts.forEach(({name, tel, icon}) => {
const contactElement = document.createElement("li");
contactElement.innerText = `${name}: ${tel}`;
contactList.appendChild(contactElement);
// getIcon() returns undefined when a contact has no icon
const imageElement = getIcon(icon);
if (imageElement) {
contactElement.appendChild(imageElement);
}
});
};
const getContacts = async () => {
try {
const contacts = await navigator.contacts.select(props, options);
appendContacts(contacts);
} catch (error) {
console.error(error);
}
};
getContactsButton.addEventListener("click", getContacts);
And here’s the outcome:
Note: The Contact Picker API will only work if the context is secure, i.e., the page is served over https:// or wss:// URLs.
Conclusion
There we go: four web APIs that I believe would empower us to build more useful and robust PWAs but have slipped under the radar for many of us. This is, of course, due to inconsistent browser support, so I hope this article can bring awareness to these APIs, giving them a better chance of being supported in future browser updates.
Aren’t they interesting? We saw how much control we have over the orientation of a device and its screen, the level of access we get to a device’s hardware features (i.e., vibration), and the information from other apps that we can use in our own UI.
But as I said much earlier, there’s a sort of infinite loop where a lack of awareness begets a lack of browser support. So, while the four APIs we covered are super interesting, your mileage will inevitably vary when it comes to using them in a production environment. Please tread cautiously and refer to Caniuse for the latest support information, or check for your own devices using WebAPI Check.
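Before reaching for any of these in production, a small feature-detection pass along these lines can tell you what the current browser actually offers. The helper name is mine; the property names follow the APIs discussed above:

```javascript
// Checks whether each of the four APIs discussed above exists,
// without assuming the code is running in a browser at all.
function detectSupport(globalObj = globalThis) {
  const nav = globalObj.navigator ?? {};
  return {
    screenOrientation: "orientation" in (globalObj.screen ?? {}),
    deviceOrientation: "DeviceOrientationEvent" in globalObj,
    vibration: "vibrate" in nav,
    contactPicker: "contacts" in nav,
  };
}

console.log(detectSupport());
```

On an unsupported platform each flag simply comes back false, so the UI can fall back gracefully instead of throwing.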
Welcome to our roundup of the best new fonts we’ve found online in the last month. This month, there are notably fewer revivals and serifs and a lot more chunky sans serifs than usual. Enjoy!
If you ask anyone working in the design or printing sector about the use of recycled paper, you will most likely receive a response along these lines: printing on recycled paper is beneficial to the environment. While this is undeniably true, the discourse rarely moves beyond that point to discuss the myriad ways in which using recycled paper in commercial printing benefits everyone concerned.
If you are interested in reducing your overall carbon footprint or aligning yourself more closely with the interests and values of your consumers, selecting recycled business cards for commercial print projects such as direct mail, catalogs, brochures, sales and marketing collateral, and other similar projects offers a range of benefits.
Let’s take a quick look at five benefits of choosing recycled paper for commercial printing to help you better understand how and why it is important.
Did you know that the Environmental Protection Agency (EPA) identifies landfills as the single largest source of methane emissions into the atmosphere and that the breakdown of paper is the major source of methane produced by landfills?
Because methane can trap more than twenty times the amount of heat that carbon dioxide does, it is one of the primary contributors to climate change. Therefore, it is essential to take measures to reduce methane emissions to ensure the health of the ecosystem.
However, selecting a recycled business card helps slow the rate at which landfills are filling up, which in turn lessens the quantity of damaging greenhouse gases that are generated by them. About eighty percent of paper thrown away ends up in a landfill without ever being recycled.
In addition, this highlights the significance of recycling used paper and paper goods rather than simply disposing of them in the garbage can closest to you.
When recycled paper is compared to virgin paper, it typically has a higher opacity, which opens up some fascinating opportunities for cost savings and creative expression. In other words, the opacity of a business card is its ability to block the passage of light from one side to the other, and it is essential to consider this ability when printing projects such as books or pamphlets.

Since recycled business cards have a higher opacity than virgin paper, it is possible to print on a lighter paper stock without compromising the quality of the print, which is an advantage for both cost and creative flexibility. This makes recycled fibers a more versatile option for designers and printers.
Although you might assume that the recycling process requires more resources than conventional card production due to operations such as de-inking, shredding, and pulping, the manufacture of recycled paper actually uses around 26% less energy than the creation of virgin fiber.
Selecting recycled business cards results in about forty percent less wastewater being produced compared to virgin paper. This helps alleviate the strain placed on water treatment facilities and decreases the environmental effects caused by the transportation and disposal of wastewater.
However, the trade-off is that recycled business cards can be a bit more expensive than conventional cards. That’s why partnering with a supplier that only harvests from sustainably managed forests—one that plants new trees to replace the ones that are harvested for card production—can be a happy medium between controlling costs and making responsible choices for the long-term health of our environment.
According to a recent article published in Forbes, an increasing number of consumer demographics are placing sustainability and environmental responsibility at the top of their priority list when it comes to purchasing items or connecting with brands.
Using recycled business cards for print projects can be a powerful differentiator and a strong selling point, and partnering with a paper provider that recognizes and values this shift in consumer sentiment is essential to align with what consumers want. As consumers grow more environmentally conscious, recycled paper becomes an increasingly valuable choice for print projects.
Using recycled paper in commercial printing helps minimize the number of trees that are cut down, which in turn contributes to the preservation of our forests. That much is obvious.
However, there is more to it than that. A healthy forest system that is not excessively harvested for paper production means less soil erosion, preserved biodiversity, maintained wildlife habitats, and fewer greenhouse gases released during harvesting.
Additionally, recycled business cards made from post-consumer material can be recycled again numerous times, which increases the environmental value of choosing recycled fiber and reduces the impact of forest degradation.
In this regard, purchasing business cards produced from trees that originate from a managed forest can also reduce forest degradation and preserve a vibrant and healthy environment. Working with a card provider that sources its products from managed forests not only helps to create employment opportunities but also promotes the economies of the communities in which managed forests are located.
Featured image by rivage on Unsplash
The post How Can Recycled Business Cards Boost Your Brand In 2024? appeared first on noupe.
AI has influenced all industries, and the creative realm is no exception, including writing. While AI tools promise efficiency and a break from the drudgery of repetitive tasks, they also bring up a big question: Are we sacrificing creativity for convenience?
It’s essential to explore how AI tools might reshape the writing craft, and not always for the better. There’s something special about a piece crafted by a human essay writer that AI can’t replicate. For students seeking help with their essays, a human writer’s personal touch, nuanced understanding, and creative flair are irreplaceable. AI might be able to generate a complete essay on a given topic rapidly, but can it engage a reader’s emotions or offer original ideas as effectively? Let’s dive into the heart of this discussion.
You’ve probably noticed how some content nowadays feels a bit… off. That’s because it’s crafted by AI, which only repeats what’s already out there. The reliability of information is another concern: a 2023 study revealed that heavy reliance on AI for writing tasks reduces the accuracy of the results by 25.1%. How does AI write? AI tools analyze huge chunks of existing text to produce content. While this can make writing faster, it also means the content can end up looking and sounding the same. Where’s the fun in reading something that feels like deja vu?
When writers lean too much on AI, they risk dulling their ability to think originally and expressively. For students, this is particularly risky, and it is one of the main reasons AI can be bad for education. Schools are supposed to be playgrounds for the mind, places where you can experiment with ideas and find your voice. If AI does too much of the work, students might miss out on developing these crucial skills.
AI tools are gaining popularity fast. In early 2023, ChatGPT set a record by reaching over 100 million monthly users in just two months after launch. While it has some undeniable advantages, this surge in the popularity of AI tools has caused certain challenges as well.
AI is all about algorithms, which means it loves patterns. But great writing isn’t just about sticking to patterns—it’s about breaking them sometimes. AI’s tendency to standardize could mean your next essay sounds like everyone else’s. Remember, the most memorable pieces of writing are those that reflect a unique perspective, something that AI just can’t mimic.
And it’s not just your voice that’s at risk. As more people use AI tools, there’s a tendency for all writing to start sounding similar. Think about it: if everyone uses the same tool, doesn’t it make sense that everyone’s output might start to look the same? This doesn’t just stifle individuality; it could flatten the entire landscape of literary styles, turning vibrant variety into boring uniformity.
Here’s a not-so-fun fact: when AI takes over the brainy bits of writing, you might disengage from the process. How does AI affect writers? It’s easy to become passive, watching as the AI assembles your thoughts. This is especially bad news for students, who need to flex their mental muscles by tackling complex writing tasks. Critical thinking is like a muscle—if you don’t use it, you lose it.
When AI scripts the show, the range of voices in literature might begin to narrow. Diversity in writing isn’t just about different themes or genres; it’s about different ways of seeing the world. If AI keeps us locked into a certain way of writing, we might start missing out on those fresh, exciting perspectives that come from real human experiences.
Looking ahead, the impact of AI on professional writing careers looks equally concerning. Will future writers need to conform to AI standards to succeed? If so, we might see a drop in the quality and variety of professional writing as the push for AI efficiency overtakes the need for human creativity and insight.
Let’s take a look at a real example of why AI is bad for education. In some classrooms, students use AI to polish their essays. At first glance, these essays look perfect. However, over time, teachers notice something troubling: the students’ work begins to lack depth and originality. They’re not learning to craft arguments or express unique ideas. Instead, they’re just learning to edit what AI produces. It’s a bit like painting by numbers: the end result might look good, but the process isn’t creative.
For learners, the stakes are high. Why is AI bad for students? Learning to write creatively is not just an academic exercise; it’s a way to learn how to think, argue, and persuade. If AI starts doing too much of this work, students could end up with a cookie-cutter education that fails to inspire or challenge them. What’s the point of learning to write if you’re not learning to think?
To keep AI in its rightful place as a tool rather than a replacement, writers need to focus on developing their own skills alongside the technology. Use AI to handle the repetitive parts of writing, like checking grammar, but make sure the ideas and the voice are unmistakably yours.
Schools and colleges have a crucial role to play in combating the negative effects of AI in education. They should encourage curricula that value creativity and individuality over the ability to use tools. Besides, online essay writer services can also help, offering students nuanced writing support. It’s about teaching students how to use technology wisely, enhancing their skills without overshadowing them.
Lastly, never underestimate the importance of the human touch. Writers need to stay in the driver’s seat, using AI as a navigator rather than letting it take the wheel. This means always being ready to question, modify, and ultimately oversee the content that AI helps produce, ensuring that each piece of writing reflects true human thought and creativity.
As we look toward the future, it’s clear that AI will change many aspects of our lives, but here’s the good news: AI will not replace writers. The essence of writing—conveying emotion, capturing the human experience, and sparking imagination—is inherently human. No matter how advanced AI gets, the depth, emotion, and personal touch you bring to your writing are yours alone. So use AI, but make every piece you write your own.
Featured Image by Parker Byrd on Unsplash
The post Creativity Crisis: Why Is AI Bad for Original Thinking in Writing? appeared first on noupe.