Creating a company website is crucial for building a strong online presence and attracting potential customers. It’s usually the first point of contact for most people and it’s essential to make a good first impression.
A well-designed website showcases your products or services and helps build trust and credibility with your target audience.
However, creating a website that looks professional and visually appealing can be a challenge, especially for small businesses and start-ups working with a limited budget.
Whether you’re a small business owner, a start-up, or a freelancer, this guide will help you create a website that looks great and effectively represents your brand online.
Choosing a Platform
When it comes to creating a website, there are a variety of platforms to choose from, and the right choice depends on your needs and budget.
Some popular platforms include WordPress, Squarespace, and Wix. Each platform has its set of pros and cons, with some being better suited for creating a website on a tight budget than others.
WordPress, for example, is a free, open-source platform that is a good pick for building a website on a tight budget. It offers thousands of free or paid templates and plugins that can be customized to meet your needs.
Squarespace, on the other hand, is a paid platform that offers a wide range of design templates but can be more expensive. It's a good fit for less tech-savvy users who want an easy-to-use website builder.
Wix is a user-friendly platform that offers a wide range of design options but is also more expensive. It’s a great option for those who want more design freedom, templates, and design elements.
Design
Design is an important element in creating an attractive website. An appealing website will attract visitors and keep them engaged. However, creating a visually appealing website on a tight budget can be challenging.
One of the ways to create a visually appealing website on a tight budget is to use a pre-made design template. Many website-building platforms offer a wide range of design templates that can be customized to meet your needs.
Another way to create an attractive website on a tight budget is to use free images and graphics. Many sites offer free images and graphics that can be used on a website.
Marketing
Marketing a website is crucial for driving traffic and increasing visibility. However, traditional marketing methods such as print and television ads can be costly. Fortunately, there are many low-cost ways to market a website.
Social media is a powerful tool that can be used to promote a website, increase brand awareness and connect with potential customers. Creating profiles on popular social media platforms such as Facebook, Instagram, and Twitter is free and can be used to promote a website. Search engine optimization (SEO) is also a cost-effective way to drive traffic to a website.
Optimizing a website for search engines can help it rank higher in search engine results, making it more visible to potential customers. There are many free tools and resources available that can help with SEO.
Overall, marketing a website on a tight budget is possible with the right approach. By using social media and search engine optimization, businesses can increase visibility and drive traffic to their website without breaking the bank.
Content
High-quality relevant content is essential for creating a great-looking website. However, creating high-quality content on a tight budget can be a challenge.
One way is to source content from other sites; many sites allow you to reuse their content on your website, as long as you credit the source.
Use Free Icons
Icons are a great way to add visual interest to a website and make it more user-friendly. However, purchasing icons can be costly.
One way to use icons on a website without breaking the bank is to use free icons. Many sites offer free icons, including popular options such as Icons8, Flaticon, and Freepik.
When using free icons, make sure to check the license and follow the terms of use. For example, a heart icon can be used to indicate a favorite or like button.
Optimization
Optimizing a website is crucial for ensuring that it loads quickly and is easily found by search engines. However, optimizing a website on a tight budget can be a challenge.
One way to optimize a website without breaking the bank would be to use free tools, such as Google Analytics and Google Search Console. These tools can provide valuable insights into how visitors are interacting with your website and can help you identify areas that need improvement.
Another way is to use compression and caching. Compressing images and using caching can help reduce the size of your website and make it load faster.
Additionally, using a content delivery network (CDN) can also help to improve the load time of your website. Overall, there are many ways to optimize a website, and it’s important to find the best approach for your specific needs and budget.
Maintenance
Maintaining a website is crucial for ensuring that it runs smoothly and continues to meet the needs of the business. However, maintaining a website can be costly, especially if it requires regular updates, security measures, and bug fixes.
There are many ways to maintain a website within a budget. One way is to use a Content Management System (CMS) that makes it easy to update and maintain a website without needing technical skills.
Another way is to use a website hosting service that provides regular backups, security measures, and technical support. Additionally, businesses can also opt for a website maintenance plan from a third-party service provider.
Keeping a website up-to-date and secure is also important for maintaining its functionality and protecting against security threats. This can be done by regularly updating the website’s software and plugins, and implementing security measures such as SSL certificates and firewalls.
My previous post was a broad overview of SvelteKit where we saw what a great tool it is for web development. This post will fork off what we did there and dive into every developer’s favorite topic: caching. So, be sure to give my last post a read if you haven’t already. The code for this post is available on GitHub, as well as a live demo.
This post is all about data handling. We’ll add some rudimentary search functionality that will modify the page’s query string (using built-in SvelteKit features), and re-trigger the page’s loader. But, rather than just re-query our (imaginary) database, we’ll add some caching so re-searching prior searches (or using the back button) will show previously retrieved data, quickly, from cache. We’ll look at how to control the length of time the cached data stays valid and, more importantly, how to manually invalidate all cached values. As icing on the cake, we’ll look at how we can manually update the data on the current screen, client-side, after a mutation, while still purging the cache.
This will be a longer, more difficult post than most of what I usually write since we’re covering harder topics. This post will essentially show you how to implement common features of popular data utilities like react-query; but instead of pulling in an external library, we’ll only be using the web platform and SvelteKit features.
Unfortunately, the web platform’s features are a bit lower level, so we’ll be doing a bit more work than you might be used to. The upside is we won’t need any external libraries, which will help keep bundle sizes nice and small. Please don’t use the approaches I’m going to show you unless you have a good reason to. Caching is easy to get wrong, and as you’ll see, there’s a bit of complexity that’ll result in your application code. Hopefully your data store is fast, and your UI is fine allowing SvelteKit to just always request the data it needs for any given page. If it is, leave it alone. Enjoy the simplicity. But this post will show you some tricks for when that stops being the case.
Speaking of react-query, it was just released for Svelte! So if you find yourself leaning on manual caching techniques a lot, be sure to check that project out, and see if it might help.
Setting up
Before we start, let’s make a few small changes to the code we had before. This will give us an excuse to see some other SvelteKit features and, more importantly, set us up for success.
First, let’s move our data loading from our loader in +page.server.js to an API route. We’ll create a +server.js file in routes/api/todos, and then add a GET function. This means we’ll now be able to fetch (using the default GET verb) to the /api/todos path. We’ll add the same data loading code as before.
import { json } from "@sveltejs/kit";
import { getTodos } from "$lib/data/todoData";

export async function GET({ url, setHeaders, request }) {
  const search = url.searchParams.get("search") || "";
  const todos = await getTodos(search);
  return json(todos);
}
Next, let’s take the page loader we had, and simply rename the file from +page.server.js to +page.js (or .ts if you’ve scaffolded your project to use TypeScript). This changes our loader to be a “universal” loader rather than a server loader. The SvelteKit docs explain the difference, but a universal loader runs on both the server and also the client. One advantage for us is that the fetch call into our new endpoint will run right from our browser (after the initial load), using the browser’s native fetch function. We’ll add standard HTTP caching in a bit, but for now, all we’ll do is call the endpoint.
Yep, forms can submit directly to our normal page loaders. Now we can type a search term in the search box, hit Enter, and a "search" term will be appended to the URL's query string, which will re-run our loader and search our to-do items.
Let’s also increase the delay in our todoData.js file in /lib/data. This will make it easy to see when data are and are not cached as we work through this post.
We’ll look at manual invalidation shortly, but all this function says is to cache these API calls for 60 seconds. Set this to whatever you want, and depending on your use case, stale-while-revalidate might also be worth looking into.
And just like that, our queries are caching.
Note: make sure you uncheck the checkbox that disables caching in your browser's DevTools.
Remember, if your initial navigation on the app is the list page, those search results will be cached internally to SvelteKit, so don’t expect to see anything in DevTools when returning to that search.
What is cached, and where
Our very first, server-rendered load of our app (assuming we start at the /list page) will be fetched on the server. SvelteKit will serialize and send this data down to our client. What's more, it will observe the Cache-Control header on the response, and will know to use this cached data for that endpoint call within the cache window (which we set to 60 seconds in our example).
After that initial load, when you start searching on the page, you should see network requests from your browser to the /api/todos endpoint. As you search for things you've already searched for (within the last 60 seconds), the responses should load immediately since they're cached.
What’s especially cool with this approach is that, since this is caching via the browser’s native caching, these calls could (depending on how you manage the cache busting we’ll be looking at) continue to cache even if you reload the page (unlike the initial server-side load, which always calls the endpoint fresh, even if it did it within the last 60 seconds).
Obviously data can change anytime, so we need a way to purge this cache manually, which we’ll look at next.
Cache invalidation
Right now, data will be cached for 60 seconds. No matter what, after a minute, fresh data will be pulled from our datastore. You might want a shorter or longer time period, but what happens if you mutate some data and want to clear your cache immediately so your next query will be up to date? We’ll solve this by adding a query-busting value to the URL we send to our new /todos endpoint.
Let’s store this cache busting value in a cookie. That value can be set on the server but still read on the client. Let’s look at some sample code.
We can create a +layout.server.js file at the very root of our routes folder. This will run on application startup, and is a perfect place to set an initial cookie value.
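A sketch of what that root layout load might look like — the cookie name "todos-cache" and the returned property name are assumptions based on the surrounding description, and in the real `+layout.server.js` the function would be exported:

```javascript
// Hypothetical +layout.server.js sketch.
// In the real file this would be: export function load(...)
function load({ isDataRequest, cookies }) {
  const initialRequest = !isDataRequest;

  // Only seed the cache-busting value on the initial request; layout
  // re-runs keep whatever value is already in the cookie.
  const cacheValue = initialRequest ? +new Date() : cookies.get("todos-cache");

  if (initialRequest) {
    cookies.set("todos-cache", cacheValue, {
      path: "/",
      httpOnly: false, // client code needs to read this from document.cookie
    });
  }

  return { todosCacheBust: cacheValue };
}
```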
You may have noticed the isDataRequest value. Remember, layouts will re-run anytime client code calls invalidate(), or anytime we run a server action (assuming we don’t turn off default behavior). isDataRequest indicates those re-runs, and so we only set the cookie if that’s false; otherwise, we send along what’s already there.
The httpOnly: false flag is also significant. This allows our client code to read these cookie values in document.cookie. This would normally be a security concern, but in our case these are meaningless numbers that allow us to cache or cache bust.
Reading cache values
Our universal loader is what calls our /todos endpoint. This runs on the server or the client, and we need to read that cache value we just set up no matter where we are. It’s relatively easy if we’re on the server: we can call await parent() to get the data from parent layouts. But on the client, we’ll need to use some gross code to parse document.cookie:
export function getCookieLookup() {
  if (typeof document !== "object") {
    return {};
  }
  return document.cookie.split("; ").reduce((lookup, v) => {
    const parts = v.split("=");
    lookup[parts[0]] = parts[1];
    return lookup;
  }, {});
}

const getCurrentCookieValue = name => {
  const cookies = getCookieLookup();
  return cookies[name] ?? "";
};
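To see the parsing logic in isolation, here's a hypothetical standalone variant that operates on a raw cookie string (the real function reads `document.cookie`; the cookie names below are made up):

```javascript
// Standalone sketch of the same reducer, for illustration.
function parseCookieString(cookieString) {
  if (!cookieString) return {};
  return cookieString.split("; ").reduce((lookup, pair) => {
    const parts = pair.split("=");
    lookup[parts[0]] = parts[1];
    return lookup;
  }, {});
}
```

For example, `parseCookieString("todos-cache=123; theme=dark")` yields an object with a `todos-cache` key of `"123"` and a `theme` key of `"dark"`.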
Fortunately, we only need it once.
Sending out the cache value
But now we need to send this value to our /todos endpoint.
getCurrentCookieValue('todos-cache') checks the type of document to see whether we're in the browser; on the server, document doesn't exist, so it returns nothing, at which point we fall back to the value from our layout.
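One way to attach that value to the endpoint call is to fold it into the query string; a sketch, where the "ts" parameter name is an assumption:

```javascript
// Hypothetical URL builder: the cache-busting value rides along in the
// query string, so a new value produces a new (uncached) URL.
function buildTodosUrl(search, cacheValue) {
  const params = new URLSearchParams({ search, ts: String(cacheValue) });
  return `/api/todos?${params.toString()}`;
}
```

Since the cached entry is keyed by the full URL, bumping the value effectively invalidates every previously cached search.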
Busting the cache
But how do we actually update that cache busting value when we need to? Since it’s stored in a cookie, we can call it like this from any server action:
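A sketch of that cookie write — the cookie name and options mirror the earlier setup, and `cookies` is SvelteKit's server-side cookie helper (here passed in so the snippet stands alone):

```javascript
// Hypothetical server-action helper: bumping the cookie to "now" means
// every previously cached /api/todos URL no longer matches, so the next
// query bypasses the cache.
function bustTodosCache(cookies) {
  const next = +new Date();
  cookies.set("todos-cache", next, { path: "/", httpOnly: false });
  return next;
}
```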
It’s all downhill from here; we’ve done the hard work. We’ve covered the various web platform primitives we need, as well as where they go. Now let’s have some fun and write application code to tie it all together.
For reasons that’ll become clear in a bit, let’s start by adding editing functionality to our /list page. We’ll add this second table row for each to-do:
And, of course, we’ll need to add a form action for our /list page. Actions can only go in .server pages, so we’ll add a +page.server.js in our /list folder. (Yes, a +page.server.js file can co-exist next to a +page.js file.)
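A sketch of what that form action might look like — `updateTodo`, the action name, and the delay are stand-ins for the real data layer, and in the real `+page.server.js` the `actions` object would be exported:

```javascript
// Hypothetical +page.server.js action sketch.
// In the real file this would be: export const actions = { ... }
const actions = {
  async editTodo({ request, cookies }) {
    const formData = await request.formData();
    const id = formData.get("id");
    const newTitle = formData.get("title");

    // Simulated latency, standing in for the forced delay in the text.
    await new Promise((res) => setTimeout(res, 10));
    await updateTodo(id, newTitle);

    // New cookie value = cache busted: previously cached /api/todos URLs
    // no longer match.
    cookies.set("todos-cache", +new Date(), { path: "/", httpOnly: false });
  },
};

// Stub persistence layer, for illustration only:
async function updateTodo(id, title) {
  return { id, title };
}
```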
We’re grabbing the form data, forcing a delay, updating our todo, and then, most importantly, clearing our cache bust cookie.
Let’s give this a shot. Reload your page, then edit one of the to-do items. You should see the table value update after a moment. If you look at the Network tab in DevTools, you’ll see a fetch to the /todos endpoint, which returns your new data. Simple, and works by default.
Immediate updates
What if we want to avoid that fetch that happens after we update our to-do item, and instead, update the modified item right on the screen?
This isn’t just a matter of performance. If you search for “post” and then remove the word “post” from any of the to-do items in the list, they’ll vanish from the list after the edit since they’re no longer in that page’s search results. You could make the UX better with some tasteful animation for the exiting to-do, but let’s say we wanted to not re-run that page’s load function but still clear the cache and update the modified to-do so the user can see the edit. SvelteKit makes that possible — let’s see how!
First, let’s make one little change to our loader. Instead of returning our to-do items, let’s return a writable store containing our to-dos.
return {
  todos: writable(todos),
};
Before, we were accessing our to-dos on the data prop, which we do not own and cannot update. But Svelte lets us return our data in a store of our own (assuming we’re using a universal loader, which we are). We just need to make one more tweak to our /list page.
Instead of this:
{#each todos as t}
…we need to do this, since todos is itself now a store:
{#each $todos as t}
Now our data loads as before. But since todos is a writable store, we can update it.
First, let’s provide a function to our use:enhance attribute:
This will run before a submit. Let’s write that next:
function executeSave({ data }) {
  const id = data.get("id");
  const title = data.get("title");

  return async () => {
    todos.update(list =>
      list.map(todo => {
        if (todo.id == id) {
          return Object.assign({}, todo, { title });
        } else {
          return todo;
        }
      })
    );
  };
}
This function receives a data object containing our form data. We return an async function that will run after our edit is done. The docs explain all of this, but by doing this, we shut off SvelteKit’s default form handling that would have re-run our loader. This is exactly what we want! (We could easily get that default behavior back, as the docs explain.)
We now call update on our todos array since it’s a store. And that’s that. After editing a to-do item, our changes show up immediately and our cache is cleared (as before, since we set a new cookie value in our editTodo form action). So, if we search and then navigate back to this page, we’ll get fresh data from our loader, which will correctly exclude any to-do items that were updated.
We can set cookies in any server load function (or server action), not just the root layout. So, if some data are only used underneath a single layout, or even a single page, you could set that cookie value there. Moreover, if you’re not doing the trick I just showed of manually updating on-screen data, and instead want your loader to re-run after a mutation, then you could always set a new cookie value right in that load function without any check against isDataRequest. It’ll set initially, and then anytime you run a server action, that page layout will automatically invalidate and re-call your loader, re-setting the cache bust string before your universal loader is called.
Writing a reload function
Let’s wrap up by building one last feature: a reload button. We’ll give users a button that clears the cache and then reloads the current query.
In a real project you probably wouldn’t copy/paste the same code to set the same cookie in the same way in multiple places, but for this post we’ll optimize for simplicity and readability.
We could call this done and move on, but let’s improve this solution a bit. Specifically, let’s provide feedback on the page to tell the user the reload is happening. Also, by default, SvelteKit actions invalidate everything. Every layout, page, etc. in the current page’s hierarchy would reload. There might be some data that’s loaded once in the root layout that we don’t need to invalidate or re-load.
So, let’s focus things a bit, and only reload our to-dos when we call this function.
We’re setting a new reloading variable to true at the start of this action. And then, in order to override the default behavior of invalidating everything, we return an async function. This function will run when our server action is finished (which just sets a new cookie).
Without this async function returned, SvelteKit would invalidate everything. Since we’re providing this function, it will invalidate nothing, so it’s up to us to tell it what to reload. We do this with the invalidate function. We call it with a value of reload:todos. This function returns a promise, which resolves when the invalidation is complete, at which point we set reloading back to false.
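The flow just described can be sketched as follows — `invalidate` comes from `$app/navigation` in a real app, and the helper shape here is an assumption so the logic is self-contained and testable:

```javascript
// Hypothetical sketch of the reload handler: set a "reloading" flag, let the
// server action run (it just sets a new cookie), then invalidate only the
// loaders that depend on "reload:todos".
function makeReloadHandler(invalidate, setReloading) {
  return function executeReload() {
    setReloading(true);
    // Returning this function opts out of SvelteKit's default
    // invalidate-everything behavior after the action completes.
    return async () => {
      await invalidate("reload:todos");
      setReloading(false);
    };
  };
}
```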
Lastly, we need to sync our loader up with this new reload:todos invalidation value. We do that in our loader with the depends function:
export async function load({ fetch, url, setHeaders, depends }) {
  depends("reload:todos");
  // the rest is the same
And that’s that. depends and invalidate are incredibly useful functions. What’s cool is that invalidate doesn’t just take arbitrary values we provide like we did. We can also provide a URL, which SvelteKit will track, and invalidate any loaders that depend on that URL. To that end, if you’re wondering whether we could skip the call to depends and invalidate our /api/todos endpoint altogether, you can, but you have to provide the exact URL, including the search term (and our cache value). So, you could either put together the URL for the current search, or match on the path name, like this:
invalidate(url => url.pathname == "/api/todos");
Personally, I find the solution that uses depends more explicit and simple. But see the docs for more info, of course, and decide for yourself.
If you’d like to see the reload button in action, the code for it is in this branch of the repo.
Parting thoughts
This was a long post, but hopefully not overwhelming. We dove into various ways we can cache data when using SvelteKit. Much of this was just a matter of using web platform primitives to add the correct cache and cookie values, knowledge of which will serve you in web development in general, beyond just SvelteKit.
Moreover, this is something you absolutely do not need all the time. Arguably, you should only reach for these sorts of advanced features when you actually need them. If your datastore is serving up data quickly and efficiently, and you’re not dealing with any kind of scaling problems, there’s no sense in bloating your application code with the needless complexity of the things we talked about here.
As always, write clear, clean, simple code, and optimize when necessary. The purpose of this post was to provide you those optimization tools for when you truly need them. I hope you enjoyed it!
Since the Next.js 13 release, there’s been some debate about how stable the shiny new features packed into the announcement are. In “What’s New in Next.js 13?” we covered the release and established that, though it carries some interesting experiments, Next.js 13 is definitely stable. And since then, most of us have seen a very clear landscape when it comes to the new next/image and next/link components, and even the (still beta) @next/font; these are all good to go, instant profit. Turbopack, as clearly stated in the announcement, is still alpha: aimed strictly at development builds and still heavily under development. Whether you can or can’t use it in your daily routine depends on your stack, as there are integrations and optimizations still somewhere on the way. This article’s scope is strictly the main character of the announcement: the new App Directory architecture (AppDir, for short).
The App directory is the one that keeps raising questions, because it is partnered with an important evolution in the React ecosystem (React Server Components) and with edge runtimes. It clearly is the shape of the future of our Next.js apps. It is experimental, though, and its roadmap is not something we can expect to be finished in the next few weeks. So, should you use it in production now? What advantages can you get out of it, and what pitfalls may you find yourself climbing out of? As always, the answer in software development is the same: it depends.
What Is The App Directory Anyway?
It is the new strategy for handling routes and rendering views in Next.js. It is made possible by a couple of different features tied together, and it is built to make the most out of React concurrent features (yes, we are talking about React Suspense). It brings, though, a big paradigm shift in how you think about components and pages in a Next.js app. This new way of building your app has a lot of very welcome improvements for your architecture. Here’s a short, non-exhaustive list:
Partial Routing.
Route Groups.
Parallel Routes.
Intercepting Routes.
Server Components vs. Client Components.
Suspense Boundaries.
And much more; check the features overview in the new documentation.
A Quick Comparison
When it comes to the current routing and rendering architecture (the Pages directory), developers were required to think of data fetching per route:
getServerSideProps: Server-Side Rendered.
getStaticProps: Static Site Generated.
getStaticPaths + getStaticProps: Static Site Generated (for dynamic routes).
Historically, it hadn’t been possible to choose a rendering strategy on a per-page basis: most apps went either full Server-Side Rendering or full Static Site Generation. Next.js created enough abstractions to make thinking of routes individually the standard within its architecture.
Once the app reaches the browser, hydration kicks in, and it’s possible to have routes collectively sharing data by wrapping our _app component in a React Context Provider. This gave us tools to hoist data to the top of our rendering tree and cascade it down toward the leaves of our app.
import { type AppProps } from 'next/app';

export default function MyApp({ Component, pageProps }: AppProps) {
  return (
    <SomeProvider>
      <Component {...pageProps} />
    </SomeProvider>
  );
}
The ability to render and organize required data per route made this approach a convenient tool for when data absolutely needed to be available globally in the app. And while this strategy does let data spread throughout the app, wrapping everything in a Context Provider ties hydration to the root of your app. It is no longer possible to render any branch of that tree (any route within that Provider context) on the server.
Here enters the Layout Pattern. By creating wrappers around pages, we could opt in or out of rendering strategies per route again, instead of making one app-wide decision. Read more on how to manage state in the Pages directory in the article “State Management in Next.js” and in the Next.js documentation.
The Layout Pattern proved to be a great solution. Being able to define rendering strategies granularly is a very welcome feature. So the App directory comes in to put the layout pattern front and center. As a first-class citizen of the Next.js architecture, it enables enormous improvements in terms of performance, security, and data handling.
With React concurrent features, it’s now possible to stream components to the browser and let each one handle its own data. So rendering strategy is even more granular now — instead of page-wide, it’s component-based. Layouts are nested by default, which makes it more clear to the developer what impacts each page based on the file-system architecture. And on top of all that, it is mandatory to explicitly turn a component client-side (via the “use client” directive) in order to use a Context.
Building Blocks Of The App Directory
This architecture is built around the Layout Per Page Architecture. Now there is no _app, nor is there a _document component. They have both been replaced by the root layout.jsx component. As you would expect, that’s a special layout that wraps your entire application.
export default function RootLayout({ children }: { children: React.ReactNode }) {
  return (
    <html lang="en">
      <body>
        {children}
      </body>
    </html>
  );
}
The root layout is our way to manipulate the HTML returned by the server to the entire app at once. It is a server component, and it does not render again upon navigation. This means any data or state in a layout will persist throughout the lifecycle of the app.
While the root layout is a special component for our entire app, we can also have root components for other building blocks:
loading.jsx: to define the Suspense Boundary of an entire route;
error.jsx: to define the Error Boundary of our entire route;
template.jsx: similar to the layout, but re-rendered on every navigation. Especially useful for handling state between routes, such as enter or exit transitions.
All of those components and conventions are nested by default. This means that /about will be nested within the wrappers of / automatically.
Finally, we are also required to have a page.jsx for every route, as it defines the main component to render for that URL segment (also known as the place you put your components!). These are obviously not nested by default and will only show in our DOM when there’s an exact match to the URL segment they correspond to.
There is much more to the architecture (and even more coming!), but this should be enough to get your mental model right before considering migrating from the Pages directory to the App directory in production. Make sure to check on the official upgrade guide as well.
Server Components In A Nutshell
React Server Components allow the app to leverage infrastructure towards better performance and overall user experience. For example, the immediate improvement is on bundle size since RSC won’t carry over their dependencies to the final bundle. Because they’re rendered in the server, any kind of parsing, formatting, or component library will remain on the server code. Secondly, thanks to their asynchronous nature, Server Components are streamed to the client. This allows the rendered HTML to be progressively enhanced on the browser.
So, Server Components lead to a more predictable, cacheable, and constant-size final bundle, breaking the linear correlation between app size and bundle size. This immediately makes RSC a best practice versus traditional React components (which are now referred to as Client Components for disambiguation).
On Server Components, fetching data is also quite flexible and, in my opinion, feels closer to vanilla JavaScript — which always smooths the learning curve. For example, understanding the JavaScript runtime makes it possible to define data-fetching as either parallel or sequential and thus have more fine-grained control on the resource loading waterfall.
Parallel Data Fetching, waiting for all:
import TodoList from './todo-list'

async function getUser(userId) {
  const res = await fetch(`https://<some-api>/user/${userId}`);
  return res.json();
}

async function getTodos(userId) {
  const res = await fetch(`https://<some-api>/todos/${userId}/list`);
  return res.json();
}

export default async function Page({ params: { userId } }) {
  // Initiate both requests in parallel.
  const userResponse = getUser(userId);
  const todosResponse = getTodos(userId);

  // Wait for both promises to resolve.
  const [user, todos] = await Promise.all([userResponse, todosResponse]);

  return (
    <>
      <h1>{user.name}</h1>
      <TodoList list={todos} />
    </>
  );
}
Parallel, waiting for one request, streaming the other:
import { Suspense } from 'react';

async function getUser(userId) {
  const res = await fetch(`https://<some-api>/user/${userId}`);
  return res.json();
}

async function getTodos(userId) {
  const res = await fetch(`https://<some-api>/todos/${userId}/list`);
  return res.json();
}

export default async function Page({ params: { userId } }) {
  // Initiate both requests in parallel.
  const userResponse = getUser(userId);
  const todosResponse = getTodos(userId);

  // Wait only for the user.
  const user = await userResponse;

  return (
    <>
      <h1>{user.name}</h1>
      <Suspense fallback={<div>Fetching todos...</div>}>
        <TodoList listPromise={todosResponse} />
      </Suspense>
    </>
  );
}

async function TodoList({ listPromise }) {
  // Wait for the list's promise to resolve.
  const todos = await listPromise;

  return (
    <ul>
      {todos.map(({ id, name }) => (
        <li key={id}>{name}</li>
      ))}
    </ul>
  );
}
In this case, TodoList receives an in-flight Promise and needs to await it before rendering. The app will render the Suspense fallback component until it’s all done.
Sequential Data Fetching fires one request at a time and awaits each:
async function getUser(username) {
const res = await fetch(`https://<some-api>/user/${userId}`);
return res.json()
}
async function getTodos(username) {
const res = await fetch(`https://<some-api>/todos/${userId}/list`);
return res.json()
}
export default async function Page({ params: { userId } }) {
const user = await getUser(userId)
return (
<>
<h1>{user.name}</h1>
<Suspense fallback={<div>Fetching todos...</div>}>
<TodoList userId={userId} />
</Suspense>
</>
)
}
async function TodoList({ userId }) {
const todos = await getTodos(userId);
return (
<ul>
{todos.map(({ id, name }) => (
<li key={id}>{name}</li>
))}
</ul>
);
}
Now, Page will fetch and wait on getUser, then start rendering. Once it reaches TodoList, it will fetch and wait on getTodos. This is still more granular than what we are used to with the Pages directory.
Important things to note:
Requests within the same component scope will be fired in parallel (more about this at Extended Fetch API below).
Identical requests fired within the same server runtime will be deduplicated: only one actually happens, the one with the shortest cache expiration.
For requests that don't use fetch (such as third-party libraries like SDKs, ORMs, or database clients), route caching will not be affected unless manually configured via the segment cache configuration.
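The deduplication note above can be sketched in plain JavaScript. This is an illustration of the idea only, not Next.js internals; fakeFetch, dedupedFetch, and the counter are made-up names:

```javascript
// Illustrative only: identical in-flight requests share a single Promise,
// so only one "network call" actually happens per URL.
const inFlight = new Map();
let networkCalls = 0;

// Stand-in for a real network request; counts how often it is hit.
function fakeFetch(url) {
  networkCalls += 1;
  return Promise.resolve({ url, call: networkCalls });
}

function dedupedFetch(url) {
  // Reuse the pending Promise if the same URL was already requested.
  if (!inFlight.has(url)) {
    inFlight.set(url, fakeFetch(url));
  }
  return inFlight.get(url);
}
```

Two components requesting the same URL during one render pass would then resolve from the same Promise instead of issuing two requests.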
To point out how much more control this gives developers: within the Pages directory, rendering would be blocked until all data was available. When using getServerSideProps, the user would still see the loading spinner until data for the entire route was ready. To mimic this behavior in the App directory, the fetch requests would need to happen in the layout.tsx for that route, so avoid doing that. An "all or nothing" approach is rarely what you need, and it leads to worse perceived performance compared with this granular strategy.
Extended Fetch API
The syntax remains the same: fetch(route, options). According to the Web Fetch Spec, options.cache determines how the API interacts with the browser cache. In Next.js, however, it interacts with the framework's server-side HTTP cache instead.
When it comes to the extended Fetch API for Next.js and its cache policy, two values are important to understand:
force-cache: the default, looks for a fresh match and returns it.
no-store or no-cache: fetches from the remote server on every request.
next.revalidate: the same syntax as ISR, sets a hard threshold to consider the resource fresh.
The caching strategy allows us to categorize our requests:
Static Data: persist longer. E.g., blog post.
Dynamic Data: changes often and/or is a result of user interaction. E.g., comments section, shopping cart.
By default, all data is considered static data. This is because force-cache is the default caching strategy. To opt out of it for fully dynamic data, it's possible to define no-store or no-cache.
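To make the two policies concrete, here is a minimal in-memory sketch of their semantics. This is an illustration only, not how Next.js actually implements its server-side cache; originFetch, cachedFetch, and the counter are made-up names:

```javascript
// Illustration of force-cache vs. no-store semantics. Not Next.js internals.
const httpCache = new Map();
let originHits = 0;

// Each call here represents a real network round-trip to the origin.
async function originFetch(url) {
  originHits += 1;
  return { url, hit: originHits };
}

async function cachedFetch(url, { cache = 'force-cache' } = {}) {
  if (cache === 'force-cache' && httpCache.has(url)) {
    return httpCache.get(url); // fresh match found: reuse it
  }
  const res = await originFetch(url); // no-store, or a cache miss
  if (cache === 'force-cache') {
    httpCache.set(url, res);
  }
  return res;
}
```

With force-cache, repeated calls for the same URL return the cached result; with no-store, every call goes back to the origin.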
If a dynamic function is used (e.g., setting cookies or headers), the default will switch from force-cache to no-store!
Finally, to implement something closer to Incremental Static Regeneration, you'll need to use next.revalidate. With the benefit that, instead of being defined for the entire route, it applies only to the component it is part of.
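The revalidation threshold can be sketched the same way: a cached entry is reused only while it is younger than the revalidate window. Again, this illustrates the semantics with made-up names (revalidatingFetch, the store, the counter); time is passed in explicitly to keep the sketch deterministic:

```javascript
// Illustration of next.revalidate semantics. Not Next.js internals.
const store = new Map();
let refetches = 0;

async function revalidatingFetch(url, revalidateSeconds, now) {
  const entry = store.get(url);
  if (entry && now - entry.at < revalidateSeconds * 1000) {
    return entry.value; // still within the window: considered fresh
  }
  refetches += 1; // stale or missing: go back to the origin
  const value = { url, fetchedAt: now };
  store.set(url, { value, at: now });
  return value;
}
```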
Migrating From Pages To App
Porting logic from the Pages directory to the App directory may look like a lot of work, but Next.js allows both architectures to coexist, so migration can be done incrementally. Additionally, there is a very good migration guide in the documentation; I recommend reading it fully before jumping into a refactoring.
Guiding you through the migration path is beyond the scope of this article and would make it redundant with the docs. Instead, to add value on top of what the official documentation offers, I will try to provide insight into the friction points my experience suggests you will find.
The Case Of React Context
In order to provide all the benefits mentioned above in this article, RSCs can't be interactive, which means they don't have hooks. Because of that, we push our client-side logic to the leaves of the rendering tree, as late as possible; once you add interactivity, the children of that component will be client-side.
In a few cases, pushing some components down will not be possible, especially if some key functionality depends on React Context. Because most libraries are designed to protect their users against prop drilling, many create context providers to bridge from the root to distant descendants. So ditching React Context entirely may cause some external libraries not to work well.
As a temporary solution, there is an escape hatch: a client-side wrapper for our providers.
// providers.tsx
'use client'

import { type ReactNode, createContext } from 'react';

const SomeContext = createContext<string | undefined>(undefined);

export default function Providers({ children }: { children: ReactNode }) {
  return (
    <SomeContext.Provider value="data">
      {children}
    </SomeContext.Provider>
  );
}
This way, the layout component will not complain about rendering a client component:
// app/.../layout.tsx
import { type ReactNode } from 'react';
import Providers from './providers';

export default function Layout({ children }: { children: ReactNode }) {
  return (
    <Providers>{children}</Providers>
  );
}
It is important to realize that once you do this, the entire branch becomes client-side rendered. Everything within that component will no longer be rendered on the server, so use this approach only as a last resort.
TypeScript And Async React Elements
When using async/await outside of Layouts and Pages, TypeScript will raise an error because the response type does not match its JSX definitions. This is supported and works at runtime, but according to the Next.js documentation, it needs to be fixed upstream in TypeScript.
For now, the solution is to add the comment {/* @ts-expect-error Server Component */} on the line above where the component is used.
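For instance, using the TodoList component from the earlier examples, the workaround looks like this (a sketch of where the comment goes, assuming the same component and props as above):

```jsx
<Suspense fallback={<div>Fetching todos...</div>}>
  {/* @ts-expect-error Server Component */}
  <TodoList userId={userId} />
</Suspense>
```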
Client-Side Fetch In The Works
Historically, Next.js has not had a built-in data mutation story; requests fired from the client side were left to the developer's own discretion to figure out. With React Server Components, this is bound to change: the React team is working on a use hook that accepts a Promise, handles it, and returns the resolved value directly.
In the future, this should supplant most bad uses of useEffect in the wild (more on that in the excellent talk "Goodbye, useEffect") and possibly become the standard for handling asynchronicity (fetching included) in client-side React.
For the time being, it is still recommended to rely on libraries like React Query and SWR for your client-side fetching needs. Be especially aware of the fetch behavior, though!
So, Is It Ready?
Experimenting is the essence of moving forward, and we can't make a nice omelet without breaking eggs. I hope this article has helped you answer this question for your own specific use case.
If on a greenfield project, I'd possibly take the App directory for a spin and keep the Pages directory as a fallback, or for the functionality that is critical for business. If refactoring, it would depend on how much client-side fetching I have: a little, do it; a lot, probably wait for the full story.
Let me know your thoughts on Twitter or in the comments below.