Building an Images Gallery using PixiJS and WebGL
Sometimes, we have to go a little further than HTML, CSS and JavaScript to create the UI we need, and instead use other resources, like SVG, WebGL, canvas and others.
For example, the most amazing effects can be created with WebGL, because it’s a JavaScript API designed to render interactive 2D and 3D graphics within any compatible web browser, allowing GPU-accelerated image processing.
That said, working with WebGL can be very complex, so there's a variety of libraries that make it relatively easier, such as PixiJS, Three.js, and Babylon.js, among others. We're going to work with one of those, PixiJS, to create a gallery of random images inspired by this fragment of a Dribbble shot by Zhenya Rynzhuk.
This looks hard, but you actually don't need advanced knowledge of WebGL, or even PixiJS, to follow along, though some basic knowledge of JavaScript (ES6) will come in handy. You might even want to start by getting familiar with the basic concept of fragment shaders used in WebGL, with The Book of Shaders as a good starting point.
With that, let’s dig into using PixiJS to create this WebGL effect!
Initial setup
Here’s what we’ll need to get started:
- Add the PixiJS library as a script in the HTML.
- Have a <canvas> element (or add it dynamically from JavaScript) to render the application.
- Initialize the application with new PIXI.Application(options).
See, nothing too crazy yet. Here’s the JavaScript we can use as a boilerplate:
// Get canvas view
const view = document.querySelector('.view')
let width, height, app

// Set dimensions
function initDimensions () {
  width = window.innerWidth
  height = window.innerHeight
}

// Init the PixiJS Application
function initApp () {
  // Create a PixiJS Application, using the view (canvas) provided
  app = new PIXI.Application({ view })
  // Resizes renderer view in CSS pixels to allow for resolutions other than 1
  app.renderer.autoDensity = true
  // Resize the view to match viewport dimensions
  app.renderer.resize(width, height)
}

// Init everything
function init () {
  initDimensions()
  initApp()
}

// Initial call
init()
When executing this code, the only thing we will see is a black screen, as well as a message like this if we open up the console: PixiJS 5.0.2 - WebGL 2 - http://www.pixijs.com/.
We are ready to start drawing on the canvas using PixiJS and WebGL!
Creating the grid background with a WebGL Shader
Next, we will create a background that contains a grid, which will allow us to clearly visualize the distortion effect we're after. But first, we must know what a shader is and how it works. I recommended The Book of Shaders earlier as a starting point to learn about them, and this is where those concepts will come into play. If you have not done it yet, I strongly recommend that you review that material, and only then continue here.
We are going to create a fragment shader that prints a grid background on the screen:
// It is required to set the float precision for fragment shaders in OpenGL ES
// More info here: https://stackoverflow.com/a/28540641/4908989
#ifdef GL_ES
precision mediump float;
#endif

// This function returns 1 if `coord` corresponds to a grid line, 0 otherwise
float isGridLine (vec2 coord) {
  vec2 pixelsPerGrid = vec2(50.0, 50.0);
  vec2 gridCoords = fract(coord / pixelsPerGrid);
  vec2 gridPixelCoords = gridCoords * pixelsPerGrid;
  vec2 gridLine = step(gridPixelCoords, vec2(1.0));
  float isGridLine = max(gridLine.x, gridLine.y);
  return isGridLine;
}

// Main function
void main () {
  // Coordinates for the current pixel
  vec2 coord = gl_FragCoord.xy;
  // Set `color` to black
  vec3 color = vec3(0.0);
  // If it is a grid line, change blue channel to 0.3
  color.b = isGridLine(coord) * 0.3;
  // Assign the final rgba color to `gl_FragColor`
  gl_FragColor = vec4(color, 1.0);
}
This code is drawn from a demo on Shadertoy, which is a great source of inspiration and resources for shaders.
In order to use this shader, we must first load the code from the file it is in and, only after it has been loaded correctly, initialize the app.
// Loaded resources will be here
const resources = PIXI.Loader.shared.resources

// Load resources, then init the app
PIXI.Loader.shared.add([
  'shaders/backgroundFragment.glsl'
]).load(init)
Now, for our shader to work where we can see the result, we will add a new element (an empty Sprite) to the stage, which we will use to define a filter. This is the way PixiJS lets us execute custom shaders like the one we just created.
// Sprite for the gridded background
let background

// Init the gridded background
function initBackground () {
  // Create a new empty Sprite and define its size
  background = new PIXI.Sprite()
  background.width = width
  background.height = height
  // Get the code for the fragment shader from the loaded resources
  const backgroundFragmentShader = resources['shaders/backgroundFragment.glsl'].data
  // Create a new Filter using the fragment shader
  // We don't need a custom vertex shader, so we set it as `undefined`
  const backgroundFilter = new PIXI.Filter(undefined, backgroundFragmentShader)
  // Assign the filter to the background Sprite
  background.filters = [backgroundFilter]
  // Add the background to the stage
  app.stage.addChild(background)
}
And now we see the gridded background with blue lines. Look closely because the lines are a little faint against the dark background color.
The distortion effect
Our background is now ready, so let's see how we can add the desired effect (Cubic Lens Distortion) to the whole stage, including the background and any other element that we add later, like images. For this, we need to create a new filter and add it to the stage. Yes, we can also define filters that affect the entire stage of PixiJS!
This time, we have based the code of our shader on this awesome Shadertoy demo that implements the distortion effect using different configurable parameters.
#ifdef GL_ES
precision mediump float;
#endif

// Uniforms from JavaScript
uniform vec2 uResolution;
uniform float uPointerDown;

// The texture is defined by PixiJS
varying vec2 vTextureCoord;
uniform sampler2D uSampler;

// Function used to get the distortion effect
vec2 computeUV (vec2 uv, float k, float kcube) {
  vec2 t = uv - 0.5;
  float r2 = t.x * t.x + t.y * t.y;
  float f = 0.0;
  if (kcube == 0.0) {
    f = 1.0 + r2 * k;
  } else {
    f = 1.0 + r2 * (k + kcube * sqrt(r2));
  }
  vec2 nUv = f * t + 0.5;
  nUv.y = 1.0 - nUv.y;
  return nUv;
}

void main () {
  // Normalized coordinates
  vec2 uv = gl_FragCoord.xy / uResolution.xy;

  // Settings for the effect
  // Multiplied by `uPointerDown`, a value between 0 and 1
  float k = -1.0 * uPointerDown;
  float kcube = 0.5 * uPointerDown;
  float offset = 0.02 * uPointerDown;

  // Get each channel's color using the texture provided by PixiJS
  // and the `computeUV` function
  float red = texture2D(uSampler, computeUV(uv, k + offset, kcube)).r;
  float green = texture2D(uSampler, computeUV(uv, k, kcube)).g;
  float blue = texture2D(uSampler, computeUV(uv, k - offset, kcube)).b;

  // Assign the final rgba color to `gl_FragColor`
  gl_FragColor = vec4(red, green, blue, 1.0);
}
We are using two uniforms this time. Uniforms are variables that we pass to the shader via JavaScript:
- uResolution: This is a JavaScript object that includes {x: width, y: height}. This uniform allows us to normalize the coordinates of each pixel in the range [0, 1].
- uPointerDown: This is a float in the range [0, 1], which allows us to animate the distortion effect, increasing its intensity proportionally.
Let’s see the code that we have to add to our JavaScript to see the distortion effect caused by our new shader:
// Target for pointer. If down, value is 1, else value is 0
// Here we set it to 1 to see the effect, but initially it will be 0
let pointerDownTarget = 1
let uniforms

// Set initial values for uniforms
function initUniforms () {
  uniforms = {
    uResolution: new PIXI.Point(width, height),
    uPointerDown: pointerDownTarget
  }
}
// Set the distortion filter for the entire stage
const stageFragmentShader = resources['shaders/stageFragment.glsl'].data
const stageFilter = new PIXI.Filter(undefined, stageFragmentShader, uniforms)
app.stage.filters = [stageFilter]
We can already enjoy our distortion effect!
This effect is static at the moment, so it’s not terribly fun just yet. Next, we’ll see how we can make the effect dynamically respond to pointer events.
Listening to pointer events
PixiJS makes it surprisingly simple to listen to events, even multiple events that respond equally to mouse and touch interactions. In this case, we want our animation to work just as well on desktop as on a mobile device, so we must listen to the events corresponding to both platforms.
PixiJS provides an interactive attribute that lets us do just that. We apply it to an element and start listening to events with an API similar to jQuery:
// Start listening events
function initEvents () {
  // Make stage interactive, so it can listen to events
  app.stage.interactive = true
  // Pointer & touch events are normalized into
  // the `pointer*` events for handling different events
  app.stage
    .on('pointerdown', onPointerDown)
    .on('pointerup', onPointerUp)
    .on('pointerupoutside', onPointerUp)
    .on('pointermove', onPointerMove)
}
From here, we will start using a third uniform (uPointerDiff), which will allow us to explore the image gallery using drag and drop. Its value will be equal to the translation of the scene as we explore the gallery. Below is the code corresponding to each of the event handling functions:
// Pointer-related variables (declared alongside the other globals)
let pointerStart = new PIXI.Point()
let pointerDiffStart = new PIXI.Point()
let diffX, diffY

// On pointer down, save coordinates and set pointerDownTarget
function onPointerDown (e) {
  console.log('down')
  const { x, y } = e.data.global
  pointerDownTarget = 1
  pointerStart.set(x, y)
  pointerDiffStart = uniforms.uPointerDiff.clone()
}

// On pointer up, set pointerDownTarget
function onPointerUp () {
  console.log('up')
  pointerDownTarget = 0
}

// On pointer move, calculate coordinates diff
function onPointerMove (e) {
  const { x, y } = e.data.global
  if (pointerDownTarget) {
    console.log('dragging')
    diffX = pointerDiffStart.x + (x - pointerStart.x)
    diffY = pointerDiffStart.y + (y - pointerStart.y)
  }
}
If we look at our work, we still won't see any animation, but we can check that the messages we defined in each event handler function are correctly printed in the console.
Let’s now turn to implementing our animations!
Animating the distortion effect and the drag and drop functionality
The first thing we need to start an animation with PixiJS (or any canvas-based animation) is an animation loop. It usually consists of a function that is called continuously, using requestAnimationFrame, which in each call renders the graphics on the canvas element, thus producing the desired animation.
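For reference, a bare-bones version of such a loop (without any library helpers) could look like this minimal sketch:

// Minimal hand-rolled animation loop (for illustration only;
// we'll rely on PixiJS's ticker below instead)
function animationLoop () {
  // ... update values and render the frame here ...
  requestAnimationFrame(animationLoop)
}
requestAnimationFrame(animationLoop)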
We can implement our own animation loop in PixiJS, or we can use the utilities included in the library. In this case, we will use the add method of app.ticker, which allows us to pass a function that will be executed in each frame. At the end of the init function we will add this:
// Animation loop
// Code here will be executed on every animation frame
app.ticker.add(() => {
  // Multiply the values by a coefficient to get a smooth animation
  uniforms.uPointerDown += (pointerDownTarget - uniforms.uPointerDown) * 0.075
  uniforms.uPointerDiff.x += (diffX - uniforms.uPointerDiff.x) * 0.2
  uniforms.uPointerDiff.y += (diffY - uniforms.uPointerDiff.y) * 0.2
})
Meanwhile, in the Filter constructor for the background, we will pass the same uniforms used in the stage filter. This allows us to simulate the translation effect of the background with this tiny modification in the corresponding shader:
uniform vec2 uPointerDiff;

void main () {
  // Coordinates minus the `uPointerDiff` value
  vec2 coord = gl_FragCoord.xy - uPointerDiff;
  // ... more code here ...
}
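On the JavaScript side, the change amounts to passing the shared uniforms object as the third argument of the background's Filter. Here's a sketch of that idea, assuming the uniforms object has also gained a uPointerDiff entry (for example, a PIXI.Point):

// Sketch: reuse the stage uniforms in the background filter, so the shader
// above can read `uPointerDiff` (assumes uniforms.uPointerDiff exists,
// e.g. uPointerDiff: new PIXI.Point() added in initUniforms)
const backgroundFilter = new PIXI.Filter(undefined, backgroundFragmentShader, uniforms)
background.filters = [backgroundFilter]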
And now we can see the distortion effect in action, including the drag and drop functionality for the gridded background. Play with it!
Randomly generate a masonry grid layout
To make our UI more interesting, we can randomly generate the sizing and dimensions of the grid cells. That is, each image can have different dimensions, creating a kind of masonry layout.
Let's use Unsplash Source, which will allow us to get random images from Unsplash and define the dimensions we want. This will facilitate the task of creating a random masonry layout, since the images can have any dimension we want, and therefore we can generate the layout beforehand.
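For example, requesting a URL like the following returns a random Unsplash image at the requested size (600x400 here is just an arbitrary example; it's the same URL pattern we'll use later when loading textures):

https://source.unsplash.com/random/600x400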
To achieve this, we will use an algorithm that executes the following steps:
- We will start with a list of rectangles.
- We will select the first rectangle in the list and divide it into two rectangles with random dimensions, as long as both resulting rectangles have dimensions equal to or greater than the minimum established limit. We'll add a check to make sure the split is possible and, if it is, add both resulting rectangles to the list; if it is not, the rectangle is final and leaves the list.
- If the list is empty, we will finish executing. If not, we’ll go back to step two.
I think you’ll get a much better understanding of how the algorithm works in this next demo. Use the buttons to see how it runs: Next will execute step two, All will execute the entire algorithm, and Reset will reset to step one.
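For reference, here is a rough sketch of how a Grid helper implementing those steps could look. This is not the exact implementation used in the final demo; the class name, the constructor signature, and the { x, y, w, h } rect shape (in grid units) simply mirror how it is used later on:

// A minimal sketch of the splitting algorithm described above
// (an assumption, not the exact code from the demo)
class Grid {
  constructor (gridSize, gridColumns, gridRows, gridMin) {
    this.gridSize = gridSize
    this.gridColumns = gridColumns
    this.gridRows = gridRows
    this.gridMin = gridMin
  }
  // Returns a list of rects ({ x, y, w, h }, in grid units)
  generateRects () {
    // Start with a single rect covering the whole grid
    const pending = [{ x: 0, y: 0, w: this.gridColumns, h: this.gridRows }]
    const rects = []
    while (pending.length) {
      const rect = pending.shift()
      // A split is possible only if both halves can be at least `gridMin` units
      const canSplitX = rect.w >= this.gridMin * 2
      const canSplitY = rect.h >= this.gridMin * 2
      if (!canSplitX && !canSplitY) {
        // Too small to split further: this rect is final
        rects.push(rect)
        continue
      }
      // Choose a split direction at random among the possible ones
      const splitX = canSplitX && (!canSplitY || Math.random() < 0.5)
      if (splitX) {
        // Random cut so both halves are at least `gridMin` columns wide
        const w = this.gridMin + Math.floor(Math.random() * (rect.w - this.gridMin * 2 + 1))
        pending.push({ x: rect.x, y: rect.y, w, h: rect.h })
        pending.push({ x: rect.x + w, y: rect.y, w: rect.w - w, h: rect.h })
      } else {
        // Random cut so both halves are at least `gridMin` rows tall
        const h = this.gridMin + Math.floor(Math.random() * (rect.h - this.gridMin * 2 + 1))
        pending.push({ x: rect.x, y: rect.y, w: rect.w, h })
        pending.push({ x: rect.x, y: rect.y + h, w: rect.w, h: rect.h - h })
      }
    }
    return rects
  }
}

In this sketch, with gridMin set to 3, splitting continues until every cell ends up between 3 and 5 grid units per side, which is what produces the varied, masonry-like cells.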
Drawing solid rectangles
Now that we can properly generate our random grid layout, we will use the list of rectangles generated by the algorithm to draw solid rectangles in our PixiJS application. That way, we can see if it works and make adjustments before adding the images using the Unsplash Source API.
To draw those rectangles, we will generate a random grid layout that is five times bigger than the viewport and position it in the center of the stage. That allows us to move with some freedom in any direction within the gallery.
// Variables and settings for grid
const gridSize = 50
const gridMin = 3
let gridColumnsCount, gridRowsCount, gridColumns, gridRows, grid
let widthRest, heightRest, centerX, centerY, rects

// Initialize the random grid layout
function initGrid () {
  // Getting columns
  gridColumnsCount = Math.ceil(width / gridSize)
  // Getting rows
  gridRowsCount = Math.ceil(height / gridSize)
  // Make the grid 5 times bigger than viewport
  gridColumns = gridColumnsCount * 5
  gridRows = gridRowsCount * 5
  // Create a new Grid instance with our settings
  grid = new Grid(gridSize, gridColumns, gridRows, gridMin)
  // Calculate the center position for the grid in the viewport
  widthRest = Math.ceil(gridColumnsCount * gridSize - width)
  heightRest = Math.ceil(gridRowsCount * gridSize - height)
  centerX = (gridColumns * gridSize / 2) - (gridColumnsCount * gridSize / 2)
  centerY = (gridRows * gridSize / 2) - (gridRowsCount * gridSize / 2)
  // Generate the list of rects
  rects = grid.generateRects()
}
So far, we have generated the list of rectangles. To add them to the stage, it is convenient to create a container, since then we can add the images to the same container and facilitate the movement when we drag the gallery.
Creating a container in PixiJS looks like this:
let container

// Initialize a Container element for solid rectangles and images
function initContainer () {
  container = new PIXI.Container()
  app.stage.addChild(container)
}
Now we can add the rectangles to the container so they can be displayed on the screen.
// Padding for rects and images
const imagePadding = 20

// Add solid rectangles and images
// So far, we will only add rectangles
function initRectsAndImages () {
  // Create a new Graphics element to draw solid rectangles
  const graphics = new PIXI.Graphics()
  // Select the color for rectangles
  graphics.beginFill(0xAA22CC)
  // Loop over each rect in the list
  rects.forEach(rect => {
    // Draw the rectangle
    graphics.drawRect(
      rect.x * gridSize,
      rect.y * gridSize,
      rect.w * gridSize - imagePadding,
      rect.h * gridSize - imagePadding
    )
  })
  // Ends the fill action
  graphics.endFill()
  // Add the graphics (with all drawn rects) to the container
  container.addChild(graphics)
}
Note that we have added a padding (imagePadding) to the calculations for each rectangle. This way, the images will have some space between them.
Finally, in the animation loop, we need to add the following code to properly define the position for the container:
// Set position for the container
container.x = uniforms.uPointerDiff.x - centerX
container.y = uniforms.uPointerDiff.y - centerY
And now we get the following result:
But there are still some details to fix, like defining limits for the drag and drop feature. Let's add this to the onPointerMove event handler, where we effectively check the limits according to the size of the grid we have calculated:
diffX = diffX > 0 ? Math.min(diffX, centerX + imagePadding) : Math.max(diffX, -(centerX + widthRest))
diffY = diffY > 0 ? Math.min(diffY, centerY + imagePadding) : Math.max(diffY, -(centerY + heightRest))
Another small detail that makes things more refined is to add an offset to the grid background. That keeps the blue grid lines intact. We just have to add the desired offset (imagePadding / 2 in our case) to the background shader this way:
// Coordinates minus the `uPointerDiff` value, and plus an offset
vec2 coord = gl_FragCoord.xy - uPointerDiff + vec2(10.0);
And we will get the final design for our random grid layout:
Adding images from Unsplash Source
We have our layout ready, so we are all set to add images to it. To add an image in PixiJS, we need a Sprite, which will have the image as its Texture. There are multiple ways of doing this. In our case, we will first create an empty Sprite for each image and, only when the Sprite is inside the viewport, load the image, create the Texture and add it to the Sprite. Sound like a lot? We'll go through it step-by-step.
To create the empty sprites, we will modify the initRectsAndImages function. Please pay attention to the comments for a better understanding:
// For the list of images
let images = []

// Add solid rectangles and images
function initRectsAndImages () {
  // Create a new Graphics element to draw solid rectangles
  const graphics = new PIXI.Graphics()
  // Select the color for rectangles
  graphics.beginFill(0x000000)
  // Loop over each rect in the list
  rects.forEach(rect => {
    // Create a new Sprite element for each image
    const image = new PIXI.Sprite()
    // Set image's position and size
    image.x = rect.x * gridSize
    image.y = rect.y * gridSize
    image.width = rect.w * gridSize - imagePadding
    image.height = rect.h * gridSize - imagePadding
    // Set its alpha to 0, so it is not visible initially
    image.alpha = 0
    // Add image to the list
    images.push(image)
    // Draw the rectangle
    graphics.drawRect(image.x, image.y, image.width, image.height)
  })
  // Ends the fill action
  graphics.endFill()
  // Add the graphics (with all drawn rects) to the container
  container.addChild(graphics)
  // Add all image's Sprites to the container
  images.forEach(image => {
    container.addChild(image)
  })
}
So far, we only have empty sprites. Next, we will create a function that's responsible for downloading an image and assigning it as a Texture to the corresponding Sprite. This function will only be called if the Sprite is inside the viewport, so that the image only downloads when necessary.
On the other hand, if the gallery is dragged and a Sprite is no longer inside the viewport during the course of the download, that request may be aborted, since we are going to use an AbortController (more on this on MDN). In this way, we will cancel the unnecessary requests as we drag the gallery, giving priority to the requests corresponding to the sprites that are inside the viewport at every moment.
Let's look at the code to make these ideas a little more concrete:
// To store image URLs and avoid duplicates
let imagesUrls = {}

// Load texture for an image, given its index
function loadTextureForImage (index) {
  // Get image Sprite
  const image = images[index]
  // Set the url to get a random image from Unsplash Source, given image dimensions
  const url = `https://source.unsplash.com/random/${image.width}x${image.height}`
  // Get the corresponding rect, to store more data needed (it is a normal Object)
  const rect = rects[index]
  // Create a new AbortController, to abort fetch if needed
  const { signal } = rect.controller = new AbortController()
  // Fetch the image
  fetch(url, { signal }).then(response => {
    // Get image URL, and if it was downloaded before, load another image
    // Otherwise, save image URL and set the texture
    const id = response.url.split('?')[0]
    if (imagesUrls[id]) {
      loadTextureForImage(index)
    } else {
      imagesUrls[id] = true
      image.texture = PIXI.Texture.from(response.url)
      rect.loaded = true
    }
  }).catch(() => {
    // Catch errors silently, for not showing the following error message if it is aborted:
    // AbortError: The operation was aborted.
  })
}
Now we need to call the loadTextureForImage function for each image whose corresponding Sprite is intersecting with the viewport. In addition, we will cancel the fetch requests that are no longer needed, and we will add an alpha transition when the rectangles enter or leave the viewport.
// Check if rects intersect with the viewport
// and load the corresponding images
function checkRectsAndImages () {
  // Loop over rects
  rects.forEach((rect, index) => {
    // Get corresponding image
    const image = images[index]
    // Check if the rect intersects with the viewport
    if (rectIntersectsWithViewport(rect)) {
      // If rect just has been discovered
      // start loading image
      if (!rect.discovered) {
        rect.discovered = true
        loadTextureForImage(index)
      }
      // If image is loaded, increase alpha if possible
      if (rect.loaded && image.alpha < 1) {
        image.alpha += 0.01
      }
    } else { // The rect is not intersecting
      // If the rect was discovered before, but the
      // image is not loaded yet, abort the fetch
      if (rect.discovered && !rect.loaded) {
        rect.discovered = false
        rect.controller.abort()
      }
      // Decrease alpha if possible
      if (image.alpha > 0) {
        image.alpha -= 0.01
      }
    }
  })
}
And the function that verifies if a rectangle is intersecting with the viewport is the following:
// Check if a rect intersects the viewport
function rectIntersectsWithViewport (rect) {
  return (
    rect.x * gridSize + container.x <= width &&
    0 <= (rect.x + rect.w) * gridSize + container.x &&
    rect.y * gridSize + container.y <= height &&
    0 <= (rect.y + rect.h) * gridSize + container.y
  )
}
Lastly, we have to add the checkRectsAndImages function to the animation loop:
// Animation loop
app.ticker.add(() => {
  // ... more code here ...

  // Check rects and load/cancel images as needed
  checkRectsAndImages()
})
Our animation is nearly ready!
Handling changes in viewport size
When initializing the application, we resized the renderer so that it occupies the whole viewport, but if the viewport changes its size for any reason (for example, the user rotates their mobile device), we should re-adjust the dimensions and restart the application.
// On resize, reinit the app (clean and init)
// But first debounce the calls, so we don't call init too often
let resizeTimer
function onResize () {
  if (resizeTimer) clearTimeout(resizeTimer)
  resizeTimer = setTimeout(() => {
    clean()
    init()
  }, 200)
}

// Listen to resize event
window.addEventListener('resize', onResize)
The clean function will clean up any leftovers from the animation we were running before the viewport changed its dimensions:
// Clean the current Application
function clean () {
  // Stop the current animation
  app.ticker.stop()
  // Remove event listeners
  app.stage
    .off('pointerdown', onPointerDown)
    .off('pointerup', onPointerUp)
    .off('pointerupoutside', onPointerUp)
    .off('pointermove', onPointerMove)
  // Abort all fetch calls in progress
  rects.forEach(rect => {
    if (rect.discovered && !rect.loaded) {
      rect.controller.abort()
    }
  })
}
In this way, our application will respond properly to the dimensions of the viewport, no matter how it changes. This gives us the full and final result of our work!
Some final thoughts
Thanks for taking this journey with me! We walked through a lot, but we learned plenty of concepts along the way and ended up with a pretty neat piece of UI. You can check the code on GitHub, or play with the demos on CodePen.
If you have worked with WebGL before (with or without using other libraries), I hope you saw how nice it is to work with PixiJS. It abstracts away the complexity of the WebGL world in a great way, allowing us to focus on what we want to do rather than the technical details needed to make it work.
The bottom line is that PixiJS brings the world of WebGL within reach for front-end developers, opening up a lot of possibilities beyond HTML, CSS and JavaScript.