Make a Custom Camera Filter in React

Milecia

With almost everything being online, having the ability to add custom filters to your camera can be pretty useful. We're going to build a React app that lets users adjust a filter for their webcam and then save the image to Cloudinary.

We'll be using p5.js to let us apply filters to a user's camera. Once you finish this project, you'll be able to start working with p5.js in all kinds of media apps.

Setting up the React app

We can use the create-react-app command to generate a new React project with TypeScript enabled so we can be ready to add types from the beginning. Open your terminal and run the following command.

```shell
$ npx create-react-app camera-filter --template typescript
```

You should see a new folder called camera-filter and it will have a number of boilerplate files to get us started. There are a few packages we'll need to handle the camera filter and the upload to Cloudinary.

```shell
$ npm i p5 @types/p5 html-to-image
```

These are the packages we need to make the filter for the camera. Also, if you don't have a Cloudinary account, make sure you create a free one before moving forward, because you'll need its credentials to upload pictures to this hosting service.

Now we can work on a new component for our camera filter.

Adding the camera filter component

Go to the src folder at the root of your project and add a new sub-folder called components. Inside this folder, add a file called CameraFilter.tsx. This is a common file structure you'll run into with React projects to help keep things organized. Since the camera filter won't be a whole page by itself, we classify it as a component.

Let's start building the camera filter by importing the following packages at the top of the CameraFilter.tsx file.

```typescript
// CameraFilter.tsx

import { useEffect, useRef } from "react";
import { toPng } from "html-to-image";
import p5 from "p5";
```

These are the main packages we'll need for this component. Now let's add the functionality we need for p5.js to apply filters to the camera.

Setting up p5.js

Getting this working in React projects can be a little tricky, but once you do it, you have a lot of power over how things are displayed on your site. This package has a lot of really interesting functionality you should check out. Make sure you have the CameraFilter.tsx file open and add the following code below the imports.

```typescript
// CameraFilter.tsx
...

let cam: any, custShader: any;

function sketch(p: any) {
  // p is a reference to the p5 instance this sketch is attached to
  p.preload = () => {
    custShader = p.loadShader("../assets/webcam.vert", "../assets/webcam.frag");
  };

  p.setup = () => {
    // shaders require WEBGL mode to work
    p.createCanvas(p.windowWidth, p.windowHeight, p.WEBGL);
    p.noStroke();

    // initialize the webcam at the window size
    cam = p.createCapture(p.VIDEO);
    cam.size(p.windowWidth, p.windowHeight);

    // hide the html element that createCapture adds to the screen
    cam.hide();
  };

  p.draw = () => {
    // shader() sets the active shader with our shader
    p.shader(custShader);

    // let's just send the cam to our shader as a uniform
    custShader.setUniform("tex0", cam);

    // the size of one pixel on the screen
    custShader.setUniform("stepSize", [1.0 / p.width, 1.0 / p.height]);

    // how far away to sample from the current pixel
    // 1 is 1 pixel away
    custShader.setUniform("dist", 3.0);

    // rect gives us some geometry on the screen
    p.rect(0, 0, p.width, p.height);
  };
}
```

This code sets some variables we need for p5 to work with the camera and a custom shader to make the filter. You can learn more about WebGL shaders here so that you'll know how to make your own shaders. This could be useful if you plan on working with 3D media in your web apps.

Next, we define the sketch function, which holds the methods p5 will call once we create a new instance of the p5 object a bit later. The sketch function implements a few methods that p5 expects.

In order to use our custom filter, we'll need a preload method on the p5 object. This calls p5's loadShader method with paths to our shader assets to make the shader ready to use. Then we have the setup method, which tells p5 what to do with the DOM as soon as it loads on the page. Finally, we have the draw method, which applies the shader to the camera we initialized in the setup method.
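
To make that lifecycle concrete, here is a stripped-down stand-in for p5's instance mode. This is NOT the real library, just a sketch of the contract: `new p5(sketch, node)` calls your sketch function once with the instance, and p5 itself then runs preload (waiting for assets), setup, and finally draw on each animation frame.

```typescript
// Simplified stand-in for p5 instance mode, to illustrate call order only.
type Sketch = (p: FakeP5) => void;

class FakeP5 {
  preload?: () => void;
  setup?: () => void;
  draw?: () => void;

  constructor(sketch: Sketch) {
    sketch(this);     // the sketch attaches preload/setup/draw to the instance
    this.preload?.(); // p5 loads assets first
    this.setup?.();   // then runs one-time setup
    this.draw?.();    // then the draw loop (a single frame here)
  }
}

const order: string[] = [];
new FakeP5((p) => {
  p.preload = () => order.push("preload");
  p.setup = () => order.push("setup");
  p.draw = () => order.push("draw");
});
console.log(order.join(" -> ")); // preload -> setup -> draw
```

This is why our sketch function only *assigns* the methods: p5 decides when each one runs.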

The camera component

With all of the p5 setup ready to use, we need to create the component that gets rendered in the browser. Beneath the p5 code, add the following code.

```typescript
// CameraFilter.tsx
...

export default function CameraFilter() {
  const p5ContainerRef = useRef<HTMLDivElement>(null);

  useEffect(() => {
    // On component creation, instantiate a p5 object with the sketch and container reference
    // (the ref is attached by the time the effect runs)
    const p5Instance = new p5(sketch, p5ContainerRef.current!);

    // On component destruction, delete the p5 instance
    return () => {
      p5Instance.remove();
    };
  }, []);

  async function submit(e: any) {
    e.preventDefault();

    if (p5ContainerRef.current === null) {
      return;
    }

    const dataUrl = await toPng(p5ContainerRef.current, { cacheBust: true });

    const uploadApi = `https://api.cloudinary.com/v1_1/your_cloud_name/image/upload`;

    const formData = new FormData();
    formData.append("file", dataUrl);
    formData.append("upload_preset", "your_upload_preset");

    await fetch(uploadApi, {
      method: "POST",
      body: formData,
    });
  }

  return (
    <>
      <div id="camera" ref={p5ContainerRef}></div>
      <button type="submit" onClick={submit}>
        Save picture
      </button>
    </>
  );
}
```

Let's walk through this from the beginning. First, we use the useRef hook from React to define the HTML reference to the element we'll render the p5 shader on. Then we take advantage of the effect hook to create an instance of the p5 object using the sketch function we wrote earlier when the component is created. We also clean up and remove the p5 instance when the component is destroyed to make sure we don't have any weird behavior.

Next, we create the submit function for when we decide to save a picture that's been altered by our filter. Inside this function, a few things happen. We check that the referenced element has something in it; if it doesn't, we just return from the function. If it does, we use the html-to-image package to create a PNG of the filtered image.

Once we have the PNG, we make a variable that holds the API connection string to Cloudinary. Make sure you update this to use your own cloud name so that the images go to your Cloudinary account. Next, we create a new FormData object that holds all of the values Cloudinary needs to accept our upload programmatically. You'll also need to update the upload preset to match your Cloudinary account; you can find this value in your account settings.

The last thing we do in the submit function is call the fetch API to send our filtered image to Cloudinary. The only things remaining for this component are the rendered elements themselves.

As you can see, there isn't much that gets rendered on the page. We have the <div> that the p5 instance references and a button that calls the submit function when we want to save an image. All that's left for our camera filter is defining the shader files that define what the filter does to the image.
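
One note on the upload: the fetch call in submit fires and forgets, but Cloudinary's upload response is JSON that includes the hosted image URL in a `secure_url` field. Here's a rough sketch of how you might factor that out; the helper names `buildUploadForm` and `uploadToCloudinary` are my own, and `your_cloud_name` / `your_upload_preset` remain placeholders as in the component above.

```typescript
// Hypothetical helpers around the upload flow from the submit function.
function buildUploadForm(dataUrl: string, uploadPreset: string): FormData {
  const formData = new FormData();
  formData.append("file", dataUrl); // Cloudinary accepts a data: URL as the file value
  formData.append("upload_preset", uploadPreset);
  return formData;
}

async function uploadToCloudinary(dataUrl: string): Promise<string> {
  const uploadApi = "https://api.cloudinary.com/v1_1/your_cloud_name/image/upload";
  const res = await fetch(uploadApi, {
    method: "POST",
    body: buildUploadForm(dataUrl, "your_upload_preset"),
  });
  const json = await res.json();
  return json.secure_url; // the hosted URL of the uploaded image
}
```

You could then display or store the returned URL instead of discarding the response.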

Writing shader files

As we mentioned earlier, you can learn more about WebGL shaders here, and I highly recommend you take a look, because understanding what's happening in these files is important. We aren't going to do a deep dive into them, because that takes some understanding of topics outside the scope of this tutorial. We will, however, create a couple of files to make our custom shader.

In the src directory, add a new sub-directory called assets. Inside that folder, make two new files: webcam.vert and webcam.frag. These will make our custom shader. Add the following code to the webcam.vert file.

```glsl
// webcam.vert

attribute vec3 aPosition;
attribute vec2 aTexCoord;

varying vec2 vTexCoord;

void main() {
  vTexCoord = aTexCoord;

  // copy the position data into a vec4, using 1.0 as the w component
  vec4 positionVec4 = vec4(aPosition, 1.0);
  positionVec4.xy = positionVec4.xy * 2.0 - 1.0;

  // send the vertex information on to the fragment shader
  gl_Position = positionVec4;
}
```

Then open the webcam.frag file and add this code.

```glsl
// webcam.frag

precision mediump float;

// our texcoords from the vertex shader
varying vec2 vTexCoord;

// the texture that we want to manipulate
uniform sampler2D tex0;

// how big of a step to take. 1.0 / width = 1 texel
// doing this math in p5 saves a little processing power
uniform vec2 stepSize;
uniform float dist;

// an array with 9 vec2's
// each index in the array will be a step in a different direction around a pixel
// upper left, upper middle, upper right
// middle left, middle, middle right
// lower left, lower middle, lower right
vec2 offset[9];

// the convolution kernel we will use
// different kernels produce different effects
// we can do things like emboss, sharpen, blur, etc.
float kernel[9];

// the sum total of all the values in the kernel
float kernelWeight = 0.0;

// our final convolution value that will be rendered to the screen
vec4 conv = vec4(0.0);

void main() {
  vec2 uv = vTexCoord;
  // flip the y uvs
  uv.y = 1.0 - uv.y;

  // different values in the kernels produce different effects
  // take a look here for some more examples https://en.wikipedia.org/wiki/Kernel_(image_processing) or https://docs.gimp.org/en/plug-in-convmatrix.html

  // here are a few examples, try uncommenting them to see how they affect the image

  // emboss kernel
  kernel[0] = -2.0; kernel[1] = -1.0; kernel[2] = 0.0;
  kernel[3] = -1.0; kernel[4] = 1.0;  kernel[5] = 1.0;
  kernel[6] = 0.0;  kernel[7] = 1.0;  kernel[8] = 2.0;

  // sharpen kernel
  // kernel[0] = -1.0; kernel[1] = 0.0; kernel[2] = -1.0;
  // kernel[3] = 0.0;  kernel[4] = 5.0; kernel[5] = 0.0;
  // kernel[6] = -1.0; kernel[7] = 0.0; kernel[8] = -1.0;

  // gaussian blur kernel
  // kernel[0] = 1.0; kernel[1] = 2.0; kernel[2] = 1.0;
  // kernel[3] = 2.0; kernel[4] = 4.0; kernel[5] = 2.0;
  // kernel[6] = 1.0; kernel[7] = 2.0; kernel[8] = 1.0;

  // edge detect kernel
  // kernel[0] = -1.0; kernel[1] = -1.0; kernel[2] = -1.0;
  // kernel[3] = -1.0; kernel[4] = 8.0;  kernel[5] = -1.0;
  // kernel[6] = -1.0; kernel[7] = -1.0; kernel[8] = -1.0;

  offset[0] = vec2(-stepSize.x, -stepSize.y); // top left
  offset[1] = vec2(0.0, -stepSize.y);         // top middle
  offset[2] = vec2(stepSize.x, -stepSize.y);  // top right
  offset[3] = vec2(-stepSize.x, 0.0);         // middle left
  offset[4] = vec2(0.0, 0.0);                 // middle
  offset[5] = vec2(stepSize.x, 0.0);          // middle right
  offset[6] = vec2(-stepSize.x, stepSize.y);  // bottom left
  offset[7] = vec2(0.0, stepSize.y);          // bottom middle
  offset[8] = vec2(stepSize.x, stepSize.y);   // bottom right

  for (int i = 0; i < 9; i++) {
    // sample a 3x3 grid of pixels
    vec4 color = texture2D(tex0, uv + offset[i] * dist);

    // multiply the color by the kernel value and add it to our conv total
    conv += color * kernel[i];

    // keep a running tally of the kernel weights
    kernelWeight += kernel[i];
  }

  // normalize the convolution by dividing by the kernel weight
  conv.rgb /= kernelWeight;

  gl_FragColor = vec4(conv.rgb, 1.0);
}
```

This will create a convolution kernel effect on the camera, giving us our custom filter. You can find the source code for this shader and others in this repo. The shader files are already referenced in code we wrote earlier, so we are finished!
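
The shader's inner loop is ordinary convolution math, so it can help to see the same computation outside of GLSL. Here's a standalone TypeScript illustration (not part of the app) that applies the emboss kernel from webcam.frag to the center pixel of a tiny grayscale grid:

```typescript
// 3x3 convolution over one pixel of a grayscale image, mirroring the shader:
// conv += color * kernel[i], then divide by the total kernel weight.
function convolvePixel(
  image: number[][], // grayscale values
  x: number,
  y: number,
  kernel: number[] // 9 weights, row by row, top-left to bottom-right
): number {
  let conv = 0;
  let kernelWeight = 0;
  let i = 0;
  for (let dy = -1; dy <= 1; dy++) {
    for (let dx = -1; dx <= 1; dx++) {
      conv += image[y + dy][x + dx] * kernel[i];
      kernelWeight += kernel[i];
      i++;
    }
  }
  // normalize, like conv.rgb /= kernelWeight in the shader
  return conv / kernelWeight;
}

// The emboss kernel from webcam.frag (its weights sum to 1)
const emboss = [-2, -1, 0, -1, 1, 1, 0, 1, 2];

const image = [
  [10, 10, 10],
  [10, 50, 90],
  [90, 90, 90],
];
console.log(convolvePixel(image, 1, 1, emboss)); // 370
```

Note that the edge-detect kernel's weights sum to zero, so this normalization would divide by zero; for that kernel you'd typically skip the division.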

The only thing left to do is run your project with npm start and take a look at how the filter changes the camera image.

Finished code

You can take a look at the complete code in the camera-filter folder of this repo. Or you can check it out in this Code Sandbox.

Conclusion

Getting into advanced styling techniques and learning about some of the visualization libraries is a great way to stay ahead of the curve. With all of the virtual interactions we all have, it's a good skill to know how to render more complex things for users.

Milecia

Software Team Lead

Milecia is a senior software engineer, international tech speaker, and mad scientist that works with hardware and software. She will try to make anything with JavaScript first.