Forcefield Shader Breakdown

I’ve moved to a new site!
An updated version of this post can be found there, with no ads.

This post is a more advanced version of the Simple Forcefield Breakdown. You don’t have to read that one first, but it might help with understanding this one. This shader uses a Fresnel Effect along with Scene Color and Scene Depth nodes to produce a spherical (or hemispherical / dome-shaped) forcefield (or energy shield) that distorts the view through it, with glowing edges and intersections with objects in the scene. We will also add the ability to produce distortion ripples at points of collision with scene objects, through the use of a Custom Function node.

A few notes before we begin :

  • I’m using an Unlit Shader set to the Transparent rendering mode with Alpha blending, using the LWRP (Lightweight Render Pipeline). The nodes used should also be supported in the HDRP (High Definition Render Pipeline) but I haven’t tested it.
  • If you are looking to make a forcefield without any distortion to visuals – you should follow the simple version instead (linked above).
  • The distortion is created using the Scene Color node, which in the LWRP only shows opaque objects (not sure about HDRP). Because of this, transparent objects will not be visible through the forcefield.
  • This may not be the most efficient way to achieve a forcefield effect with distortion. One of the main issues I encountered was trying to get both the front and back faces to be visible through the forcefield. Without distortion this is pretty easy as we can just use transparency, however when we add distortion through the use of the Scene Color node we will need to only render the front faces. If we want back faces to be visible too, we would have to fake them entirely by assuming the shader is always going to be applied to a hemisphere. This is what I’ll be doing in this post, but there may be alternatives – such as using a multi-camera setup to render the scene, including the back faces of the forcefield, to a texture which can be sampled instead of using the Scene Color node, or perhaps have the distortion completely separate on a post processing effect – but you’ll need another camera to render a specific layer of the scene to describe which parts should be distorted and in what direction (similar to this Makin’ Stuff Look Good video).


Before we start, we need to click on the small cog on our Master node and switch to Transparent rendering mode, with Alpha blending and keeping it to Single Sided. These will be important, as we will be using the Scene Color and Scene Depth nodes. If you aren’t familiar with these nodes, you can read up on them more here : Scene Color & Depth Nodes Post.

We’ll first create a Fresnel Effect node. This will output a value based on the mesh’s surface normal and view direction, producing what looks like a “glow” around the edge of the object. We can increase the Power input to make this glow closer to the edges – I’m using a value of 8. For more info about this node see : Fresnel Effect Post. I’m also putting this into a Multiply with a value of 2 to make it a bit brighter.
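As a rough HLSL equivalent of what these nodes compute (the variable names here are just for illustration, not the exact generated code) :

```hlsl
// Sketch of the Fresnel Effect node (Power = 8), multiplied by 2.
// normalWS / viewDirWS : normalised world space normal and view direction.
float fresnel = pow(1.0 - saturate(dot(normalWS, viewDirWS)), 8.0);
float glow = fresnel * 2.0;
```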

Next we’ll handle the intersection effect with scene objects. To do this, we’ll create a Screen Position node with the mode set to Raw and put it into a Split. This gives us the depth of the pixel/fragment on the surface of the object in the W/A component of the split. We will also need to create a Scene Depth node set to the Eye sampling mode.

Because we want this shader to support distortion later, this will be different from the intersection effect made in the simple version of this shader – as we cannot use the Two Sided mode but still want the intersection to show for both front and back faces. The method I’m using for this is similar to what was done in the Water Shader Breakdown for the caustics effect. By knowing the position, object depth and the scene depth we can reconstruct the world position in the scene for that fragment, which we can then use to draw the intersection effect, based on the distance to the forcefield’s center.

To reconstruct this world position based on depth, create a View Direction node set to World space, and Divide it by the object’s depth from the W/A component of the Split node from earlier. Then Multiply it by the output from the Scene Depth node. Create a Camera node, and take the Position output and Subtract the output from our Multiply. If you aren’t sure about how this works, see the water breakdown linked above.
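In HLSL terms, this reconstruction roughly corresponds to the following (again, names are illustrative) :

```hlsl
// rawDepth   : W/A component of the Raw Screen Position (the fragment's depth)
// sceneDepth : Scene Depth node output, Eye sampling mode
// viewDirWS  : View Direction node output in World space (not normalised)
// Scaling the view direction by sceneDepth/rawDepth and subtracting from the
// camera position lands us on the opaque scene surface behind the forcefield.
float3 scenePosWS = _WorldSpaceCameraPos - (viewDirWS / rawDepth) * sceneDepth;
```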

With this, we can do an intersection effect by comparing the distance from this position to the center of the forcefield (which will be at the object’s origin). Take the output of our worldspace scene position and put it into the A input on a Distance node. In order to get the object’s worldspace origin we’ll create an Object node and take the Position output and put that into the B input on the Distance node.

Currently this gives us values of 0 close to the forcefield origin, while parts further away have a higher output. Since we want to draw the intersection effect on the edge of the forcefield, we need to take the output from the Distance and Subtract a value from it (I’ll come back to what value in a second), then take the Absolute. This pushes those distance values of 0 into negatives, then the absolute “flips” them into the positive again.

In order to make this intersection effect sit on the edge of our forcefield, the value we need to Subtract should be based on the forcefield scale. We can do this by taking the Scale output from the Object node. This is a Vector3 however, and we only want a Vector1, so we’ll put it into a Split node and take the X/R component (we could also take the Z/B component, since our forcefield should always be the same on both the X and Z axes, as it’s a sphere/hemisphere). You may also need to Multiply this by a value to control the scaling if the mesh doesn’t have a radius of 1 (aka 2 units wide).

We’ll then take the output from the Absolute and put it into a One Minus node as we want values of 1 on the forcefield edge instead of 0, then put it into a Saturate node and a Power node with a second value of 15. We can then simply Add the output of our Fresnel’s Multiply to this. We will then Multiply by a Color node (or property if you want to be able to edit it from the inspector), to tint the forcefield to a blue colour, using HDR mode with an intensity of 2.
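Putting the intersection steps together, the maths is roughly (names like _ForcefieldColour and glow are stand-ins for the Color node and the Fresnel part from earlier) :

```hlsl
// dist   : distance from the reconstructed scene position to the object origin
// radius : forcefield radius, taken from the Object node's Scale (X component)
float edge = abs(dist - radius);           // 0 exactly on the forcefield edge
edge = pow(saturate(1.0 - edge), 15.0);    // One Minus, Saturate, Power 15
float3 colour = (edge + glow) * _ForcefieldColour.rgb;
```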

If you temporarily put this into the Color input on the Master node you should see that the forcefield is black, but you can see the blue edges where it intersects with objects in the scene. Unlike in the simple version of this we won’t be making the forcefield actually transparent as we want to add distortion. To do this we will use the Scene Color node, which is a texture of all the opaque objects the camera has rendered. Before sampling this texture, we can offset the coordinates slightly to create distortions.

We’ll create a Screen Position node and put it into an Add node in order to do this offset, leaving the other input empty for now. Then put the output of this into a Scene Color node. We’ll then take the output of the Scene Color node and put it into the A input on a Lerp node, put the forcefield Color node we used earlier into the B input and put a Vector1 node with a value of 0.02 in the T input. This will allow us to interpolate between the scene colour and the forcefield colour based on a value which will control the visibility. Due to the forcefield colour being quite intense, we will want to keep this value very small. We can now take the Lerp output and Add it with our other colour (the output of the Multiply node from earlier) and put that into the Colour input on the Master node. We should now see the scene through the forcefield, but it isn’t distorted yet.

Going back to the Screen Position node from before, we need to offset it in order to create the distortion. We’ll use a Gradient Noise node to do this, with a Scale value of 25. As the output of this is between 0 and 1, we will want to Subtract 0.5 to move it into the range of -0.5 to 0.5 so we are distorting the view evenly in each direction. We can then Multiply it by a small value such as 0.01, to control the strength of the distortion, and put it into the second input on the Add node (the one with the Screen Position going into it).

We can also offset the UVs based on Time so that the distortion moves. Create a Time node and take the Time output and Multiply by a value of 0.1 to control the speed of the scrolling noise. Then put it into an Add node with a Position node set to View space. Put the output of this into the UV input on the Gradient Noise node.
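The distorted screen UV used to sample the Scene Color roughly corresponds to this sketch (gradientNoise() stands in for the Gradient Noise node, and the other names are illustrative) :

```hlsl
// positionVS : Position node in View space; _Time.y scrolls the noise
float2 noiseUV = positionVS.xy + _Time.y * 0.1;
float noise = gradientNoise(noiseUV, 25.0) - 0.5; // remap 0..1 to -0.5..0.5
float2 distortedUV = screenUV + noise * 0.01;     // 0.01 = distortion strength
float3 sceneColour = SampleSceneColor(distortedUV);
float3 throughColour = lerp(sceneColour, _ForcefieldColour.rgb, 0.02);
```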


So far we have a nice forcefield effect, but one of the things I wanted to add was a rippling reaction with projectiles that are fired at the forcefield as seen in the GIF in the original tweet. In order to do this we need to use the Custom Function node, as we need access to a couple things that aren’t yet supported in shadergraph normally. This function will output a Vector3 distortion direction which we will use to further offset the Screen Position going into the Scene Color node. It will also output a Vector1 value which will allow us to colour the ripple slightly to make it more obvious. Note that I’m focusing on the front faces only here – it may be possible to extend it to the back faces too, however I won’t be going through that in this post.

In order to allow for multiple ripples to be handled at the same time, we will need an array to store the positions of the origin of each ripple. We will then also need another value to control the lifetime of the ripple. To send these points into the shader we will need a C# Script, which will also control updating the lifetime and removing the point when it reaches a lifetime larger than 1. Since we have 4 values, we could use a Vector4/float4 array for this – but as we might want to add more values to control further things (such as power/scale, perhaps even a different colour for each ripple) I will be using a float array.

It is possible to have arrays in shaders – although shadergraph doesn’t yet support them normally. We can however use the Custom Function node to declare the array and loop through it. I believe we will need to be using the File mode in order to do this, as it needs to be specified outside of the function itself.

In order to define the array in the shader we also need to specify a fixed length. We’ll allow our shader to store up to 6 ripple points, each having 4 components (3 of which are the XYZ position, the final being the lifetime, as mentioned before). This means we need an array of length 6*4. We’ll come back to the actual shader function later.

uniform float _Points[6*4];

If you want more information about arrays in shaders see this article by Alan Zucconi.

In order to initialise the array it has to be done externally, via a C# Script using Material.SetFloatArray(name, array).

The script I’m using looks like the following :

using UnityEngine;

public class ArrayTest : MonoBehaviour {

    public Material material;

    // Initialize Array
    // This should have the same length as in the shader!
    float[] points = new float[] {
        1, 0, 0, 0.1f,
        0, 1, 0, 0.2f,
        0, 0, 1, 0.4f,
        -1, 0, 0, 0.5f,
        0, -1, 0, 0.6f,
        0, 0, -1, 0.8f,
    };

    void Update(){
        if (material == null) return;

        for (int i = 0; i < points.Length; i += 4) {
            float t = points[i + 3];
            t += Time.deltaTime;
            if (t > 1) {
                // Lifetime Complete
                // Create a new random point
                t = 0;
                Vector3 sphere = Random.onUnitSphere;
                // Keep it in the top hemisphere - leave this out for a sphere!
                if (sphere.y < 0) sphere.y = -sphere.y;

                // Set position
                points[i] = sphere.x;
                points[i + 1] = sphere.y;
                points[i + 2] = sphere.z;
            }

            // Set lifetime
            points[i + 3] = t;
        }

        material.SetFloatArray("_Points", points);
    }
}

This script is just replacing the points in the array with random points when they reach the lifetime of 1 – so we get constant rippling effects for testing purposes. I won’t be going through the script for actual gameplay mechanics, but you would want to be able to:

  • Add points based on collisions (e.g. MonoBehaviour.OnCollisionEnter/OnTriggerEnter). If there are no spaces for new points (if there are more than 6 collisions at once) but we want to add one, we would likely want to loop through the array and find the one with the largest lifetime and replace it with the new point.
  • “Remove” points when they reach their lifetime of 1. Note : It’s important that the array length stays fixed and data is present when sending it to the shader so it can replace it correctly – so for removing points you will need to specify the values still, giving { 0, 0, 0, 2 } or something, where in the loop if the lifetime is 2 we know it’s a space for a new point, and our shader function should be outputting 0 to prevent anything being rendered for that point.
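As a hypothetical sketch of how collision-based points could be added to the same script (these method names are assumptions for illustration, not part of the test script above) :

```csharp
// Hypothetical sketch - replaces the oldest ripple (largest lifetime,
// which also catches "empty" slots marked with a lifetime of 2),
// then writes the new collision point into the array.
void AddPoint(Vector3 localPoint) {
    int best = 0;
    float bestLifetime = -1;
    for (int i = 0; i < points.Length; i += 4) {
        if (points[i + 3] > bestLifetime) {
            bestLifetime = points[i + 3];
            best = i;
        }
    }
    points[best] = localPoint.x;
    points[best + 1] = localPoint.y;
    points[best + 2] = localPoint.z;
    points[best + 3] = 0; // reset lifetime
}

void OnCollisionEnter(Collision collision) {
    // Convert the contact point into the forcefield's local space,
    // since the shader compares against positions on the unit sphere
    AddPoint(transform.InverseTransformPoint(collision.GetContact(0).point));
}
```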

The following is the shader function used by the Custom Function node :

uniform float _Points[6*4];

void Test_float(float3 position, out float3 direction, out float strength){
	float3 directionOutput = 0;
	float strengthOutput = 0;
	for (int i = 0; i < 6*4; i += 4){
		float3 p = float3(_Points[i], _Points[i+1], _Points[i+2]); // Position
		float t = _Points[i+3]; // Lifetime
		// Ripple Shape :
		float rippleSize = 1;
		float gradient = smoothstep(t/3, t, distance(position, p) / rippleSize);
		// frac means it will have a sharp edge, while sine makes it more "soft"
		//float ripple = frac(gradient);
		float ripple = saturate(sin(5 * (gradient)));
		// Distortion Direction & Strength :
		float3 rippleDirection = normalize(position-p);
		float lifetimeFade = saturate(1-t); // Goes from 1 at t=0, to 0 at t=1
		float rippleStrength = lifetimeFade * ripple;
		directionOutput += rippleDirection * rippleStrength * 0.2;
		strengthOutput += rippleStrength;
	}
	direction = directionOutput;
	strength = strengthOutput;
}

We specify the function inside the “void Test_float”, where the name of the function has to match the one given in the Custom Function node, in this case it was named “Test” – (I should probably name them properly in the future…)

We create some variables to hold the outputs, then loop through the array with the same length of 6*4 with “i += 4” so we can obtain all 4 values for each point in each iteration of the loop. We read the position and lifetime from the array via “_Points[i+n]” then set up the shape of the ripple based on the distance from the fragment’s position we passed in, and the point’s position.

This function needs to be saved in a HLSL file (in this case I saved it under “test2.hlsl” – again, need to work on my naming it seems!). Set the file as the source on the Custom Function node by clicking the cog icon on it. We also need to make sure we have a Vector3 input, and the Vector3 and Vector1 outputs defined on the node (I’ve named these Position, Direction and Ripple; they don’t have to match the names in the function code – but they do have to be in the correct order).

We next need to take the Direction output from our Custom Function node and put it into a Transform node from World to View space in Direction mode. We can then take the output of that and Add it to where we are offsetting the Screen Position into the Scene Color node. Also, take the Ripple output from the Custom Function, Multiply it by 0.4 then Add it to the colour output right before the Multiply with the forcefield colour.

We should also take the distorted screen position output (from the Add node, before going into the Scene Color) and put it into the input of the Scene Depth node we made earlier. This will make sure we sample the distorted depth value so the intersection effect will be accurate to what is being viewed through the forcefield. I haven’t put these nodes close together so this will put a long line across our graph, hence why I’ve left this last to prevent confusion with other node connections. Here’s a final image of the full graph, also showing that connection :

Phew! We could also add some vertex manipulation to the ripple effect, as well as add the ripples to the fake back faces, however this post is already long enough so I’m leaving it here.

Thanks for reading this!

If you have any comments, questions or suggestions please drop me a tweet @Cyanilux, and if you enjoyed reading please consider sharing a link with others! 🙂