Fog Plane Shader Breakdown

I’ve moved to a new site!
An updated version of this post can be found here, with no ads:
https://www.cyanilux.com/tutorials/fog-plane-shader-breakdown/


Intro

This post includes two examples of fog shader effects. In both cases, the shader is applied to a flat plane or quad, and the alpha of the surface is altered based on depth values to produce a fog effect. As shown in the tweet above, it can be used to produce a vertical fog effect, or used in doorways or cave exits/entrances to fade into white/black (or any other colour).

The first example is simpler, but has some inaccuracies at very shallow angles (between the camera’s view direction and the plane surface), where the fog can look squashed / more dense. The second example is more complicated but doesn’t have this issue, so it looks better if the camera can get close to the fog effect. Note, however, that in both examples the camera cannot pass through the quad/plane, or the fog effect will disappear.

The graphs provided will also only work with a Perspective camera projection. For Orthographic, a different method will be required. This Twitter thread shows some differences between the projections which may assist in producing an orthographic version.

Settings

This is an Unlit Shader made in the LWRP (Lightweight Render Pipeline). It should also work in the URP (Universal Render Pipeline, which is what the LWRP has been renamed to in newer versions). The nodes used should also be supported in the HDRP (High Definition Render Pipeline) but I haven’t tested it (may need tweaking slightly due to differences in HDRP).

On the (Lightweight / Universal) Render Pipeline Asset, the Depth Texture option needs to be enabled for the Scene Depth node to work (or you can override the value on the Camera).

On the Master node, set the rendering mode to Transparent and the blending mode to Alpha. Make sure to also set the AlphaClipThreshold to 0, as we don’t want any pixels to be discarded based on their alpha.

Breakdown

In order to create these fog effects, we need to sample the depth texture from the camera. Shadergraph has a Scene Depth node which will handle this as well as the conversions for us. By using the Eye sampling mode it will output the linear depth in terms of eye space units. You can read more about the Scene Depth node on the Scene Color & Depth Nodes blog post.
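If you’re curious what the node is doing under the hood, here’s a rough HLSL sketch of the equivalent in URP. This is just a sketch: SampleSceneDepth and LinearEyeDepth come from URP’s shader library, and uv here would be the fragment’s normalised screen position.

    // Sketch of the Scene Depth node (Eye sampling mode) in URP HLSL
    #include "Packages/com.unity.render-pipelines.universal/ShaderLibrary/DeclareDepthTexture.hlsl"

    float SceneEyeDepth(float2 uv)
    {
        // Raw, non-linear depth from the camera's depth texture
        // (this is why the Depth Texture option needs to be enabled)
        float rawDepth = SampleSceneDepth(uv);

        // Convert to linear depth in eye space units
        return LinearEyeDepth(rawDepth, _ZBufferParams);
    }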

Next we need the depth to the surface of the plane. To do this we can take the alpha/w component of the Screen Position node set to Raw mode. It’s not too important to know why this is the object’s depth, but it comes from how 3D object positions are converted to screen coordinates by the model-view-projection matrix: for a perspective projection, the w component of the resulting clip space position is the fragment’s linear eye depth. It is important that we set the node to Raw however, as in the Default mode each component is divided by this alpha/w value, including the alpha/w component itself. This is usually referred to as the “perspective divide” – it converts the clip space coordinates (obtained after applying the model-view-projection matrix) into normalised screen coordinates, mapping the 3D perspective onto the 2D screen.
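In code terms, this step looks something like the sketch below. TransformObjectToHClip and ComputeScreenPos are from URP’s shader library (newer URP versions wrap this up in GetVertexPositionInputs instead); positionOS is assumed to be the object space vertex position input.

    // Vertex shader sketch: obtaining the Raw screen position
    float4 positionCS = TransformObjectToHClip(positionOS);  // model-view-projection
    float4 screenPosRaw = ComputeScreenPos(positionCS);      // Raw mode: no divide by w

    // Fragment: the w component is the surface's linear eye depth
    float fragEyeDepth = screenPosRaw.w;  // equal to positionCS.w

    // Default mode would instead apply the perspective divide:
    // float2 screenUV = screenPosRaw.xy / screenPosRaw.w;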

With these two depth values, we can now move on to producing the fog shaders.

Example 1 (Simple)

We can Subtract the two depth values to obtain the difference in depth between the object’s surface and the scene depth. The further apart these two depth values are, the larger the output value is, producing a fog effect.

Before we plug that into the alpha, it would be good if we could control the density of the fog somehow. We can do this by using a Multiply node (or Divide, if you prefer to use density values like 2 or 3 instead of 0.5 and 0.33). We should also put the output into a Saturate node, to clamp values between 0 and 1, as values larger than 1 may produce very bright artifacts – especially when using bloom post processing.

Finally, in order to change the colour of the fog, we create a Color property to use as the Color input on the Master node. I’m also taking the alpha component of our property and multiplying it with the saturated output so the alpha value of our colour is taken into account.
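Putting the whole of Example 1 together in code form, it’s something along these lines. This is a sketch: _FogColor and _Density stand in for the Color and density properties, and the two depth values come from the earlier snippets.

    // Fragment sketch of Example 1
    float fogAmount = sceneEyeDepth - fragEyeDepth;  // Subtract : depth behind the plane
    fogAmount = saturate(fogAmount * _Density);      // Multiply (density) + Saturate
    float alpha = fogAmount * _FogColor.a;           // take the colour's alpha into account
    return float4(_FogColor.rgb, alpha);             // Color + Alpha on the Master node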

IMPORTANT :

  • Make sure to set the alpha component of the Fog Colour in the shadergraph blackboard (AND in the material inspector!) to 1 (or 255 if using the 0-255 range), otherwise the effect will be invisible. Shadergraph defaults to (0,0,0,0)! For most effects I leave the alpha at 1 (aka 255), as fog is usually fully opaque at a distance, but I allowed the alpha to be changed in the case that you don’t want that. If you don’t want the alpha of the colour to be taken into account, remove the alpha Multiply.

Example 2 (Accurate)

For a more accurate fog, we can calculate the world position of the scene depth coordinate and use that to produce the fog instead, using the normal direction of the plane/quad to have the fog in the correct direction. Note that this will likely be less performant than the simpler version.

Create a View Direction* node set to World space. This obtains a vector from the pixel/fragment position to the camera. The magnitude of this vector is the distance between the camera and the fragment, but this is not the same as depth. The depth is the distance from the fragment position to a plane that passes through the camera and is perpendicular to its view direction, not the distance to the camera position itself. This creates a triangle, as shown in the image.

*Note, View Direction is not normalised in URP, but is in HDRP. If you are using HDRP you should probably use the Position node in World space instead, as we need it non-normalised. That might also need negating (Negate node, or Multiply by -1)? Not sure, it might be fine either way. (You could also use the Position node in Absolute World space and Subtract the Camera node’s position, but that should just be the same thing as World space in HDRP due to its camera-relative rendering.)

(I took this image from my water shader post, so ignore the fact it says “water surface position”. It’s the pixel/fragment position on the quad/plane the shader is applied to. Technically the arrow should also be pointing in the opposite direction, as the view direction goes from the fragment to the camera – but it should still help get the point across.)
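To make the distance/depth distinction concrete, here’s a small code sketch. viewVectorWS is the unnormalised fragment-to-camera vector from the View Direction node; cameraForwardWS is the camera’s forward axis, which isn’t a node input here, just an assumption for illustration (in URP shader code it can be obtained as -UNITY_MATRIX_V[2].xyz).

    float dist  = length(viewVectorWS);                 // straight-line distance to the camera
    float depth = dot(-viewVectorWS, cameraForwardWS);  // distance along the camera's view axis
    // depth matches screenPosRaw.w; dist is always >= depth, and the two only
    // match for fragments directly ahead of the camera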

In order to reconstruct the world position we need to scale this vector so it reaches the scene position behind the quad/plane. The Scene Depth and the position we want create another, similar triangle, as shown in the image. Because the triangles are similar, we can scale the vector by the ratio of the two depths: Divide the View Direction by the Raw Screen Position W/A depth, then Multiply by the Scene Depth. Since the vector points towards the camera, we then Subtract this from the camera’s world position (from the Camera node) to get the scene world position.
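In code form the reconstruction looks like this (a sketch: viewVectorWS is the fragment-to-camera vector as before, fragEyeDepth is screenPosRaw.w from earlier, and _WorldSpaceCameraPos is the camera position URP provides, matching the Camera node):

    // Scale the view vector so its depth component is 1 unit,
    // then extend it out to the scene depth
    float3 viewVectorPerDepth = viewVectorWS / fragEyeDepth;
    float3 scenePosWS = _WorldSpaceCameraPos - viewVectorPerDepth * sceneEyeDepth;

    // Visualisation tip mentioned below, as the output colour:
    // return float4(frac(scenePosWS), 1);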

Side Note – Calculating the scene world position from the depth values is useful for other effects too (e.g. see the Water Shader Breakdown post), but note that this method only works for shaders applied to objects in the scene; it won’t work for screen-based shaders, such as post processing effects.

Also a useful tip – You can take the output of this, put it through a Fraction node and into the Color input on the Master node to help visualise the positions.

With this scene world position, we can now handle the fog. To make sure the fog increases in the correct direction, we can Transform the position to Object space and take the Dot Product between it and the normalised Normal Vector, also in Object space. Since the normal vector points outwards from the plane, we also need to Negate the result of the dot product (or negate the normal vector going into the dot product).

A side effect of using the Transform to Object space is that we don’t need a property to control the density of the fog like in the first example. We can change it by just scaling the quad/plane in the normal’s direction instead.

We then Saturate the output to clamp values between 0 and 1, as values larger than 1 may produce very bright artifacts – especially when using bloom post processing.

Finally, in order to change the colour of the fog, we create a Color property to use as the Color input on the Master node. I’m also taking the alpha component of our property and multiplying it with the saturated output so the alpha value of our colour is taken into account.
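And here’s the rest of Example 2 in code form, continuing from the reconstructed scenePosWS (a sketch: TransformWorldToObject is URP’s world-to-object transform, _FogColor is the assumed Color property, and (0, 1, 0) is the object-space normal of a default plane/quad):

    // Fragment sketch of Example 2
    float3 scenePosOS = TransformWorldToObject(scenePosWS);   // Transform node (Object space)
    float3 normalOS = float3(0, 1, 0);                        // normalised Normal Vector
    float fogAmount = saturate(-dot(scenePosOS, normalOS));   // Dot Product + Negate + Saturate

    // Scaling the plane scales scenePosOS, which is why the object's
    // scale acts as the fog density control here
    float alpha = fogAmount * _FogColor.a;                    // take the colour's alpha into account
    return float4(_FogColor.rgb, alpha);                      // Color + Alpha on the Master node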

IMPORTANT :

  • Make sure to set the alpha component of the Fog Colour in the shadergraph blackboard (AND in the material inspector!) to 1 (or 255 if using the 0-255 range), otherwise the effect will be invisible. Shadergraph defaults to (0,0,0,0)! For most effects I leave the alpha at 1 (aka 255), as fog is usually fully opaque at a distance, but I allowed the alpha to be changed in the case that you don’t want that. If you don’t want the alpha of the colour to be taken into account, remove the alpha Multiply.
  • To control the density of the fog, adjust the Y scale of the plane. Unlike the simpler version, we use a Transform to Object space, which allows us to use the object’s scale to control the density rather than an additional property.

Thanks for reading this!

If you have any comments, questions or suggestions please drop me a tweet @Cyanilux, and if you enjoyed reading please consider sharing a link with others! 🙂