I’ve moved to a new site!
An updated version of this post can be found here, with no ads.
This cloud shader is applied to a flat, subdivided plane. The vertex positions are offset vertically based on layered noise, where each layer moves at a different rate to simulate the clouds changing shape over time. It also uses the depth buffer to fade the alpha where the clouds intersect other scene objects.
As we are writing this in shadergraph, we are (currently) unable to add any tessellation to the shader, so instead we are using a plane that has already been heavily subdivided to ensure the vertex manipulation is smooth. Depending on the camera angle required, it might also be possible to use a disc or pizza-slice shaped mesh, subdivided more densely at the centre, and move its position/rotation along with the camera, rather than using multiple planes to make up the clouds.
There are also two issues with making this shader in shadergraph, which we can fix by editing the generated code. I'll go through these again at the end of the post, but the issues are as follows:
- There is currently no way in shadergraph to have a Transparent shader which also writes to the depth buffer. We need this because parts of the mesh will overlap as we offset them, and without depth writes the renderer can get confused about which faces should appear on top.
- There is a bug in LWRP where the Screen Position node is based on the initial vertex position instead of the offset one. This doesn't happen in HDRP and is fixed in URP package version 7.1.1, but in case anyone is using an earlier version I'll provide the fix I used. This only needs fixing if you include the depth intersection part of the shader, as the Screen Position is used for the Object Depth and is also the default input to the Scene Depth node.
- This is an Unlit Shader made in the LWRP/URP (Lightweight Render Pipeline, renamed to Universal Render Pipeline in newer versions). The nodes used should also be supported in the HDRP (High Definition Render Pipeline) but I haven’t tested it.
- On the Master node, set the settings to Transparent rendering mode and Alpha blending.
To begin we’ll set up the noise for our clouds. We’ll be using a mixture of Gradient Noise and Simple Noise nodes to do this – although you may also want to think about replacing some layers of noise with a seamless texture as a cheaper alternative.
In order to ensure we can use multiple plane objects with no seams between them, we can base the noise UVs on the Position node set to World space. That gives us a Vector3 but the UVs are a Vector2, so we also need to Split it and take the X/R and Z/B components, so the noise is mapped horizontally along the plane.
We can use a Time node to offset the World Position so the noise moves slowly over time, with a Multiply node to control the speed at which the noise scrolls. We'll also add an overall speed multiplier as a Vector1 property so we can control the cloud scroll speed from the inspector.
We'll use parts of this noise setup 3 times – once for the Gradient Noise node and twice for Simple Noise nodes – all moving at slightly different rates, as shown in the image below.
I’ve used a mixture of add/multiply to combine these layers of noise. We could take an average of all three (by adding them together and dividing by 3), but having a multiply in there made the layered noise output a bit more interesting, in my opinion at least.
We can now use our layered noise output to offset the vertex position. To ensure this is only a vertical offset, we'll first put it into the Y input on a Vector3 node, with the other two inputs set to 0. We can then Multiply this by a value to control the scaling. I'm using the object's scale for this, so we can change it by adjusting the Y scale of the plane rather than having a separate property – but if you want to apply this to meshes other than a plane, you might want to use a property instead.
Also note that we are offsetting the World space vertex position here, but the Master node expects an Object space position, so we need a Transform node to convert it.
Next we'll handle the cloud colour, as it's currently hard to see what the shader is doing while it's a solid colour. I'm using the same output from our layered noise, giving the higher parts of the clouds a different colour to the lower parts. We can do this by putting the layered noise into the T input on a Lerp node and setting the A and B inputs to two Color properties. This linearly interpolates between the two colours, creating a gradient: at T=0 it outputs A, and at T=1 it outputs B.
So we can control how much cloud cover there is, we can add a Smoothstep based on the layered noise output and use this as the Alpha input on the Master node. The Smoothstep is similar to a Step node, which outputs 0 if the In input is smaller than the Edge input or 1 if it is larger. The Smoothstep provides a smoother transition between two Edge inputs: if the In input is less than the first Edge it outputs 0, and if it is larger than the second Edge it outputs 1. Anything in between is an interpolation between 0 and 1, but unlike a Lerp it is not linear – it smoothly eases in and out.
I'm using a property to control the two Edge inputs. When the Cloud Cover property is 0 the Smoothstep outputs 1, as all noise values will be larger than 0; as it gets larger, the Smoothstep outputs 0 for the lower noise values, leaving only the higher parts behind. The falloff (controlled by Edge2) also increases, so the clouds get softer, and we have an Additional Falloff property to control this further if required.
Finally, I wanted to add a depth intersection effect, so that objects fade into the clouds rather than having a harsh transition. We subtract the Object Depth from the Scene Depth, then Multiply by a value to control how quickly it fades – effectively the Density of the clouds. To obtain the Object Depth, we use the W/A component of the Screen Position node set to Raw mode. We can then Multiply this with the alpha from the previous step to combine them.
This is a common technique to have effects where there are intersections with scene objects, such as with forcefields or shoreline effects on water shaders. Note however that this will only work in a Perspective camera projection.
Important: Also, on the (Lightweight / Universal) Render Pipeline Asset, the Depth Texture option needs to be enabled for the Scene Depth node to work (or you can override the value on the Camera).
If you aren’t going to have any objects intersecting with the clouds you’ll likely want to leave this part out for better performance.
Note: Set the “Alpha Clip Threshold” on the master node to 0 for URP. In older versions of LWRP the value did nothing unless a node was connected, but this has changed. You may want to use an additional property to control this threshold in order to correctly render shadows if required.
That’s the shadergraph complete – but before we finish the post we’ve got to tackle the issues I mentioned in the introduction. In order to fix these, we need to edit the code generated by shadergraph. We can obtain this by right-clicking the Master node and selecting “Show Generated Code”. This will create and open a file inside the Temp folder in your project folder. You’ll want to go to this folder, copy the generated shader file into your Assets folder, then close the generated code file. We can then drag the copied shader file onto the material, so it uses the code version rather than the graph version, and open it so we can edit it.
The first issue:
There is currently no way in shadergraph to have a Transparent shader which also writes to the depth buffer. We need this because parts of the mesh will overlap as we offset them, and without depth writes the renderer can get confused about which faces should appear on top.
In order to fix this, we need to locate “ZWrite” near the top of our shader and change it from “Off” to “On”. (Note: we might also be able to fix this with a LWRP Custom Renderer, but this method is a lot easier, although we will have to repeat it whenever we want to make changes to the graph.)
The second issue:
There is a bug in LWRP where the Screen Position node is based on the initial vertex position instead of the offset one. This doesn’t happen in HDRP and is fixed in URP (renamed from LWRP) package version 7.1.1, but in case anyone is using an earlier version I’ll provide the fix I used. This only needs fixing if you include the depth intersection part of the shader, as the Screen Position is used for the Object Depth and is also the default input to the Scene Depth node.
If we are using a LWRP/URP version before 7.1.1, we need to scroll through the generated code until we reach the “vert” function. There we should see a float4 ScreenPosition variable. We need to copy that line, without the “float4”, and paste it below the “v.vertex.xyz = vd.Position” line, as seen in the image below. Without this fix, the depth intersection effect will look a bit wobbly… if that makes sense.
Thanks for reading this!
If you have any comments, questions or suggestions please drop me a tweet @Cyanilux, and if you enjoyed reading please consider sharing a link with others! 🙂