Screen Space Reflection (SSR) is one of the most widely used techniques for improving image quality in real-time rendering. It computes the reflection color from buffers already available in screen space, which keeps the cost low. In a deferred rendering pipeline with multiple render targets, screen space reflection can be implemented directly from the depth, normal, and glossiness buffers. In a forward rendering pipeline, however, these buffers are not available.
Our solution is to use an additional pass to generate these screen space buffers; in the post-processing stage, we then feed them to the screen space reflection image effect shader.
In Unity, we can use the Replacement Shader feature (`Camera.RenderWithShader`) to override the shader the camera uses for this pass. Unity also provides an API that lets us render color and depth into two separate buffers at the same time:
```csharp
var width = camera.pixelWidth;
var height = camera.pixelHeight;
var floatFormat = RenderTextureFormat.ARGBFloat;
var depthFormat = RenderTextureFormat.Depth;

// Color target: 4 float channels for encoded normal, glossiness, refraction index.
var rtColor = new RenderTexture(width, height, 0, floatFormat);
rtColor.filterMode = FilterMode.Point;

// Depth target: 24-bit depth buffer.
var rtDepth = new RenderTexture(width, height, 24, depthFormat);
rtDepth.filterMode = FilterMode.Point;

// Render color and depth into the two buffers at once.
camera.SetTargetBuffers(rtColor.colorBuffer, rtDepth.depthBuffer);
```
With this setup, we have a total of 4 color channels available for normals and other per-pixel information. In UnityCG.cginc, Unity provides helper functions to encode a view space normal into just 2 channels. That leaves the remaining 2 channels free to store the surface glossiness and refraction index.
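The replacement shader for this pass can be sketched as follows. This is a minimal illustration, not the exact shader used: `_Glossiness` and `_RefractionIndex` are assumed material properties, while `EncodeViewNormalStereo` and `COMPUTE_VIEW_NORMAL` are the real helpers from UnityCG.cginc.

```hlsl
#include "UnityCG.cginc"

float _Glossiness;        // assumed material property
float _RefractionIndex;   // assumed material property

struct v2f
{
    float4 pos        : SV_POSITION;
    float3 viewNormal : TEXCOORD0;
};

v2f vert(appdata_base v)
{
    v2f o;
    o.pos = UnityObjectToClipPos(v.vertex);
    o.viewNormal = COMPUTE_VIEW_NORMAL; // view-space normal helper from UnityCG.cginc
    return o;
}

float4 frag(v2f i) : SV_Target
{
    // xy: view-space normal packed into 2 channels,
    // z: glossiness, w: refraction index.
    float2 n = EncodeViewNormalStereo(normalize(i.viewNormal));
    return float4(n, _Glossiness, _RefractionIndex);
}
```

Depth is written automatically to rtDepth via the depth buffer bound by `SetTargetBuffers`, so the fragment shader only needs to fill the color channels.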
The post-processing shader then becomes straightforward: first sample the depth from the rtDepth buffer to reconstruct the position, then sample the normal, glossiness, and refraction index from the rtColor buffer, and perform the standard screen space reflection calculation.
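The reconstruction and ray march can be sketched like this. It is a simplified linear march in view space, not a production-quality SSR; `_CameraDepthTex`, `_NormalGlossTex`, `_Proj`, and `_InvProj` are assumed to be bound from C# (rtDepth, rtColor, and the camera's projection matrix and its inverse), and `DecodeViewNormalStereo` is the UnityCG.cginc counterpart of the encoding helper.

```hlsl
#include "UnityCG.cginc"

sampler2D _MainTex;         // scene color
sampler2D _CameraDepthTex;  // rtDepth, bound from C# (assumed name)
sampler2D _NormalGlossTex;  // rtColor: xy = normal, z = gloss, w = IOR (assumed name)
float4x4  _Proj;            // camera projection matrix, set from C#
float4x4  _InvProj;         // inverse projection matrix, set from C#

// Reconstruct the view-space position of the pixel under uv.
float3 ViewPosFromDepth(float2 uv)
{
    float d = SAMPLE_DEPTH_TEXTURE(_CameraDepthTex, uv);
    float4 clip = float4(uv * 2.0 - 1.0, d, 1.0);
    float4 view = mul(_InvProj, clip);
    return view.xyz / view.w;
}

float4 frag(v2f_img i) : SV_Target
{
    float4 g = tex2D(_NormalGlossTex, i.uv);
    float3 n = DecodeViewNormalStereo(g);
    float3 p = ViewPosFromDepth(i.uv);
    float3 r = reflect(normalize(p), n);

    // Minimal fixed-step linear march along the reflected ray.
    float3 q = p;
    for (int k = 0; k < 64; k++)
    {
        q += r * 0.1;
        float4 clip = mul(_Proj, float4(q, 1.0));
        float2 uv = clip.xy / clip.w * 0.5 + 0.5;
        if (any(uv < 0.0) || any(uv > 1.0))
            break;
        // Hit when the ray point passes behind the scene surface
        // (Unity view space looks down -Z, so farther means more negative).
        if (q.z < ViewPosFromDepth(uv).z)
            return lerp(tex2D(_MainTex, i.uv), tex2D(_MainTex, uv), g.z);
    }
    return tex2D(_MainTex, i.uv);
}
```

A real implementation would add thickness testing, binary-search refinement of the hit point, and edge fading, but the structure is the same.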
Compared to the deferred rendering path, we must perform an extra pre-pass over the scene. Here are some optimizations that reduce this overhead.
First, we can combine this pass with the depth pre-pass. Many forward rendering algorithms already require a depth pre-pass, so we can reuse the depth buffer from that pass instead of rendering depth twice.
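On the C# side, this reuse is just a matter of binding the existing pre-pass depth target when rendering the normal/gloss pass. A sketch, assuming `rtDepth` already holds the depth pre-pass result and `normalGlossShader` is the replacement shader from above:

```csharp
// Reuse the depth buffer from the depth pre-pass: bind it as the depth
// target of the normal/gloss pass so geometry is not depth-rendered twice.
// (The replacement shader can then use ZWrite Off / ZTest Equal.)
camera.SetTargetBuffers(rtColor.colorBuffer, rtDepth.depthBuffer);
camera.RenderWithShader(normalGlossShader, "RenderType");
```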
Second, the pre-pass is a good candidate for dynamic batching, since every object is rendered with the same replacement shader.