
ISOMETRIC ENVIRONMENT LIGHTING: PART I

2D ENVIRONMENT WORKFLOW

 

There are many ways to address asset organization when it comes to the development of 2D environments inside your game engine.

In one approach, the whole environment can be created as one large background texture, with lighting and the environment elements rendered in.

Another approach would be to create a construction set of tileable textures that can be re-assembled to create any variation of environments that are needed.

While the first approach allows for unrestricted creativity and lets you import the assets into any engine as they are, it also introduces its own set of problems:

Obsidian Entertainment’s Tyranny features fully rendered unique backgrounds for each location.

 
  • They usually demand more texture memory and storage space in the final build, so they are not always suited for mobile platforms.

  • Additional cost of rework: to adjust anything, the assets need to be touched up and re-imported into the project. Being able to adjust the environments inside the engine itself saves you time and money.

  • They are less flexible if gameplay needs to interact with them.

Turning environmental assets into construction sets is not without problems either, as you often need to develop complex systems to manage the placement of elements at runtime and edit time. But this approach lets you create many more environments from fewer assets.


Supergiant Games’ Hades: a good example of mixing construction-set tiles for both small and large objects.

That is not to say that you should use either of the approaches exclusively. A mix of the two approaches can be used to produce highly flexible environments.

Reusable objects are great; however, they introduce another problem: how do you light them in such a way that they stay consistent with the environment they are placed in?

This blog post series explores techniques for creating lighting models for 2D isometric objects. We will start with a simple, vertex-based attenuation lighting model, applied to a 2D tile environment as our example.

 

EXAMPLE SETUP

The techniques described in this chapter can be adapted to work with any kind of 2D environment setup. For this example, we will use isometric elements constructed from tiles (texture billboards placed on the grid).

Here is an example of how a tiled environment is assembled from objects:

 

Starting with our initial grid, we will change the lighting to create a night scene, with the light source coming from the lantern.

For a quick test, let’s see what can be achieved with adjustments to the Sprite Renderer alone. By simply controlling its color parameter, we can dim the ambient light and add the lantern’s light falloff.


Since the color parameter of a Sprite Renderer is applied to the object as a whole, the resulting lighting cannot be interpolated smoothly from point to point, and the attenuation produces clearly visible tile outlines.

This generic approach will not work for our goal, but it can still be useful for some basic effects. It is worth considering when you need something quick and simple, since it is very easy to use.
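As a minimal sketch of this quick test (the class and field names here are illustrative assumptions, not code from the project files), tinting each Sprite Renderer based on its distance to a lantern could look like this:

using UnityEngine;

// Hypothetical sketch: tint whole sprites to fake dim ambient light plus a
// lantern falloff. Each tile receives a single flat color, which is why the
// tile outlines remain clearly visible.
public class SpriteTileTint : MonoBehaviour
{
    public Transform lantern;
    public Color ambient = new Color(0.15f, 0.17f, 0.25f);   // night ambient (assumed values)
    public Color lanternColor = new Color(1f, 0.85f, 0.55f); // warm lantern light
    public float lanternRange = 4f;

    void LateUpdate()
    {
        foreach (var spriteRenderer in FindObjectsOfType<SpriteRenderer>())
        {
            float distance = Vector2.Distance(spriteRenderer.transform.position, lantern.position);
            float attenuation = Mathf.Clamp01(1f - distance / lanternRange);

            // One color per renderer: no interpolation inside the tile.
            spriteRenderer.color = ambient + lanternColor * attenuation;
        }
    }
}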

But for this example we need light interpolation inside each tile, and to achieve that we have to provide the shader with points (vertices) to interpolate between.

 

CREATING A TOPOLOGY WITH INTERPOLATION POINTS

This approach uses mesh vertex colors to store an additional position channel that represents the 3D shape of the object. Inside the shader, this channel is used to render simple attenuated light sources.

For this to work we will have to manually create an object mesh and use a Mesh Renderer instead.

To illustrate the difference between a sprite and our new custom mesh, let’s take a look at the generic sprite mesh itself.

A sprite has two mesh modes: Full Rect and Tight. In both cases it provides a flat mesh to be used as a billboard representing a texture in 3D space.

Our custom meshes will be used as billboards as well, but beyond that, additional topology will be added to store and provide “Perceptible Position” points.


Sprite Mesh. Full Rect and Tight modes.

“Perceptible Position” (PP) is essentially a second position property of the mesh; it represents the perceived, simplified shape of the object depicted on the texture. This is why we must paint it manually: there is no way for a shader to parse the shape of an object depicted on a 2D texture into the positions required for lighting calculations.

To add PP data to a vertex, we will store it in additional color properties of the mesh, since most 3D modeling tools have some type of vertex painting capability. You can adapt this approach to use other properties, assuming you figure out how to paint and pack the data in.


Here is how our cube would look with PP points added. Note that the object topology is flat, and we are using colors to paint the normalized position in object space: RGB as XYZ.

For a cubic shape we just need to add one additional point in the center to represent the position of the closest corner. The perceived shape is quite simplistic; the sampled position will be interpolated between points the same way color is.

Here is an example of our mesh creation workflow:

  • Create a billboard plane.

  • Cut the plane to tightly fit the object’s profile and reduce transparency overdraw.

  • Add topology to represent the simplistic perceptible shape.

  • Paint vertex colors onto the vertices to represent the XYZ object-space position.

  • In the case of isometry, the tile is rotated by the isometric angle and its transform is reset. We don’t want any initial rotation on the object, which could cause transformation complications inside the shader; we can still rotate the object at runtime.
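If you would rather skip the modeling package, the same kind of PP billboard can also be built in code. Below is a minimal sketch (the topology and color values are assumptions for a generic cube tile, not the exact meshes used in this post): a quad with one extra center vertex, whose vertex colors encode the normalized object-space position, RGB as XYZ.

using UnityEngine;

// Hypothetical sketch: build a flat billboard with one extra interpolation
// point in the center. Vertex colors store the "Perceptible Position"
// (normalized object-space XYZ packed into RGB), read by the shader as COLOR.
public static class PerceptiblePositionMesh
{
    public static Mesh CreateCubeBillboard()
    {
        var mesh = new Mesh();

        // Flat billboard topology: four corners plus a center vertex.
        mesh.vertices = new[]
        {
            new Vector3(-0.5f, 0f, 0f),  // 0: bottom left
            new Vector3( 0.5f, 0f, 0f),  // 1: bottom right
            new Vector3( 0.5f, 1f, 0f),  // 2: top right
            new Vector3(-0.5f, 1f, 0f),  // 3: top left
            new Vector3( 0f, 0.5f, 0f),  // 4: center, the extra interpolation point
        };

        // PP painted as vertex colors (values here are illustrative): the
        // corners map to the cube's silhouette, the center to the corner of
        // the perceived cube that is closest to the viewer.
        mesh.colors = new[]
        {
            new Color(0f, 0f, 0f),
            new Color(1f, 0f, 0f),
            new Color(1f, 1f, 1f),
            new Color(0f, 1f, 1f),
            new Color(0.5f, 0.5f, 1f),
        };

        mesh.uv = new[]
        {
            new Vector2(0f, 0f), new Vector2(1f, 0f),
            new Vector2(1f, 1f), new Vector2(0f, 1f),
            new Vector2(0.5f, 0.5f),
        };

        // Fan the triangles around the center vertex so lighting interpolates through it.
        mesh.triangles = new[] { 0, 4, 1, 1, 4, 2, 2, 4, 3, 3, 4, 0 };
        mesh.RecalculateBounds();
        return mesh;
    }
}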

 

This approach solves the interpolation problem by providing two sets of positions: the original one, used as a billboard for texture display, and a second one used for lighting calculations.

At this point, you may be asking yourself: “Why not just ditch the redundant PP and create regular topology the same way I would create the PP topology?” That would actually be a great idea, since it would simplify shading a lot. The problem is that even when viewing the topology in an orthographic projection, modifying the Z position of a vertex affects the interpolation between points and skews the texture.

Also, you don’t really need to recreate the exact object topology; a simple shape is enough for light interpolation. Moreover, you can avoid modeling PP for each object and just use one simple cube billboard for every object, which is enough to produce smooth lighting.

 

SHADER (VERTEX LIGHT ATTENUATION)

We will start by creating a simplified shader: vertex light attenuation without batching support.

Shader "Environment/VertexAttenuation"
{
    Properties
    {
        _MainTex ("Texture", 2D) = "white" {}
    }
    SubShader
    {
        Tags
        { 
            "Queue"="Transparent" 
            "IgnoreProjector"="True" 
            "RenderType"="Transparent" 
            "DisableBatching" = "True"
        }

        LOD 100
        Lighting On
        ZWrite Off
        Fog { Mode Off }
        Blend SrcAlpha OneMinusSrcAlpha

        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag

            #include "UnityCG.cginc"

            #define MaxLights 4

            struct appdata
            {
                float4 vertex : POSITION;
                float2 uv : TEXCOORD0;
                float3 color : COLOR;
            };

            struct v2f
            {
                float2 uv : TEXCOORD0;
                float4 vertex : SV_POSITION;
                float3 lightColor : TEXCOORD1;
            };

            sampler2D _MainTex;
            float4 _MainTex_ST;

            v2f vert (appdata v)
            {
                v2f o;
                o.vertex = UnityObjectToClipPos(v.vertex);
                o.uv = TRANSFORM_TEX(v.uv, _MainTex);

                // Note: .w is required for the matrix transformation to work.
                float4 perceivedPosition = float4(v.color, 1);

                // Transform the perceived vertex into Model-View space.
                // Note: Unity stores the 8 vertex light positions in MV space, so to calculate distances we should transform the vertex into MV space as well.
                float3 perceivedVertexMV = mul(UNITY_MATRIX_MV, perceivedPosition).xyz;

                // Calculate lighting.
                // Set ambient light as the base, then loop over the light sources and add their attenuated contributions.
                float3 vertexLighting = unity_AmbientSky.rgb;

                for (int index = 0; index < MaxLights; index++)
                {
                    float3 vertexToLightSource = unity_LightPosition[index].xyz - perceivedVertexMV;
                    float squaredDistance = dot(vertexToLightSource, vertexToLightSource);
                    // unity_LightAtten[index].z holds the light's quadratic attenuation factor.
                    float quadraticAttenuation = unity_LightAtten[index].z;
                    float attenuation = 1.0 / (1.0 + quadraticAttenuation * squaredDistance * squaredDistance);

                    float3 attenuatedColor = attenuation * unity_LightColor[index].rgb;

                    vertexLighting += attenuatedColor;
                }

                o.lightColor = vertexLighting; 

                return o;
            }

            fixed4 frag (v2f i) : SV_Target
            {
                fixed4 textureSample = tex2D(_MainTex, i.uv);

                fixed4 finalColor = float4(textureSample.rgb * i.lightColor.rgb, textureSample.a);

                return finalColor;
            }
            ENDCG
        }
    }
}
 

Finally, we have some interpolation working!

Some precision artifacts are visible when using high-contrast lighting with low-detail topology, but this is still an improvement.

Depending on your project’s needs, if you can compromise on some lighting precision, you get a very performant vertex-based shader.


Here is the final result with the new shader applied.


With careful positioning of the light sources and enough mesh detail to provide smooth interpolation, you can achieve a good enough picture and still enjoy solid performance.

You can get more precision by creating more detailed meshes, subdividing the painted meshes in your modeling software. There is a limit, though, where having lots of highly subdivided billboard objects in the scene may compromise batching and cause other problems.

To get more lighting precision, we can move the lighting logic from the vertex to the fragment shader.

 

SHADER (FRAGMENT LIGHT ATTENUATION)

We can achieve better lighting precision by simply moving the lighting calculations from the vertex to the fragment shader. However, fragment-based lighting calculations are significantly more expensive, resulting in roughly a 5x performance difference compared to the vertex-based solution.

We can reduce the performance cost, and handle multiple lights better, by rewriting the shader to use the Forward Rendering Path (you can read more about it in the Unity documentation). Basically, this approach lets Unity group, sort and render light sources as additional passes on top of already rendered objects, which reduces fragment work at the cost of additional draw calls and compromised dynamic batching.

Shader "Environment/FragAttenuationForward"
{
    Properties
    {
        _MainTex ("Texture", 2D) = "white" {}
    }
    SubShader
    {
        Tags
        { 
            "Queue"="Transparent" 
            "IgnoreProjector"="True" 
            "RenderType"="Transparent" 
            "DisableBatching" = "True"
            "CanUseSpriteAtlas"="True"
        }

        LOD 100
        Lighting On
        ZWrite Off
        Fog { Mode Off }
        Blend SrcAlpha OneMinusSrcAlpha

        Pass
        {
            Tags { "LightMode" = "ForwardBase" }

            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag

            #include "UnityCG.cginc"

            struct appdata
            {
                float4 vertex : POSITION;
                float2 uv : TEXCOORD0;
                float3 color : COLOR;
            };

            struct v2f
            {
                float2 uv : TEXCOORD0;
                float4 vertex : SV_POSITION;
            };

            sampler2D _MainTex;
            float4 _MainTex_ST;

            v2f vert (appdata v)
            {
                v2f o;
                o.vertex = UnityObjectToClipPos(v.vertex);
                o.uv = TRANSFORM_TEX(v.uv, _MainTex);

                return o;
            }

            fixed4 frag (v2f i) : SV_Target
            {
                fixed4 textureSample = tex2D(_MainTex, i.uv);

                // Set ambient light as the base; additional lights will be added in the ForwardAdd pass.
                float3 lighting = unity_AmbientSky.rgb;

                fixed4 finalColor = float4(textureSample.rgb * lighting.rgb, textureSample.a);

                return finalColor;
            }
            ENDCG
        }

        Pass
        {    
            Tags { "LightMode" = "ForwardAdd" }
            Blend One One

            CGPROGRAM

            #pragma vertex vert  
            #pragma fragment frag 
            #pragma target 3.0

            #include "UnityCG.cginc"
            #include "AutoLight.cginc"
            #include "Lighting.cginc"
            #pragma multi_compile_lightpass

            struct appdata
            {
                float4 vertex : POSITION;
                float2 uv : TEXCOORD0;
                float3 color : COLOR;
            };

            struct v2f
            {
                float2 uv : TEXCOORD0;
                float4 vertex : SV_POSITION;
                float3 perceivedVertexWorld : TEXCOORD1;
            };

            sampler2D _MainTex;
            float4 _MainTex_ST;

            v2f vert (appdata v)
            {
                v2f o;
                o.vertex = UnityObjectToClipPos(v.vertex);
                o.uv = TRANSFORM_TEX(v.uv, _MainTex);

                // Note: .w is required for the matrix transformation to work.
                float4 perceivedPosition = float4(v.color, 1);

                // Transform the perceived vertex into world space.
                // Note: This is different from the vertex-based solution because pixel lights are provided in world space.
                o.perceivedVertexWorld = mul(unity_ObjectToWorld, perceivedPosition).xyz;

                return o;
            }

            fixed4 frag (v2f i) : SV_Target
            {
                fixed4 textureSample = tex2D(_MainTex, i.uv);

                float3 perceivedVertexWorld = i.perceivedVertexWorld;

                // Calculate light attenuation.
                UNITY_LIGHT_ATTENUATION(lightAttenuation, i, perceivedVertexWorld);

                fixed3 diffusedColor = textureSample.a * textureSample.rgb * _LightColor0.xyz * lightAttenuation;

                return fixed4(diffusedColor.rgb, 0);
            }
            ENDCG
        }
    }
}
 

Looks way better.

It’s worth mentioning that the performance difference will be noticeable on mobile devices. You can reduce the number of dynamic lights in the quality settings to get a small boost, but generally fragment calculations are just that expensive. Also, one of the significant drawbacks of forward rendering is that multi-pass shaders are bad at dynamic batching.

Use static batching if you can, and try different approaches to see what gives better performance, since the best approach will depend on your scene setup. Several versions of the fragment shaders are included with the attached project files.

 

BATCHING SUPPORT

To reduce the total number of draw calls, Unity’s dynamic batching combines groups of objects into one mesh and passes it to the GPU as a single draw call. This process modifies the vertex positions and the object origins themselves. For a given object, its PP values (which are defined relative to the object’s local space) need to be converted to world space, and because batching has modified the vertex positions and origins, this conversion cannot be done without some additional data.

My solution to this problem is to store an additional copy of the vertex position properties and use it to calculate the difference between the original and batched vertex positions. The result is added to the perceptible position before transformation.

How do we efficiently make a copy of the vertex positions? You can use modeling software like Houdini FX, which allows easy wrangling of exported mesh properties. Another relatively easy option is to create a simple AssetPostprocessor script inside an Assets/Editor folder, like so:

using System.Collections.Generic;
using UnityEditor;
using UnityEngine;

public class MeshPostProcess : AssetPostprocessor
{
    void OnPostprocessModel (GameObject gameObject)
    {
        var meshFilter = gameObject.GetComponent<MeshFilter>();
        if (meshFilter == null)
            return;

        var importedMesh = meshFilter.sharedMesh;

        List<Vector3> verticesList = new List<Vector3>(importedMesh.vertexCount);
        importedMesh.GetVertices(verticesList);

        // Inject a copy of the vertex array into the second UV channel.
        // This mesh data is not modified during batching, so we can use it to recover the vertex batching offset.
        importedMesh.SetUVs(1, verticesList);
    }
}

Now that an additional channel stores a copy of the vertex positions, we can use it inside the shader to compute the batching offset and add it to the PP to get the world position.

// Update Tags.
    Tags
    { 
        "Queue"="Transparent" 
        "IgnoreProjector"="True" 
        "RenderType"="Transparent" 
        // Removed Disable batching tag.
    }

// Update passed in vertex structure.
    struct appdata
    {
        float4 vertex : POSITION;
        float2 uv : TEXCOORD0;
        float3 color : COLOR;
        // Added a new parameter to be read from the mesh's second UV channel.
        float3 unBatchedVertex : TEXCOORD1;
    };

// Change the perceivedPosition variable inside the vert function.
    {
        // Un-batch the current position using the baked-in original vertex position.
        float3 batchOffset = v.vertex.xyz - v.unBatchedVertex;

        // Note: .w is required for the matrix transformation to work.
        float4 perceivedPosition = float4(v.color + batchOffset, 1);
    }

That should do it. Assuming all tiles share the same material instance, this allows Unity to dynamically batch large groups of tiles into a single draw call.

For Unity to actually batch objects, they must share the same material instance with one texture assigned. So in cases like this, where you surely want to display and batch different types of objects, you must use a texture atlas (via the Unity sprite packer or a standalone tool) and manually modify the mesh UVs to display a specific object from the atlas palette. Developing an atlas-packing workflow is a topic of its own, which I will surely explore in a separate post.
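As a rough illustration of that UV modification (the helper below is a hypothetical sketch, not part of the attached project), remapping a mesh’s 0..1 UVs into the normalized rect a packed sprite occupies inside its atlas could look like this:

using UnityEngine;

// Hypothetical helper: remap a mesh's unit-square UVs into the sub-rectangle
// that a packed sprite occupies inside its atlas texture.
public static class AtlasUvRemap
{
    public static void RemapToSprite(Mesh mesh, Sprite sprite)
    {
        Rect pixelRect = sprite.textureRect;
        Texture2D atlas = sprite.texture;

        // Normalize the sprite's pixel rect into 0..1 atlas space.
        var min = new Vector2(pixelRect.xMin / atlas.width, pixelRect.yMin / atlas.height);
        var size = new Vector2(pixelRect.width / atlas.width, pixelRect.height / atlas.height);

        Vector2[] uvs = mesh.uv;
        for (int i = 0; i < uvs.Length; i++)
            uvs[i] = min + Vector2.Scale(uvs[i], size);

        mesh.uv = uvs;
    }
}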

Also, a note about Unity-provided light sources: Unity provides a whole eight vertex lights that can easily be sampled inside a vertex shader. Beyond that, Unity sorts, culls and assigns the brightest lights to each individual renderer, so your shaders will always render the most important light sources. But providing different parameters to a material depending on the distances to light sources may break one batching group into a few smaller ones. So in cases where better batching matters more to you than having more dynamic lights in the scene, it is worth considering a custom way of injecting light sources into a material, to ensure that every object has the same material properties.
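One possible shape for such custom injection (a sketch under the assumption of a matching shader that declares _CustomLightPositions and _CustomLightColors arrays; none of these names come from the project files) is to push a fixed-size light array into global shader properties, so every renderer keeps identical material properties and batching groups stay intact:

using UnityEngine;

// Hypothetical sketch: feed light data to all materials through global shader
// arrays instead of per-material properties, so dynamic batching groups are
// not split by differing material parameters.
public class GlobalLightInjector : MonoBehaviour
{
    const int MaxLights = 4;

    public Light[] lights = new Light[MaxLights];

    readonly Vector4[] positions = new Vector4[MaxLights];
    readonly Vector4[] colors = new Vector4[MaxLights];

    void Update()
    {
        for (int i = 0; i < MaxLights; i++)
        {
            if (i < lights.Length && lights[i] != null && lights[i].enabled)
            {
                Vector3 p = lights[i].transform.position;
                positions[i] = new Vector4(p.x, p.y, p.z, lights[i].range);
                colors[i] = lights[i].color * lights[i].intensity;
            }
            else
            {
                colors[i] = Vector4.zero; // an unused slot contributes no light
            }
        }

        // Globals apply to every material, so all renderers share the same properties.
        Shader.SetGlobalVectorArray("_CustomLightPositions", positions);
        Shader.SetGlobalVectorArray("_CustomLightColors", colors);
    }
}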

 

CONCLUSION

We achieved a simplistic but performant vertex-based 2D lighting model.

The upsides are good performance and no additional textures to store; the downsides are the hassle of manually setting up the meshes and the simplistic lighting.

Those characteristics make this approach a good fit for mobile platforms.

The next chapter continues this topic by describing how to add normal map support and implement a Phong lighting model in isometric space.

Perceptible Position can be used for many cheap effects. Here is an example of how the system could be extended further, with dimensional fog, ambient occlusion, simple caustics and deformation.

Leo Kruts