Category Archives: rendering

Yet Another Level Demo

This is another test level, like the previous one. It is a relatively large map, so portal culling comes in quite handy here.

Another Indoor Test Level

I have made a quick video by request. Check it out. 🙂

Shadow Maps on Omnidirectional Lights

Theoretically, using six shadow-mapped spot lights to piece together a shadow-casting point light is not a big deal, especially if the light geometry calculation for deferred lighting is correct. Well, in my engine, that was not the case. 🙂
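Roughly, the setup looks like this: one 90-degree frustum per cube face around the light. This is a simplified sketch using GLM, not the actual engine code.

```cpp
// Sketch: six 90-degree view-projection matrices around a point light,
// one per cube-map face. Simplified; GLM assumed.
#include <array>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

std::array<glm::mat4, 6> pointLightShadowMatrices(const glm::vec3& lightPos,
                                                  float nearPlane, float radius)
{
    // 90-degree FOV and a 1:1 aspect ratio make the six frusta tile the full sphere.
    const glm::mat4 proj = glm::perspective(glm::radians(90.0f), 1.0f, nearPlane, radius);

    const glm::vec3 dirs[6] = {
        { 1, 0, 0}, {-1, 0, 0}, {0,  1, 0}, {0, -1, 0}, {0, 0,  1}, {0, 0, -1}
    };
    // Up vectors chosen to match the usual cube-map face orientation.
    const glm::vec3 ups[6] = {
        {0, -1, 0}, {0, -1, 0}, {0, 0, 1}, {0, 0, -1}, {0, -1, 0}, {0, -1, 0}
    };

    std::array<glm::mat4, 6> result;
    for (int face = 0; face < 6; ++face)
        result[face] = proj * glm::lookAt(lightPos, lightPos + dirs[face], ups[face]);
    return result;
}
```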

There was a hidden calculation error that produced a frustum with incorrect corner points, making it larger than intended. This error had caused no problems so far, because the projected mask texture on spot lights and the “discard” statements in the fragment shader prevented the extra pixels from being seen. But once I used spot lights with no mask to render a point light, the result was erroneous. It looked so strange and mysterious that it took a few hours to find the root of the problem.

Finally, shadows on point lights are working, and I proudly present some screenshots here.

Shadow map on omnilight (gallery of four screenshots).

The First Indoor Level

Now the exporter supports nearly all the things required to create an indoor level for the engine. After several days of work and a helping artistic hand from Endre Barath (etyekfilm.hu), we have created the first explorable level in the engine. There is not much to say, aside from the fact that it took a large amount of work to make everything function. It was mostly debugging and optimization, but some new features were also added, such as color grading, tangent/bitangent generation at loading time, and multi-material support per model.
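For the curious, the tangent/bitangent generation at loading time boils down to the standard per-triangle computation from positions and UVs. Here is a simplified sketch (GLM assumed, not the exact loader code).

```cpp
// Sketch: per-triangle tangent/bitangent from positions and UVs
// (the standard derivation; simplified, GLM assumed).
#include <glm/glm.hpp>

void computeTangentBasis(const glm::vec3& p0, const glm::vec3& p1, const glm::vec3& p2,
                         const glm::vec2& uv0, const glm::vec2& uv1, const glm::vec2& uv2,
                         glm::vec3& tangent, glm::vec3& bitangent)
{
    const glm::vec3 e1 = p1 - p0, e2 = p2 - p0;      // position edges
    const glm::vec2 d1 = uv1 - uv0, d2 = uv2 - uv0;  // UV edges

    const float r = 1.0f / (d1.x * d2.y - d1.y * d2.x); // inverse of the UV determinant
    tangent   = glm::normalize((e1 * d2.y - e2 * d1.y) * r);
    bitangent = glm::normalize((e2 * d1.x - e1 * d2.x) * r);
}
```

In practice the per-vertex results are averaged over the adjacent triangles and re-orthogonalized against the vertex normal.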

Of course, the job is not finished yet and many things still need to be fixed, but here is a short video showing the results.

Hierarchical Scenegraph and Physics Interaction

Lately, I have rewritten a large part of the hierarchical scenegraph management code.

Scene objects (objects to be rendered in the 3D view, or that affect the 3D scene in some way, such as models or light sources) can simply be added to the scene, but they can also be added as children of other scene objects to form an object tree. In the latter case, the child objects inherit their base transformation from their parent, or chain of parents. This means that if the 3D transform of a parent node changes (due to physics interaction, for example), its children follow the parent node in 3D space, keeping their relative (local) transformation to it. We can construct compound objects this way: we can attach light sources to car models or street lights, build objects from multiple models, and so on.
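In code, the idea boils down to something like this (a simplified sketch with hypothetical types, GLM assumed; the real scene graph does more bookkeeping).

```cpp
// Sketch: parent-to-child transform propagation in a scene graph.
// Hypothetical types, not the engine's actual API. GLM assumed.
#include <vector>
#include <glm/glm.hpp>

struct SceneNode
{
    glm::mat4 localTransform{1.0f};   // transform relative to the parent
    glm::mat4 worldTransform{1.0f};   // cached absolute transform
    std::vector<SceneNode*> children;

    // Called when this node (or an ancestor) moves, e.g. due to physics.
    void updateWorldTransform(const glm::mat4& parentWorld)
    {
        worldTransform = parentWorld * localTransform;
        for (SceneNode* child : children)
            child->updateWorldTransform(worldTransform); // children follow the parent
    }
};
```

A light attached to a car node, for example, keeps its local offset while the physics simulation drives the parent's transform.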

The spatial index tracks these transformation changes as well, so features based on spatial queries remain functional.

To test this, I have added a simple physics interaction to the FPS character controller, making it possible to pick up objects and drop or throw them. See the results on YouTube:

Real-Time Volumetric Light Shafts

Lately, I have replaced the recently added screen-space light shaft algorithm with a new one, because I was not satisfied with the results. The new method uses ray marching with interleaved sampling of the shadow map to simulate single scattering in participating media in real time, so it produces more realistic results at a relatively low cost.

The implementation calculates the sample values in a separate pass into a new framebuffer, and then collects them in the light combiner pass. The interleaved sampling is driven by a hand-crafted 8×8-pixel grayscale image containing random values with a uniform distribution.
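Roughly, the per-pixel march looks like the following CPU-style sketch. The helpers shadowTest() and ditherOffset() are hypothetical stand-ins for the shadow-map lookup and the 8×8 noise texture fetch; this is not the actual shader code.

```cpp
// Sketch: ray marching toward the shaded surface with an interleaved
// (per-pixel dithered) start offset. Simplified; GLM assumed.
#include <glm/glm.hpp>

// Stand-in for the shadow-map comparison (assumed): 1 if lit, 0 if shadowed.
float shadowTest(const glm::vec3& /*worldPos*/) { return 1.0f; }

// Stand-in for sampling the 8x8 grayscale dither image (assumed): a value in [0,1).
float ditherOffset(int x, int y) { return ((x % 8) * 8 + (y % 8)) / 64.0f; }

float inscattering(const glm::vec3& camPos, const glm::vec3& surfacePos,
                   int pixelX, int pixelY, int numSamples, float density)
{
    const glm::vec3 ray  = surfacePos - camPos;
    const glm::vec3 step = ray / float(numSamples);

    // Interleaved sampling: each pixel starts the march at a slightly different
    // offset, so neighbouring pixels sample different depths along the ray.
    glm::vec3 pos = camPos + step * ditherOffset(pixelX, pixelY);

    float accum = 0.0f;
    for (int i = 0; i < numSamples; ++i)
    {
        accum += shadowTest(pos); // only lit segments scatter light toward the camera
        pos += step;
    }
    return density * accum / float(numSamples);
}
```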

Here are some images of the results.

Dynamic Daylight

I have added dynamic sunlight and shadow support wired to the UI, so the sun position can be controlled interactively. The engine calculates the sun position from azimuth and elevation parameters. The sun and fog colors are adjusted according to the elevation using a color gradient generated with GIMP. The result can be seen here:
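As for the math, the sun direction is just spherical coordinates built from the two angles. A minimal sketch, assuming a Y-up convention (which may differ from the engine's actual axes):

```cpp
// Sketch: sun direction from azimuth and elevation (Y-up assumed). GLM assumed.
#include <cmath>
#include <glm/glm.hpp>

glm::vec3 sunDirection(float azimuthDeg, float elevationDeg)
{
    const float az = glm::radians(azimuthDeg);
    const float el = glm::radians(elevationDeg);

    // Unit vector pointing from the scene toward the sun.
    return glm::vec3(std::cos(el) * std::sin(az),   // east-west component
                     std::sin(el),                  // height above the horizon
                     std::cos(el) * std::cos(az));  // north-south component
}
```

The elevation component is also what indexes the color gradient for the sun and fog tint.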

Normal Mapping with Partial Derivatives

I have added normal mapping support recently. It is not perfect, but it works well with some new models I obtained from ShareCG (thank you, Nobiax). The model loader still lacks tangent and binormal processing support (it still relies on lowly OBJ files), so I am using partial derivatives to reconstruct T and B from the GBuffer and shader parameters.
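The reconstruction follows the well-known cotangent-frame approach based on screen-space derivatives. Here is a simplified CPU-style sketch of the math, taking the same quantities dFdx/dFdy would provide in the fragment shader (GLM assumed, not the exact shader code):

```cpp
// Sketch: rebuild tangent (T) and bitangent (B) from screen-space derivatives
// of position and UV, plus the surface normal. Simplified; GLM assumed.
#include <algorithm>
#include <cmath>
#include <glm/glm.hpp>

void tangentFromDerivatives(const glm::vec3& N,                          // normal from the GBuffer
                            const glm::vec3& dPdx, const glm::vec3& dPdy, // position derivatives
                            const glm::vec2& dUVdx, const glm::vec2& dUVdy, // UV derivatives
                            glm::vec3& T, glm::vec3& B)
{
    // Solve the 2x2 system relating position and UV derivatives.
    const glm::vec3 r1 = glm::cross(dPdy, N);
    const glm::vec3 r2 = glm::cross(N, dPdx);
    T = r1 * dUVdx.x + r2 * dUVdy.x;
    B = r1 * dUVdx.y + r2 * dUVdy.y;

    // Bring both vectors to a consistent scale.
    const float invMax = 1.0f / std::sqrt(std::max(glm::dot(T, T), glm::dot(B, B)));
    T *= invMax;
    B *= invMax;
}
```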

I have extended the deferred light combiner shader to handle multiple BRDFs with many parameters (diffuse, specular, environment reflection, etc.). The BRDF parameters are organized in a table that is indexed by a field in the GBuffer, so different objects can have different BRDFs; the particular index is an effect parameter. The deferred light combiner shader also handles backside lighting (e.g. sunlight shining through leaves), controlled by a diffuse parameter in the BRDF table.
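Conceptually, the table looks something like this; the field names and values below are purely illustrative, not the engine's real parameter set.

```cpp
// Sketch: BRDF parameter table indexed by a material ID stored in the GBuffer.
// Hypothetical fields and values, just to illustrate the indexing scheme.
#include <array>

struct BRDFParams
{
    float diffuseIntensity;
    float specularIntensity;
    float specularPower;
    float envReflection;     // environment reflection strength
    float backsideDiffuse;   // > 0 lets light bleed through thin geometry such as leaves
};

// One row per material type; a small integer in the GBuffer selects the row.
const std::array<BRDFParams, 4> kBrdfTable = {{
    {1.0f, 0.5f, 32.0f, 0.0f, 0.0f},   // default opaque surface
    {1.0f, 1.0f, 64.0f, 0.4f, 0.0f},   // glossy / reflective
    {1.0f, 0.1f,  8.0f, 0.0f, 0.6f},   // foliage with backside lighting
    {1.0f, 0.0f,  1.0f, 0.0f, 0.0f},   // pure diffuse
}};

inline const BRDFParams& lookupBrdf(unsigned gbufferMaterialId)
{
    return kBrdfTable[gbufferMaterialId % kBrdfTable.size()];
}
```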

Here are some images produced by the current state of the renderer.

Flicker Free Soft Edged Cascaded Shadows

I have just finished the cascaded shadow map support. It uses the technique described in NVidia’s CSM paper. The original method gives somewhat flickering shadow edges, so I have added a reduction technique described in a Frostbite talk. To make the edges smoother, I have added 3×3 PCF filtering. Here is a short video with the results:
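For reference, the flicker reduction boils down to snapping each cascade's light-space translation to whole shadow-map texels, so the shadow edges do not swim as the camera moves. A simplified sketch of that idea (GLM assumed, not the exact engine code):

```cpp
// Sketch: stabilize a cascade by snapping its light-space translation to
// whole shadow-map texels. Simplified; GLM assumed.
#include <glm/glm.hpp>

glm::mat4 snapToTexels(glm::mat4 lightViewProj, float shadowMapSize)
{
    // Where the world origin lands, measured in shadow-map texel units.
    glm::vec4 origin = lightViewProj * glm::vec4(0.0f, 0.0f, 0.0f, 1.0f);
    origin *= shadowMapSize * 0.5f;

    // Offset needed to land exactly on a texel.
    glm::vec4 offset = glm::round(origin) - origin;
    offset *= 2.0f / shadowMapSize;
    offset.z = 0.0f;
    offset.w = 0.0f;

    // Apply the offset to the matrix's translation column.
    lightViewProj[3] += offset;
    return lightViewProj;
}
```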

Through the Valley of Cascaded Shadows and Depth

After some struggling with the frustum and light projection matrix calculations, CSM is finally working in the engine. There are four splits, parallel light projection, and dynamic bounding frustum calculation. The technique is basically the same as described in NVidia’s article.
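The split distances follow the usual blend of logarithmic and uniform schemes, as in NVidia's paper. A simplified sketch (parameter values are illustrative):

```cpp
// Sketch: cascade split distances as a blend of logarithmic and uniform schemes.
#include <cmath>
#include <vector>

std::vector<float> cascadeSplits(float nearZ, float farZ, int numSplits, float lambda = 0.75f)
{
    std::vector<float> splits(numSplits + 1);
    splits[0] = nearZ;
    splits[numSplits] = farZ;

    for (int i = 1; i < numSplits; ++i)
    {
        const float t = float(i) / float(numSplits);
        const float logSplit = nearZ * std::pow(farZ / nearZ, t);   // logarithmic term
        const float uniSplit = nearZ + (farZ - nearZ) * t;          // uniform term
        splits[i] = lambda * logSplit + (1.0f - lambda) * uniSplit; // blend of the two
    }
    return splits;
}
```

Each split range then gets its own tightly fitted orthographic light projection.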

There is still an annoying bug with frustum culling in the shadow map render code, so the shadows sometimes leak. But it performs quite well after all.

Here are some pictures of the results.
