Yet Another Level Demo

This is a test level, like the previous one. It is a relatively large map, so the portal culling comes in quite handy in this case.

Another Indoor Test Level

I have made a quick video by request. Check it out. 🙂

Shadow Maps on Omnidirectional Lights

In theory, assembling a shadow-casting point light from six shadow-mapped spot lights is not a big deal, especially if the light-geometry calculation for deferred lighting is correct. Well, in my engine, that was not the case. 🙂

There was a hidden calculation error that produced a frustum with incorrect corner points, making it larger than intended. The error had caused no visible problems so far, because the projected mask texture on spot lights and the “discard” statements in the fragment shader prevented the extra pixels from being seen. But once I used spot lights without a mask to render a point light, the result was erroneous. It looked so strange and mysterious that it took a few hours to find the root of the problem.
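The six faces of such a point light are essentially six spot frusta with a 90° field of view and an aspect ratio of exactly 1.0, oriented along the cube-map axes. Below is a minimal sketch of the quantities that had to be right; the helper names are mine, not the engine's, and the face orientations follow the standard OpenGL cube-map convention:

```cpp
#include <cassert>
#include <cmath>

// Half-extents of a symmetric perspective frustum's cross-section at
// distance d. For the six faces of a point light, fovY must be exactly
// 90 degrees and aspect exactly 1, or the faces do not tile the sphere
// and the corner points come out wrong.
struct PlaneExtent { float halfWidth, halfHeight; };

PlaneExtent frustumPlaneExtent(float fovYRadians, float aspect, float d)
{
    float halfHeight = d * std::tan(fovYRadians * 0.5f);
    float halfWidth  = halfHeight * aspect;
    return { halfWidth, halfHeight };
}

// The six cube-map face orientations (view direction, up vector),
// matching GL_TEXTURE_CUBE_MAP_POSITIVE_X .. NEGATIVE_Z.
const float kFaces[6][2][3] = {
    {{ 1, 0, 0}, {0,-1, 0}},  // +X
    {{-1, 0, 0}, {0,-1, 0}},  // -X
    {{ 0, 1, 0}, {0, 0, 1}},  // +Y
    {{ 0,-1, 0}, {0, 0,-1}},  // -Y
    {{ 0, 0, 1}, {0,-1, 0}},  // +Z
    {{ 0, 0,-1}, {0,-1, 0}},  // -Z
};
```

A quick sanity check: at a 90° FOV the half-width of the cross-section equals the distance itself, so any mismatch there immediately means oversized corner points of the kind described above.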

Finally, the shadows on point lights are working, and I proudly present some screenshots here.

Shadow map on omnilight.

Scripting is Fully Functional

In an earlier post, I wrote that I had implemented Lua script support in the engine. Now it has reached its full power, since I have added a proper callback mechanism: triggers in the engine can call user-specified Lua functions when they fire. Unfortunately, SWIG does not help with implementing such callbacks, so I had to do it myself.
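The dispatch idea can be sketched as follows. This is not the engine's actual binding code: a plain `std::function` stands in for the real Lua-side invocation (which would go through something like `lua_getglobal`/`lua_pcall`), and all names are illustrative.

```cpp
#include <functional>
#include <map>
#include <string>

// Sketch of trigger -> script-callback dispatch. In the real engine the
// callback would look up and call a named Lua function; here a C++
// callable stands in for that.
class ScriptCallbacks {
public:
    using Callback = std::function<void(const std::string& triggerName)>;

    // Bind a trigger to a user-specified callback.
    void bind(const std::string& trigger, Callback cb) {
        callbacks_[trigger] = std::move(cb);
    }

    // Called by the physics system when a trigger fires; triggers with
    // no binding are silently ignored.
    void fire(const std::string& trigger) {
        auto it = callbacks_.find(trigger);
        if (it != callbacks_.end())
            it->second(trigger);
    }

private:
    std::map<std::string, Callback> callbacks_;
};
```

The point of the registry is that the physics code only knows trigger names; what actually happens on firing is decided entirely by the script side.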


Box Projected Cubemap Environment Mapping

I have added a new technique called… well, I’d rather not write it down again. The great thing is that it mimics reflections on shiny surfaces using a simple yet quite accurate approximation based on environment cubemap textures.

For more details, see

The hard part was finding a hidden bug in the engine that prevented me from rendering in-scene cubemap snapshots using a dedicated off-screen compositor chain. But eventually I managed to find it, and it seems it was worth the effort.

The engine automatically places light probes in each room of the level at load time, then renders a fully lit HDR cubemap texture for each of them. The engine uses these cubemaps, along with some extra parameters assigned to the probes, to calculate the reflections with the aforementioned method.

First, I experimented with a similar idea that uses spheres to distort the cubemap look-up coordinates, but it was inaccurate. BPCEM gives more accurate results in box-shaped rooms.
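The core of the correction is standard BPCEM: instead of sampling the cubemap with the raw reflection vector, intersect the reflection ray with the room's axis-aligned box and aim at the hit point from the probe's position. Here it is sketched in C++ for clarity (in practice this runs in the fragment shader; variable names are mine):

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

// Box-projected cube-map lookup direction. 'reflDir' need not be unit
// length, but its components are assumed non-zero (exact axis-aligned
// reflections are a degenerate case in practice).
Vec3 bpcemDirection(Vec3 pos, Vec3 reflDir,
                    Vec3 boxMin, Vec3 boxMax, Vec3 probePos)
{
    // Per-axis distance along the ray to the box face it exits through.
    auto axis = [](float p, float d, float lo, float hi) {
        return ((d > 0.0f ? hi : lo) - p) / d;
    };
    float tx = axis(pos.x, reflDir.x, boxMin.x, boxMax.x);
    float ty = axis(pos.y, reflDir.y, boxMin.y, boxMax.y);
    float tz = axis(pos.z, reflDir.z, boxMin.z, boxMax.z);
    float t  = std::min(tx, std::min(ty, tz));  // nearest exit face

    // Point where the reflection ray hits the box, re-based to the probe.
    return { pos.x + reflDir.x * t - probePos.x,
             pos.y + reflDir.y * t - probePos.y,
             pos.z + reflDir.z * t - probePos.z };
}
```

This is why the approximation is exact for box-shaped rooms and only degrades when the room geometry departs from the box assigned to the probe.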

Here are some images of the results:

A demo video:

The First Indoor Level

Now the exporter supports nearly everything required to create an indoor level for the engine. After several days of work and a helping artistic hand from Endre Barath, we have created the first explorable level in the engine. There is not much to say, aside from the fact that it took a large amount of work on my part to make things work. It was mostly debugging and optimization, but some new features were also added, like color grading, tangent/bitangent generation at load time, and multi-material support per model.
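Tangent generation at load time follows the usual construction from triangle positions and UVs; the snippet below shows the per-triangle tangent (a standard derivation, not the engine's exact code). The bitangent then follows as cross(normal, tangent) scaled by the handedness sign.

```cpp
#include <cmath>

struct Vec2 { float x, y; };
struct Vec3 { float x, y, z; };

// Per-triangle tangent from positions and UVs: solve the 2x2 system
// mapping UV-space edges to position-space edges. Assumes the triangle's
// UVs are non-degenerate (the determinant is non-zero).
Vec3 triangleTangent(Vec3 p0, Vec3 p1, Vec3 p2,
                     Vec2 uv0, Vec2 uv1, Vec2 uv2)
{
    Vec3 e1 = { p1.x - p0.x, p1.y - p0.y, p1.z - p0.z };
    Vec3 e2 = { p2.x - p0.x, p2.y - p0.y, p2.z - p0.z };
    float du1 = uv1.x - uv0.x, dv1 = uv1.y - uv0.y;
    float du2 = uv2.x - uv0.x, dv2 = uv2.y - uv0.y;
    float r = 1.0f / (du1 * dv2 - du2 * dv1);
    return { (e1.x * dv2 - e2.x * dv1) * r,
             (e1.y * dv2 - e2.y * dv1) * r,
             (e1.z * dv2 - e2.z * dv1) * r };
}
```

Per-vertex tangents are then obtained by accumulating these over all triangles sharing a vertex and orthonormalizing against the vertex normal.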

Of course, the job is not finished yet and many things still need to be fixed, but here is a short video showing the results.

A Blender Exporter is in Progress

Currently, I am working on an exporter for Blender that greatly speeds up the content creation process for the engine. It is not complete yet, but I thought it was worth a post. It supports meshes, instances, materials, textures, and point and spot lights.

The exporter saves all meshes in OBJ format (ugly, I know, but it works). It creates Merlin3D scene files with all mesh and light instances. Moreover, it generates effect files automatically, configuring the call to an appropriate GLSL shader and setting its parameters. Some of these parameters are textures taken from the texture slots of Blender materials. The exporter supports diffuse, normal, specular, and glow maps at this time, but support for further texture types can easily be added. There are also shader parameters coming from the material's texture settings, such as the specular and glow coefficients.

The script extracts several light parameters, such as color, energy, radius (“distance” in Blender terminology), spotlight angle, and falloff type.

There are a lot of things missing, but the results achieved so far are promising… 🙂

Finally, let’s see some screenshots.

The First Teleports

Again, I had some time to play with the code. And again, I have rewritten the game object representation structure to make things simpler inside the engine. I have added a new object type, “function”. It is a non-rendering scene node that does various things to other objects: it can heal, deal damage, teleport, and so on. It is activated by a physics trigger on touch.

I have implemented some teleports in the scene using these new objects. The following video demonstrates how they work.

Hierarchical Scenegraph and Physics Interaction

Lately, I have rewritten a large part of the hierarchical scenegraph management code.

Scene objects (objects that are rendered in the 3D view or affect the 3D scene in some way, such as models or light sources) can simply be added to the scene, but scene objects can also be added as children of other scene objects to form an object tree. In the latter case, the child objects inherit their base transformation from their parent, or chain of parents. This means that if the 3D transform of a parent node changes (due to physics interaction, for example), its children follow the parent node in 3D space, keeping their relative (local) transformation to it. We can construct compound objects this way: we can attach light sources to car models or street lamps, build objects from multiple models, and so on.
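The propagation rule is simply world = parent-world composed with local, applied recursively down the tree. A minimal sketch (translation-only “transforms” for brevity; the engine of course uses full matrices, and the names here are illustrative):

```cpp
#include <vector>

// Minimal scene-node tree with world-transform propagation.
struct SceneNode {
    float localX = 0, localY = 0, localZ = 0;  // transform relative to parent
    float worldX = 0, worldY = 0, worldZ = 0;  // derived world transform
    std::vector<SceneNode*> children;

    // When a parent moves (e.g. pushed by physics), the whole subtree
    // follows while each child keeps its local offset from the parent.
    void updateWorld(float parentX, float parentY, float parentZ) {
        worldX = parentX + localX;
        worldY = parentY + localY;
        worldZ = parentZ + localZ;
        for (SceneNode* c : children)
            c->updateWorld(worldX, worldY, worldZ);
    }
};
```

With full transforms the addition becomes a matrix product, but the traversal and the inheritance semantics are exactly the same.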

The spatial index tracks these transformation changes as well, so features based on spatial queries remain functional.

To test this, I have added simple physics interaction to the FPS character controller, making it possible to pick up objects and drop or throw them. See the results on YouTube:

Real-Time Volumetric Light Shafts

Lately, I have replaced the recently added screen-space light shaft algorithm with a new one, because I was not satisfied with the results. The new method uses ray marching with interleaved sampling of the shadow map to simulate single scattering in participating media in real time, so it produces more realistic results at relatively low cost.

The implementation calculates the sample values in a separate pass into a new framebuffer, and then gathers the samples in the light combiner pass. The interleaved sampling is driven by a hand-crafted 8×8-pixel grayscale image containing random pixels with a uniform value distribution.
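The idea behind the interleaved sampling can be sketched like this: each pixel offsets its first ray-march sample by a value from the tiled 8×8 dither pattern, so neighbouring pixels probe different depths and the banding averages out when the samples are gathered. The sketch below is CPU-side C++ for clarity (in the engine this is shader work), and all names are illustrative:

```cpp
#include <cmath>

// Per-pixel start offset in [0, 1), read from a tiled 8x8 dither pattern
// (the hand-crafted grayscale image mentioned above).
float ditherOffset(int pixelX, int pixelY, const float pattern[64])
{
    return pattern[(pixelY % 8) * 8 + (pixelX % 8)];
}

// March 'steps' samples from the camera to 'depth' along the view ray,
// starting at the dithered offset. 'inShadow' stands in for the shadow-map
// lookup; each unshadowed sample contributes in-scattered light.
template <typename ShadowFn>
float scatteredLight(float depth, int steps, float offset, ShadowFn inShadow)
{
    float stepLen = depth / steps;
    float light = 0.0f;
    for (int i = 0; i < steps; ++i) {
        float t = (i + offset) * stepLen;  // dithered sample position
        if (!inShadow(t))
            light += 1.0f;
    }
    return light / steps;  // normalized single-scattering estimate
}
```

A low-pass (or bilateral) filter over the result then trades the dither pattern's structured noise for a smooth shaft at a fraction of the cost of dense sampling.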

Here are some images of the results.