Category Archives: 3D programming
I have added a new technique called box-projected cubemap environment mapping… a name long enough that I’d rather not write it down again. The great thing about it is that it mimics reflections on shiny surfaces using a simple yet quite accurate approximation based on environment cubemap textures.
For more details, see http://www.gamedev.net/topic/568829-box-projected-cubemap-environment-mapping.
The hard part was finding a hidden bug in the engine that prevented me from rendering in-scene cubemap snapshots using a dedicated off-screen compositor chain. Eventually I managed to find it, and it seems it was worth the effort.
The engine automatically places light probes in each room of the level at load time, then renders a fully lit HDR cubemap texture for each of them. The engine uses these cubemaps, together with some extra parameters assigned to the probes, to calculate the reflections with the aforementioned method.
At first I experimented with a similar idea that uses spheres to distort the cubemap look-up coordinates, but it was inaccurate. BPCEM gives more accurate results in box-shaped rooms.
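The core of the box projection can be sketched as follows. This is a minimal Python sketch of the technique described in the gamedev.net thread, not the engine’s actual shader code; the parameter names are illustrative:

```python
def box_project(pos, refl, box_min, box_max, cube_pos):
    """Correct a cubemap reflection lookup for a box-shaped room (BPCEM).

    pos      -- world-space position of the shaded point
    refl     -- normalized reflection direction
    box_min, box_max -- corners of the room's bounding box
    cube_pos -- world-space position of the cubemap probe
    Returns the corrected cubemap lookup direction.
    """
    # Distance along refl to each axis-aligned box plane in front of the ray.
    intersections = []
    for i in range(3):
        if refl[i] > 0:
            intersections.append((box_max[i] - pos[i]) / refl[i])
        elif refl[i] < 0:
            intersections.append((box_min[i] - pos[i]) / refl[i])
        else:
            intersections.append(float("inf"))
    # Nearest box plane hit in the reflection direction.
    dist = min(intersections)
    # Intersection point on the box, re-expressed relative to the probe.
    hit = [pos[i] + refl[i] * dist for i in range(3)]
    return [hit[i] - cube_pos[i] for i in range(3)]
```

For a point at the centre of a unit room with the probe also at the centre, the lookup direction is unchanged; the correction only matters for points away from the probe, which is exactly what makes the reflections line up with the room’s walls.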
Here are some images of the results:
A demo video:
Lately, I have rewritten a large part of the hierarchical scenegraph management code.
Scene objects (objects to be rendered in the 3D view, or that affect the 3D scene in some way, such as models or light sources) can simply be added to the scene, but a scene object can also be added as a child of another scene object to form an object tree. In the latter case, the child objects inherit their base transformation from their parent – or chain of parents. This means that if the 3D transform of a parent node changes (due to physics interaction, for example), its children follow the parent node in 3D space, keeping their relative (local) transformation to it. We can construct compound objects this way: attach light sources to car models or street lights, build objects from multiple models, and so on.
The spatial index tracks these transformation changes as well, so features based on spatial queries remain functional.
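The inheritance rule itself is simple. Here is a minimal sketch with transforms reduced to plain 3D translations (the real engine composes full transform matrices, but the parent-chain walk is the same idea; the class and method names are made up for illustration):

```python
class SceneNode:
    """Minimal scenegraph node; transforms are simplified to translations."""

    def __init__(self, local=(0.0, 0.0, 0.0)):
        self.local = list(local)   # translation relative to the parent
        self.parent = None
        self.children = []

    def add_child(self, node):
        node.parent = self
        self.children.append(node)

    def world(self):
        # World transform = parent's world transform composed with our local one.
        if self.parent is None:
            return list(self.local)
        p = self.parent.world()
        return [p[i] + self.local[i] for i in range(3)]


# A headlight attached to a car follows the car when physics moves it:
car = SceneNode((10.0, 0.0, 0.0))
headlight = SceneNode((0.0, 1.0, 2.0))
car.add_child(headlight)
car.local[0] = 20.0            # the car moves...
moved = headlight.world()      # ...and the child follows, keeping its local offset
```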
To test this, I have added a simple physics interaction to the FPS character controller, making it possible to pick up objects and drop or throw them. See the results on YouTube:
Lately, I have replaced the recently added screen-space light shaft algorithm with a new one, because I was not satisfied with the results. The new method uses ray marching with interleaved sampling of the shadow map to simulate single scattering in participating media in real time, which produces more realistic results at a relatively low cost.
The implementation calculates the sample values in a separate pass into a new framebuffer, then collects the samples in the light combiner pass. The interleaved sampling is driven by a hand-crafted 8×8-pixel grayscale image containing random pixels with a uniform value distribution.
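The per-ray part of the idea can be sketched like this. This is a simplified single-channel version, not the engine’s shader: the dither value stands in for the 8×8 pattern lookup, and `in_light` stands in for the shadow-map test:

```python
def light_shaft(ray_length, steps, dither, in_light):
    """Estimate single scattering along one view ray by ray marching.

    ray_length -- distance from the camera to the first opaque surface
    steps      -- number of ray-march samples
    dither     -- per-pixel start offset in [0, 1), read from the 8x8
                  interleaved-sampling pattern (breaks up banding between
                  neighbouring pixels)
    in_light   -- callable(distance) -> True if the shadow map says the
                  point at that distance along the ray is lit
    """
    step = ray_length / steps
    scattered = 0.0
    t = dither * step          # each pixel starts at a slightly different offset
    for _ in range(steps):
        if in_light(t):
            scattered += 1.0
        t += step
    return scattered / steps   # fraction of the ray that is lit
```

Because adjacent pixels use different dither offsets, the coarse per-pixel sampling averages out over the 8×8 block, which is what keeps the step count (and cost) low.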
Here are some images of the results.
I have added a new post-processing effect that approximates light scattering of the sunlight, following this GPU Gems 3 article. It required some minor coding in the engine: I introduced two new built-in shader parameters – the sun’s screen-space position used in the calculation, and a term that gradually scales down the ray effect when the camera is not turned towards the sun. The effect itself is implemented in the post-processing shader of the deferred chain, which made it easy to implement, because all the required data is available at that point of the rendering.
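The sampling loop of that GPU Gems 3 technique boils down to a radial blur toward the sun’s screen position. A minimal single-channel sketch (parameter names follow the article’s, but this is illustration, not the engine’s shader):

```python
def god_rays(sample_at, uv, sun_uv, num_samples=8,
             density=1.0, weight=0.125, decay=0.9, exposure=1.0):
    """Radial sampling toward the sun's screen-space position.

    sample_at -- callable((u, v)) -> scene brightness at that screen coord
    uv        -- screen-space position of the current pixel
    sun_uv    -- screen-space position of the sun
    """
    # Step vector from the pixel toward the sun, split over the samples.
    delta = [(uv[i] - sun_uv[i]) * density / num_samples for i in range(2)]
    pos = list(uv)
    illumination_decay = 1.0
    color = 0.0
    for _ in range(num_samples):
        pos = [pos[i] - delta[i] for i in range(2)]   # step toward the sun
        color += sample_at(tuple(pos)) * illumination_decay * weight
        illumination_decay *= decay                   # samples farther out count less
    return color * exposure
```

In the engine the extra “facing” term mentioned above would simply scale the result down as the camera turns away from the sun.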
Here are some pictures of the result.
Just finished the cascaded shadow map support. It uses the technique described in NVidia’s CSM paper. The original method gives somewhat flickering shadow edges, so I added a flicker-reduction technique described in a talk about Frostbite. To make the shadows smoother, I also added 3×3 PCF filtering. Here is a short video of the results:
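The 3×3 PCF step can be sketched as follows, with the shadow map modelled as a plain grid of depths (a CPU-side illustration of the filter, not the actual shader; the bias value is a typical made-up choice):

```python
def pcf_3x3(shadow_map, x, y, receiver_depth, bias=0.005):
    """3x3 percentage-closer filtering on a depth map (list of rows).

    Instead of a single binary in-shadow test, the test is averaged over
    a 3x3 neighbourhood of texels, giving soft shadow edges.
    """
    h, w = len(shadow_map), len(shadow_map[0])
    lit = 0.0
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            sx = min(max(x + dx, 0), w - 1)   # clamp to the map edge
            sy = min(max(y + dy, 0), h - 1)
            # The texel passes if the stored occluder is not closer
            # than the receiver (minus a small bias against acne).
            if receiver_depth - bias <= shadow_map[sy][sx]:
                lit += 1.0
    return lit / 9.0
```

At a shadow boundary the result is a fraction between 0 and 1 rather than a hard step, which is exactly the smoothing mentioned above.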
After some struggling with the frustum and light projection matrix calculations, CSM is finally working in the engine. There are four splits, a parallel (orthographic) light projection and dynamic bounding frustum calculation. The technique is basically the same as the one described in NVidia’s article.
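The split distances in that article come from blending the logarithmic and uniform split schemes. A small sketch of that calculation (the blend factor 0.75 is just a common illustrative default, not necessarily what the engine uses):

```python
def csm_splits(near, far, count, lam=0.75):
    """Split distances for cascaded shadow maps.

    Blends the logarithmic scheme (good sampling distribution) with the
    uniform scheme (stable far splits); lam = 1 is fully logarithmic.
    Returns count + 1 distances from near to far.
    """
    splits = []
    for i in range(count + 1):
        f = i / count
        log_d = near * (far / near) ** f        # logarithmic split
        uni_d = near + (far - near) * f         # uniform split
        splits.append(lam * log_d + (1.0 - lam) * uni_d)
    return splits
```

Each consecutive pair of distances then bounds one cascade’s sub-frustum, for which the light’s orthographic projection is fitted every frame.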
There is still an annoying bug with frustum culling in the shadow map render code, so the shadows sometimes leak. But it performs quite well after all.
Here are some pictures of the results.
I have played with the shaders in the renderer. It now supports glow maps and specular maps. The specular map is used in two ways: on the one hand, as the specular term of the light equation in the lighting pass; on the other, to add some environment reflections on objects in the final pass.
The terrain shader blends between multiple textures depending on the terrain slope and height. The blending is done at runtime, so changes in the terrain geometry caused by the erosion simulation show up properly in the rendering.
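A slope/height blend of this kind can be sketched like this. The layer names and thresholds are made up for illustration; the post does not say which textures or cutoffs the engine actually uses:

```python
def terrain_weights(slope, height, rock_slope=0.6, snow_height=0.7):
    """Blend weights for three terrain layers from slope and height.

    slope  -- 1 - dot(normal, up); 0 on flat ground, larger on cliffs
    height -- normalized terrain height in [0, 1]
    Returns (grass, rock, snow) weights that sum to 1.
    """
    # Steeper ground fades toward the rock texture.
    rock = min(max(slope / rock_slope, 0.0), 1.0)
    # High ground fades toward the snow texture...
    snow = min(max((height - snow_height) / (1.0 - snow_height), 0.0), 1.0)
    # ...but snow does not stick to steep rock faces.
    snow *= (1.0 - rock)
    grass = max(1.0 - rock - snow, 0.0)
    return grass, rock, snow
```

Because the weights are computed from the current normal and height in the shader, an eroded cliff automatically turns rocky without any texture re-baking.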
Here are some images of the results.
To emphasize the effects of environment reflections (and to simplify the calculations), I have replaced the procedural skybox with a static HDR environment cube map.
Update 10/01/12: I have found and fixed a bug in the shaders, and done some modelling to see the results. I also obtained some models (cars and a dumpster) from TurboSquid, and added some new images to the slideshow to show the results.
…and there are spotlights. And we see the lights, and this is good.
And I am very happy with it, because it took some time to get it working correctly due to some small and annoying bugs. But the results are fine: we can add spotlights to the scene, put projected textures on them, and they can cast dynamic shadows. They are integrated into the deferred renderer, of course. Spotlights are rendered with a closed pyramid as the light volume, but it would also be possible to use a cone instead. The bounding boxes are adjusted according to the current transform of the light frusta to keep visibility culling efficient.
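The bounding-box fitting can be sketched as follows: take the corners of the light’s pyramid in world space and wrap an AABB around them. A minimal sketch, with names invented for illustration:

```python
import math

def spot_pyramid_corners(apex, direction, up, right, light_range, half_angle):
    """World-space corners of a spotlight's bounding pyramid.

    apex       -- light position
    direction, up, right -- orthonormal basis of the light's orientation
    light_range -- reach of the light along its direction
    half_angle  -- half cone angle in radians
    """
    r = light_range * math.tan(half_angle)      # half extent of the far face
    far_center = [apex[i] + direction[i] * light_range for i in range(3)]
    corners = [list(apex)]
    for su, sr in ((1, 1), (1, -1), (-1, 1), (-1, -1)):
        corners.append([far_center[i] + su * r * up[i] + sr * r * right[i]
                        for i in range(3)])
    return corners

def aabb(points):
    """Axis-aligned bounding box of a point list, used for visibility culling."""
    mins = [min(p[i] for p in points) for i in range(3)]
    maxs = [max(p[i] for p in points) for i in range(3)]
    return mins, maxs
```

Recomputing the corners from the light’s current transform each time it moves keeps the AABB tight, so the culling stays effective even for animated spotlights.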