So I managed to fix the grass lighting bug with relative ease. I was right: I introduced it probably 30 minutes before doing the packaging. The bug with grass not responding to terrain deformation might have to wait until Monday. I'm not sure yet, because I first need to optimize that code a lot; the fix involves some very slow operations.
Still, I am on track. Snapshot 6 will be proper and will also finally have INI file saving. I also isolated all grass-related features into a standalone class that is very easy to use. All you have to do is give it the terrain, and it takes care of the rest: loading, saving, responding to deformation, rendering, and so on.
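In spirit, using it looks something like the sketch below. The names here (GrassSystem, Terrain and so on) are placeholders I made up for the blog, not the engine's actual identifiers:

```cpp
#include <string>

// Placeholder types standing in for the engine's real ones.
struct Terrain {};
struct Region {};
struct Camera {};

// Hypothetical sketch of the standalone grass class described above;
// the real interface in the engine almost certainly differs.
class GrassSystem {
public:
    // The only required input: the terrain the grass lives on.
    explicit GrassSystem(Terrain& t) : terrain(t) {}

    // Everything else is handled internally.
    void Load(const std::string& savePath) {}        // restore grass state from a save
    void Save(const std::string& savePath) const {}  // persist grass state
    void OnTerrainDeformed(const Region& dirty) {}   // update bushlets in the deformed area
    void Update(float dt) {}                         // animation, LOD, etc.
    void Render(const Camera& camera) {}             // draw all visible grass cells

private:
    Terrain& terrain;
};
```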
So let's talk about future engine features. The very next feature will be static persistence and a few new assets to persist. Finally, those trees and boulders will be available for map generation, and you will also be able to place them yourself; once you do, they will still be there the next time you load your save. This will be pretty easy: I just need to optimize the old placement code, add persistence, and clear the grass around newly placed objects.
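As a rough, hypothetical sketch of what that could look like (the types and stand-in functions below are assumptions, not the engine's real code):

```cpp
#include <cstdint>

// Hypothetical record for one placed static object (tree, boulder, ...).
// The layout is an assumption, not the engine's actual save format.
struct PlacedStatic {
    uint32_t assetId;      // which asset was placed
    float    x, y, z;      // world position
    float    yaw, scale;
};

// Stand-ins for the real engine calls.
void AppendToSave(const PlacedStatic&) {}
void ClearGrassAround(float /*x*/, float /*z*/, float /*radius*/) {}

// Placing an object does two things: record it so it is still there on the
// next load, and clear the grass in its footprint so blades don't poke through.
void PlaceStatic(const PlacedStatic& s, float footprintRadius) {
    AppendToSave(s);                               // persistence
    ClearGrassAround(s.x, s.z, footprintRadius);   // grass clearing
}
```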
I'm not sure what the next step will be. With fully persistent maps, I could go heavy into map generation, adding about 20-30 assets and making the outside world interesting to look at first. After it is interesting to look at, it will need to become interesting to explore. Or I might delay this in favor of adding lakes, with buoyancy, ripples, and reflective graphics. The third major candidate is random dungeons. I could create some very simple rectangle-based assets and write a random dungeon generator. Later, a skilled 3D artist (or some hobo I can find who does 3D for food) could replace those rectangular shapes with a proper dungeon tile set. It will work as long as the alignment and join rules are respected. The same basic tile set could be skinned to create several different-looking dungeon sets.
Too bad I don't have my old audience or traffic on the blog; we could have held a vote! I'm really excited about all three possibilities. OK, maybe not exactly excited about lakes, but I'll gladly do it.
Once this is done and we have a good overworld and good dungeons, maybe with lakes, we need path-finding and combat. And in the meantime, I'll be slowly adding skill implementations, new assets and new creation tools. The next creation tool will be a variable-radius smooth tool. Have rough terrain? Need to place some houses and make it easier to traverse? Smooth it out on a large scale. The small-scale editing tools work for this, but creating smooth terrain with them is fiddly and takes some time. With the smooth tool, you'll just need a click.
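To give an idea of what such a brush boils down to, here is a minimal sketch over a plain row-major heightmap; the engine's real terrain code is of course different:

```cpp
#include <cmath>
#include <vector>

// One application of a variable-radius smooth brush: every height inside the
// brush is blended toward the average of its neighborhood, with falloff
// toward the brush edge so there is no visible seam.
void SmoothBrush(std::vector<float>& heights, int width, int height,
                 int centerX, int centerY, int radius, float strength) {
    std::vector<float> original = heights;   // read from a copy, write in place
    for (int y = centerY - radius; y <= centerY + radius; ++y) {
        for (int x = centerX - radius; x <= centerX + radius; ++x) {
            if (x < 1 || y < 1 || x >= width - 1 || y >= height - 1) continue;
            float dist = std::sqrt(float((x - centerX) * (x - centerX) +
                                         (y - centerY) * (y - centerY)));
            if (dist > float(radius)) continue;

            // Average of the 3x3 neighborhood, taken from the unmodified copy.
            float avg = 0.0f;
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx)
                    avg += original[(y + dy) * width + (x + dx)];
            avg /= 9.0f;

            // Fade the effect toward the edge of the brush.
            float falloff = 1.0f - dist / float(radius);
            heights[y * width + x] += (avg - heights[y * width + x]) * strength * falloff;
        }
    }
}
```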
These are all functionality-related features. But where do I stand with graphics? Well:
- Terrain: mostly done. Graphics-wise I am happy. The only thing I might change is the blending of textures from close to far to reduce repeating patterns, and maybe add support for a 5th or 6th material.
- Grass: almost done. I need to update the generator to use all available grass textures, not just one, and also make roughly 1 in 10 bushlets use an off color, just to break up the uniformity. A second grass texture might make it into Snapshot 6.
- Lighting: on the right track. Terrain handles up to 10 point lights per cell, and I might raise that to 15. Grass handles up to 5 per cell, and I might raise that to 10. Once I add the static geometry, each mesh should also handle 5-10 dynamic lights. Dynamic geometry will be harder to illuminate, but I'll handle it. There is still one nasty bug with point lights I can't figure out that makes me waste one extra shader output, so rendering 5 lights per mesh might require two passes.
- Shadows: I'm not touching those with a ten-foot pole. This is such an American expression. I have no idea how long a foot is. And I don't care. But I love poking stuff with poles. So anyway, I have wasted weeks trying to create good-looking shadows. I believe past projects have died because of shadows. That's why I am not touching them. I won't even take the shadow mapping from an open-source project. On the other hand, if somebody from those projects is willing to isolate the shadowing code and package it into some easy-to-set-up components, I will give them a try. I might have to find somebody and pay them for that. As long as I am not the one touching the shadows. This is a purely subjective and artificial requirement. I stubbornly refuse to waste more time on shadows and delay other good features for their sake.
- Post-processing: well, bloom is working round the clock. I believe Snapshot 5 is more bloomy than the rest, and I'll dial it back. I used to have more post-processing, like the black edges used in cel shading, but that is dead code.
What? Dead code? In my engine? Can't stand for that! Let's fix it and enhance the post-processing pipeline for the ultimate graphics! I'm not sure about the black borders; I don't think they fit well with my assets. What I really want is light shafts. Not a very physically based or expensive implementation, just some basic, subtle ones. Then I would like to enhance my fog with a depth-of-field effect. I generally dislike DOF and turn it off in games, but I love the way Warframe does DOF. So I'll be trying to implement a DOF scheme where the focal plane is set up so that everything in front of you stays in focus and everything becomes blurry before it reaches the fog. Finally, since we are at it, I might add some SSAO.
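The DOF blur curve I have in mind would be something like this (written as plain C++ for illustration; the function and parameter names are made up and the distances are tunable):

```cpp
#include <algorithm>

// Blur amount for the fog-tied DOF idea: everything closer than focusEnd
// stays perfectly sharp, and blur ramps up to its maximum by the time the
// fog takes over. In practice this math would live in the post-process shader.
float DofBlurFactor(float sceneDepth,   // linear depth of the pixel
                    float focusEnd,     // everything closer stays in focus
                    float fogStart)     // by this distance we are fully blurred
{
    if (sceneDepth <= focusEnd) return 0.0f;                 // in focus
    float t = (sceneDepth - focusEnd) / (fogStart - focusEnd);
    return std::clamp(t, 0.0f, 1.0f);                        // 0 = sharp, 1 = max blur
}
```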
Let's start with light shafts. Light shafts require a depth buffer, but depth is boring, and since other effects require a normal buffer, we'll start with that.
My post-processing framework also expects these buffers, so let's write a fresh implementation for generating them and get rid of the dead code. First, we fix and resurrect the secondary render targets and their on-screen display:
The second render target is now the normal target. As you can see, this target renders the scene again, but it filters out some content. We don't want the skybox or the grass. Grass is just some eye candy: it doesn't respond to physics, and we don't want it cel-shaded or influencing SSAO. With this filtering, the pipeline for generating these targets becomes simpler and faster. But we also want to strip out more, so we strip all the lighting functionality out of the secondary render targets:
As you can see, while the regular scene is rendered fully, the normal target doesn't waste time on unimportant components, on either the CPU or the GPU. Lighting is only computed for the main scene.
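Schematically, the two channels now look something like this; the types are placeholders and the real pipeline is more involved:

```cpp
#include <vector>

// Rough sketch of the two-pass structure described above.
enum class RenderChannel { Main, NormalTarget };

struct Mesh {
    bool isSkybox = false;
    bool isGrass  = false;
    void DrawLit()     {}   // full shaders + point lights (main scene only)
    void DrawNormals() {}   // cheap normal-output shader, no lighting
};

void RenderScene(std::vector<Mesh>& meshes, RenderChannel channel) {
    for (Mesh& mesh : meshes) {
        if (channel == RenderChannel::NormalTarget) {
            // The normal target filters out content that post-processing
            // should ignore: the skybox and the grass eye candy.
            if (mesh.isSkybox || mesh.isGrass) continue;
            mesh.DrawNormals();   // no lighting work on CPU or GPU
        } else {
            mesh.DrawLit();       // regular, fully lit scene
        }
    }
}
```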
Finally, we replace the fairly expensive shaders used in the main target with a new set of shaders for the secondary target. For now, there is only a normal output shader:
Ignore the fact that the colors are not scaled correctly to the RGB range; I have since fixed this.
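For reference, the scaling in question is the usual remap of a [-1, 1] normal into the [0, 1] range of the render target, roughly this (shown here as C++ rather than the actual shader code, which isn't listed in this post):

```cpp
// Minimal Vec3 stand-in for illustration.
struct Vec3 { float x, y, z; };

// World-space normals, interpolated from the vertices, live in [-1, 1];
// the render target stores colors in [0, 1], hence the scale-and-bias.
// Forgetting this remap is exactly the scaling issue mentioned above.
Vec3 EncodeNormal(const Vec3& n) {
    return { n.x * 0.5f + 0.5f,
             n.y * 0.5f + 0.5f,
             n.z * 0.5f + 0.5f };
}

// Effects that consume the target undo the remap before using the normal.
Vec3 DecodeNormal(const Vec3& c) {
    return { c.x * 2.0f - 1.0f,
             c.y * 2.0f - 1.0f,
             c.z * 2.0f - 1.0f };
}
```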
For some strange reason, I decided not to remove fog from the secondary render target pipeline yet:
I reduced the fog offset to make it visible. I will probably remove fog from the pipeline, though. The main reason for this screenshot is to show that I added the ability to inspect secondary targets in full-screen mode.
Next, we add all the mobile meshes to the pipeline:
And replace the shaders they will use:
Ignore the small transparency bug. It is fixed.
Now comes the real challenge. While most of the effects I need can use such a simple normal target (plain interpolated vertex normals), some might need real normal mapping:
This was very hard to do. In the past I failed to go from normal maps to a scene like this, where all the normals point the right way. It took me two hours, most of that spent studying the tangent-space transformation. And this was the key: the values you sample from a normal map live in tangent space and have to be transformed out of it, using the tangent basis, before they line up with the rest of the scene.
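Written out as plain C++ (mirroring what the pixel shader does; the vector type here is a stand-in), the step looks like this:

```cpp
#include <cmath>

// Minimal vector type for illustration only.
struct Vec3 {
    float x, y, z;
    Vec3 operator*(float s) const { return {x * s, y * s, z * s}; }
    Vec3 operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
};

Vec3 Normalize(Vec3 v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return {v.x / len, v.y / len, v.z / len};
}

// sample  : the value read from the normal map, already remapped to [-1, 1].
// T, B, N : per-vertex tangent, bitangent and normal, interpolated, in world space.
// The sampled vector lives in tangent space; combining it with the TBN basis
// expresses it in world space, so it points the right way for lighting and SSAO.
Vec3 TangentToWorld(const Vec3& sample, const Vec3& T, const Vec3& B, const Vec3& N) {
    return Normalize(T * sample.x + B * sample.y + N * sample.z);
}
```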
The terrain is not fully normal mapped because, for now, I don't need it. And while blending the colors is easy, I'm not sure yet how to blend the normals.
Since we can now render normals in two ways, I added the ability to switch between the methods on the fly. The simple method:
Or the one that samples the normal maps for objects:
Since I don't want to leave you with screenshots that were taken before all the bug fixes, here is a glorious shot of the final look of the normal target:
I think this is already paying off and might turn out to be a valuable debugging tool. Take a look at this screenshot:
You know how I'm always complaining that the rings on the barrel sometimes look "extruded" in the wrong direction? Well, those normal maps seem to be changing direction somewhere between the left and the right side. This might finally let me track down the problem.
So what next? Now that I have the normal target, I need to add the depth target. Then I will start working on the light shafts.