Monday, November 14, 2011

So Random... - 01 - Noise World

I have three more major things to do before Tech Demo 1: first I must finish the LOD switching. The coding is done, but I have tons of profiling to do, and this will not result in posts, so it will be a longer but silent task. Then I need to decide what to do with the shadows. Stencil shadows work fast enough only with very low item density. I had zero luck with shadow mapping. For the Irrlicht solution I tried XEffects, but that package is far too old and can't be compiled with modern Irrlicht. XEffects - Reloaded has several problems, the largest being that it won't work with an out-of-the-box Irrlicht DLL. You need a special DLL. Of course, the link to the DLL is dead. I still got my hands on it, but because I have also modified Irrlicht I get some weird failure in the creation of the 3D device. I do not have the sources for the modified Irrlicht that XEffects was created from, so I can't merge the two. And the biggest problem is that XEffects requires GPUs that are more powerful and newer than the ones I am targeting. Speaking of dead links: at least 90% of the links you find when you google a topic like shadow mapping are either incredibly outdated or, more often, just dead. The final thing I must do is the administrative tasks before the release, which again do not involve coding.

So I need some filler content that can occupy my time for the time being. I decided to do some very experimental stuff (even for me) while the above points are resolved, still following these rules:
  • What I implement must potentially advance the project by adding features or at least improving existing features while adding new lines of code. I could spend days tweaking values and balancing stuff, but I am not allowed to do that.
  • It must not be completely outlandish and unrelated, like simulating the movement of bird wings in thick mustard.
  • Every session must add at least 200 lines of code.
For the first topic I thought I'd start with one thing that is missing and whose absence is very apparent: "overworld" generation. I need a larger and fairly realistic land mass from which you can select your embark location.

Right now I am using midpoint displacement for the embark location, but I will switch over to Perlin noise. What is Perlin noise you ask? This:


It is coherent noise (smooth and without seams, optionally tileable) that has a bunch of interesting properties for world generation. I hear the big boys are moving away from Perlin noise to other types of noise that have even better properties, but I'll stick to what is clearly documented. Also, Perlin has a large number of advantages over midpoint displacement:
  • Midpoint displacement is slow. You are basically subdividing your area and randomly shifting the midpoint and then recursively repeating the process until you get a small (typically size of 1) square. So basically recursive computations with floating point numbers, where the size of the map directly influences the depth of the recursion. Slow. Perlin noise is no speedster, but by adjusting the number of octaves and interpolation method you can get some faster results for testing purposes. You can also do a fast calculation followed by a slower but higher fidelity calculation based on some information you have learned from the fast run.
  • It is hard to control the shape of midpoint displacement. You are dividing and randomly shifting the middle point. If you want a better shape you use the diamond-square method, which makes it even harder to control. The dimension of the results is also kind of fixed. It is not a lot easier to control the shape of Perlin noise, but you get to discover some interesting tricks to compensate for this. Also, a good Perlin implementation is very customizable. See what I can do with the same algorithm:

  • Midpoint displacement must be computed down to the smallest unit and the end result must be kept in memory. Perlin noise does not need to be kept in memory. You can compute it on the fly for every point you desire and cache the results, or just do it on the fly (as long as it is fast enough). So you can have a virtually infinite-sized world. Diamond-square displacement on the other hand requires enough memory for the whole generated area to fit inside, and the value of a point depends on the points around it. Here is a nice comparison shot of the same "world" generated once with low detail and once with high detail (the two sets were generated in complete isolation; one does not need to know about the other):

  • Perlin noise offers "infinite" detail as you zoom in. As you increase the zoom level you need to increase the detail parameters, but theoretically the only limit is how fast you want the generation to be. The memory consumption is constant (and zero if you decide not to cache anything; not a good idea). Using the above high detail map, let me show how it looks when zoomed in 20 times to a point near one of the darker spots in the middle:
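Since the evaluate-any-point, constant-memory property is the heart of the list above, here is a minimal sketch of per-point fractal noise in C++. This is my own illustration, not the project's code: it uses hash-based value noise instead of true gradient (Perlin) noise to stay short, and all the constants are arbitrary:

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>

// Deterministic pseudo-random value in [-1, 1] for an integer lattice point.
// Any decent integer hash works here; this one is purely illustrative.
static double lattice(int x, int y) {
    uint32_t h = (uint32_t)x * 374761393u + (uint32_t)y * 668265263u;
    h = (h ^ (h >> 13)) * 1274126177u;
    h ^= h >> 16;
    return (double)h / 4294967295.0 * 2.0 - 1.0;
}

// Smooth interpolation weight; this is what makes the noise "coherent".
static double fade(double t) { return t * t * (3.0 - 2.0 * t); }

// Coherent 2D value noise: blend the four surrounding lattice values.
static double noise2d(double x, double y) {
    int x0 = (int)std::floor(x), y0 = (int)std::floor(y);
    double tx = fade(x - x0), ty = fade(y - y0);
    double a = lattice(x0, y0),     b = lattice(x0 + 1, y0);
    double c = lattice(x0, y0 + 1), d = lattice(x0 + 1, y0 + 1);
    double top = a + (b - a) * tx, bottom = c + (d - c) * tx;
    return top + (bottom - top) * ty;
}

// Fractal sum of octaves. Every point is computed in isolation, so memory
// use is constant no matter how large the "world" is. Fewer octaves means
// faster but rougher results, which is handy for preview passes.
double fbm(double x, double y, int octaves) {
    double sum = 0.0, amp = 1.0, freq = 1.0, norm = 0.0;
    for (int i = 0; i < octaves; ++i) {
        sum  += amp * noise2d(x * freq, y * freq);
        norm += amp;
        amp  *= 0.5;   // persistence: each octave contributes half as much
        freq *= 2.0;   // lacunarity: each octave has twice the frequency
    }
    return sum / norm; // roughly in [-1, 1]
}
```

Dropping the octave count gives the fast-preview behavior from the first bullet; a high-octave pass can later reuse the exact same coordinates.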


As you can see, the zoomed in portion is smooth and still detailed. Generating the zoomed in portion takes the same amount of time as generating the zoomed out one. If you treat these results as the values of a height map, you can begin to see how this could be used for world generation: the darker a pixel is, the lower the elevation.
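To make the darker-is-lower and zoom ideas concrete, here is a small sketch (my own illustration, not the project's renderer) that dumps a window of the height field as ASCII shades. The `noise` function is a trivial stand-in for real coherent noise, and shrinking `span` is exactly the zoom: same cost per pixel, any window you like:

```cpp
#include <cassert>
#include <cmath>
#include <string>

// Stand-in for a real coherent-noise height function; returns [-1, 1].
static double noise(double x, double y) {
    return std::sin(x * 1.7) * std::cos(y * 2.3);
}

// Sample a size x size window of the height field centered on (cx, cy).
// span is the window width in noise coordinates: a smaller span is a deeper
// zoom, computed at exactly the same cost per pixel.
std::string renderAscii(double cx, double cy, double span, int size) {
    static const char shades[] = " .:-=+*#";  // darker character = lower elevation
    std::string out;
    for (int j = 0; j < size; ++j) {
        for (int i = 0; i < size; ++i) {
            double x = cx - span / 2 + span * i / (size - 1);
            double y = cy - span / 2 + span * j / (size - 1);
            double h = (noise(x, y) + 1.0) / 2.0;  // map to [0, 1]
            out += shades[(int)(h * 7.999)];
        }
        out += '\n';
    }
    return out;
}
```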

But the above patch does not look realistic at all. What piece of land looks like this? Maybe a section full of hills somewhere in the middle of a big continental map, but such maps are boring. I'll present a fairly intuitive approach, discoverable by experimentation, that gives fair results.

First, one might argue that you can't really tell what is going on because of the smooth transition in the shades of gray. So one might be tempted to render the same "map" with some form of edge detection/strengthening algorithm. Or just simply increase the amplitude:


Wow! Is that it? That's the result we get by simply increasing the amplitude? The image on the right is the same map, but rendered at a zoom of 20 while centered on a random point. This gives an interesting clue: the zoomed out map is still too busy to be realistic, but the zoomed in one looks like a section from a continental mass. Maybe we need to start with such a high zoom level and zoom in even more to get to the expedition section. But before I talk about that, let me prove that the two maps are identical and also demonstrate some interesting composition properties. Let me show you what happens if we take the two previous pictures and place them on top of each other, with white used for transparency (please ignore the picture on the right; only the one on the left is important; but still, the right section looks very interesting if you imagine it with selectively inverted heights):
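The amplitude trick can be sketched in one line (my reading of it; the clamp range and the function name are assumptions, not the project's code): scaling the raw noise value and clamping saturates the mid-grays toward pure black and white, which is what hardens the edges:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Raw coherent noise n is in [-1, 1]; boosting the amplitude and clamping
// pushes most values to the extremes, so coastlines become sharp.
double amplify(double n, double amplitude) {
    return std::clamp(n * amplitude, -1.0, 1.0);
}
```

With amplitude 1 nothing changes; at amplitude 10 or more, only values very close to zero survive as shades of gray, which matches the busy black-and-white map above.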


Back to the use of zoom for more realistic land masses! By applying the zoom factor of 20 for the normal map, and again 20 (so 400 total) for the zoomed in section and after adjusting some parameters to make edges more pronounced at such zoom levels, we get this:


The image on the left is our section of the continental map and it looks fairly realistic. One could choose the zoom-in coordinates randomly, do some computation to achieve some ratio of water/land or some desired average height, or just do what I did: take coordinates (0, 0). The local map on the other hand does not look that good. It is very hard to find a coordinate at which we don't get an image similar to the one on the right: a clear and relatively straight separation of water and land.
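One cheap way to do that "computation to achieve some ratio of water/land" (a sketch of an approach I would assume, not necessarily what the project does) is to Monte-Carlo sample the candidate window, count how many points sit above sea level, and reject windows until the fraction lands in a desired band. `noise` is again a stand-in height function:

```cpp
#include <cassert>
#include <cmath>
#include <random>

// Stand-in for a coherent-noise height function in [-1, 1].
static double noise(double x, double y) {
    return std::sin(x * 1.7) * std::cos(y * 2.3);
}

// Estimate the fraction of land in a span x span window centered on
// (cx, cy) by random sampling. A fixed seed keeps the sketch deterministic.
double landFraction(double cx, double cy, double span,
                    int samples, double seaLevel) {
    std::mt19937 rng(42);
    std::uniform_real_distribution<double> offset(-span / 2.0, span / 2.0);
    int land = 0;
    for (int i = 0; i < samples; ++i) {
        double x = cx + offset(rng), y = cy + offset(rng);
        if (noise(x, y) > seaLevel) ++land;
    }
    return (double)land / samples;
}
```

An embark picker could then retry random coordinates until, say, the fraction falls between 0.3 and 0.7, with the thresholds being whatever makes maps interesting.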

I tried to compensate for this by lowering the zoom factor and making the edges even harder. Choosing a starting location that is more interesting and has a lake also helps:


The problem with this approach is that you lose detail. So I tried to generate two maps, one with an amplitude of 10 and one with an amplitude of 100. Overlaying the two gives this result:


Hmmm, not great, but we have some detail in the elevation progression. Let's try different parameters:


Well, obviously this is going to take a lot more fine tuning. Here is another blending, but this one is more advanced because it is done in Photoshop. If I were to add some kind of normalization to make sure the border between water and land is more smooth, I could get (hopefully) better results:


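The last two experiments, overlaying a soft amplitude-10 pass on a hard amplitude-100 pass and the normalization I'd like to add around the waterline, might be sketched like this. The blend weights, band width, and function names are all my assumptions for illustration, not values from the project:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Overlay: a hard "shape" pass gives the coastline, a soft "detail" pass
// gives elevation texture. Both inputs are raw noise values in [-1, 1].
double overlay(double shapeNoise, double detailNoise) {
    double shape  = std::clamp(shapeNoise  * 100.0, -1.0, 1.0); // amplitude 100
    double detail = std::clamp(detailNoise * 10.0,  -1.0, 1.0); // amplitude 10
    return 0.7 * shape + 0.3 * detail;  // blend weights: guesswork to tune
}

// Normalization idea: remap heights inside a narrow band around sea level
// through an odd smoothstep, flattening the terrain right at the waterline
// so the water/land border becomes smoother.
double softenCoast(double h, double seaLevel, double band) {
    double t = (h - seaLevel) / band;         // signed distance in band units
    if (t <= -1.0 || t >= 1.0) return h;      // outside the band: untouched
    double s = t * t * (3.0 - 2.0 * std::fabs(t)); // smoothstep magnitude
    return seaLevel + (t < 0 ? -s : s) * band;
}
```

Running every height through `softenCoast` after the overlay would hopefully give the smoother water/land border mentioned above, without touching the terrain far from the coast.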
Total number of lines of code: 267. One question before I leave: do you think it would provide some value if I were to post the source code for the examples above?

3 comments:

  1. "For the first topic I though I'd start with one thing that is missing and is very apparent that it is missing: "owerworld" generation. I need a larger and fairly realistic land mass from where you can select you embark location."

    Why?

  2. A very big piece that I see missing here is animation. For a world like DF with lots of things moving around per frame, if you intend to have smooth animation you will end up doing lots of transformations. We are talking about moving every vertex in every object you are animating, every frame. Whether you do these on the GPU or CPU, this can become very expensive. You need to think of a way to deal with this!

  3. This really depends on how the final camera system is going to work, i.e. whether you will actually have a huge section of the world in view at any given time. You would not have to update the vertex positions of objects out of view; only their frame-time information would need to be updated, which is not costly at all (update the vertex positions only for objects in the view frustum, but the frame-time information for everything). There is also the practice of shallow copying to save on memory costs, where you keep a single copy of the base mesh and transform it to the position of each instance (you'd still have to transform each vertex).
