Wednesday, September 21, 2011

LL3DLGLD – 3 – Double strips

Let’s start with a short recap of this series: in part one we showed why you can’t create a world out of a lot of small objects. We used cubes, but this goes for any kind of object. Rendering was extraordinarily slow (3 FPS), not because the cubes had a lot of extra faces that were not visible yet still entered the rendering process (although this certainly was a huge performance bottleneck), but because we had a huge number of objects. Then in part two we started using strips with an arbitrary width, with one strip being one object. We got 30 FPS.

Today we’ll start by breaking the convention that one strip equals one object. By object I mean several things at once: a logical abstraction to help me organize stuff, a mesh, and a single mesh push operation. This means that every object will submit its entire mesh to the hardware in one operation. Using these objects that contain multiple strips, we render the 300x300 world again:
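To make that concrete, here is a minimal sketch in C++ of what “one object, many strips” can look like. This is not the actual engine code; the Vertex layout and every name below are placeholders I picked for illustration.

#include <vector>
#include <cstdint>

// Hypothetical vertex layout; the real engine's layout surely differs.
struct Vertex { float x, y, z, u, v; };

// One object = one mesh = one push to the hardware, containing many strips.
struct SectionMesh {
    std::vector<Vertex>   vertices;
    std::vector<uint16_t> indices;

    // Append one strip (a single quad made of two triangles) to the mesh.
    void AddStrip(float x, float y, float z, float width, float depth) {
        uint16_t base = static_cast<uint16_t>(vertices.size());
        vertices.push_back({x,         y,         z, 0.0f, 0.0f});
        vertices.push_back({x + width, y,         z, 1.0f, 0.0f});
        vertices.push_back({x + width, y + depth, z, 1.0f, 1.0f});
        vertices.push_back({x,         y + depth, z, 0.0f, 1.0f});
        // Two triangles per quad, sharing the four vertices above.
        uint16_t quad[6] = { base, uint16_t(base + 1), uint16_t(base + 2),
                             base, uint16_t(base + 2), uint16_t(base + 3) };
        indices.insert(indices.end(), quad, quad + 6);
    }
};

All the strips of a section end up in the same pair of buffers, so the whole section goes to the hardware in one push instead of one push per strip.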


Wow: 500 FPS! We went from 30 to 500 FPS, while using the same number of polygons, but reducing the object count. Using this method, we’ll divide our map into small rectangular sections. The engine will analyze each section of the map and create a mesh for it. Every time a change happens in a given section, the associated mesh will be updated, and if that is not possible, recreated from scratch. To better illustrate this, I’ll use a new color coding scheme, where each section has its own color (rather than continuing to color code strips):
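The bookkeeping around that can be sketched roughly like this, again with invented names and building on the SectionMesh sketch above: each section remembers that something inside it changed and rebuilds its mesh before the next render.

// Per-section bookkeeping; SectionMesh is the struct from the earlier sketch.
struct Section {
    int  x0, y0;          // top-left corner of the section, in map cells
    int  size;            // e.g. 30 for a 30x30 section
    bool dirty;           // set whenever a cell inside the section changes
    SectionMesh mesh;     // pushed to the hardware in one operation
};

// Called whenever a map cell changes: mark only the owning section dirty.
void MarkDirty(std::vector<Section>& sections, int cellX, int cellY) {
    for (Section& s : sections)
        if (cellX >= s.x0 && cellX < s.x0 + s.size &&
            cellY >= s.y0 && cellY < s.y0 + s.size)
            s.dirty = true;
}

// Before rendering, rebuild only the sections that actually changed.
void UpdateSections(std::vector<Section>& sections) {
    for (Section& s : sections) {
        if (!s.dirty)
            continue;
        s.mesh.vertices.clear();
        s.mesh.indices.clear();
        // ... analyze the cells of this section and call s.mesh.AddStrip(...)
        //     for every strip found ...
        s.dirty = false;
    }
}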


As one can see in the above picture, the world is now created with the help of only 100 objects. Each object has multiple strips and each strip is a polygon formed out of two triangles. The question one must answer now is how large a section should be. In a 3D engine, it is customary to define two cut-off planes at fixed distances from the camera. Everything that is outside the area constrained by the planes is not rendered. A very useful optimization! The problem is that the check is done on a per polygon basis, so if even a single point of your polygon is outside the area, the entire polygon is skipped. So if you choose an appropriately small section size, the polygons at the extremities, the ones close to the “horizon”, will be skipped. You want this, because those polygons won’t be visible and there is no use rendering them. But if the section is too large, this skipping process might bleed over too close to the camera and create holes in your world. In the early days of 3D engines, developers used a not-too-distant fog to hide the fact that you needed a relatively close cut-off plane to maintain performance.
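Just to spell out the rule described above in code (this is only a restatement of that rule for illustration, not how the hardware actually implements clipping): a polygon is skipped as soon as a single one of its vertices lands beyond the far cut-off distance.

#include <cmath>

struct Vec3 { float x, y, z; };

float Distance(const Vec3& a, const Vec3& b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// If even one vertex is past the far cut-off, the whole polygon is dropped.
bool PolygonIsSkipped(const Vec3* verts, int count,
                      const Vec3& camera, float farDistance) {
    for (int i = 0; i < count; ++i)
        if (Distance(verts[i], camera) > farDistance)
            return true;
    return false;
}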

Another thing one must consider is that because an object is a mesh, there is a maximum number of vertices and triangles that can be added to that object. This is a hardware limitation. So one must either choose a section size that is guaranteed to never go over this limit, or detect that you are about to go over it and create multiple objects.
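A sketch of the second option, assuming the classic 65535 vertex ceiling of 16-bit index buffers (the real limit depends on your hardware and index format) and reusing the SectionMesh sketch from earlier: the builder simply starts a new object whenever the next strip would not fit.

#include <vector>
#include <cstddef>

// Treat this value as an assumption; it is only the classic 16-bit ceiling.
const std::size_t kMaxVerticesPerMesh = 65535;

// Illustrative strip description.
struct Strip { float x, y, z, width, depth; };

// Build as many meshes as needed so that no single mesh exceeds the cap.
std::vector<SectionMesh> BuildMeshes(const std::vector<Strip>& strips) {
    std::vector<SectionMesh> meshes(1);
    for (const Strip& s : strips) {
        if (meshes.back().vertices.size() + 4 > kMaxVerticesPerMesh)
            meshes.push_back(SectionMesh());    // start a new object
        meshes.back().AddStrip(s.x, s.y, s.z, s.width, s.depth);
    }
    return meshes;
}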

A third thing to consider is that DwarvesH is very different from other games that use 3D. While in most games the meshes forming the world are fairly static, the very nature of what I am trying to achieve with this game implies changes to the world. There is a good chance that every cycle some small change happens. And thanks to time compression that skips ahead if nothing happens, this chance pretty much becomes 100%. So basically, every cycle you will be updating a mesh. So you must select a section size that can be updated in a short enough period of CPU time that your game runs smoothly.
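A quick way to sanity check a candidate section size is simply to time one rebuild and compare it against the frame budget. This is only a sketch: RebuildSectionMesh is a hypothetical stand-in for whatever does the real work, and the ~16 ms figure assumes a 60 FPS target.

#include <chrono>
#include <cstdio>

struct Section;                       // the section struct sketched earlier
void RebuildSectionMesh(Section&);    // hypothetical rebuild entry point

void TimeOneRebuild(Section& section) {
    using Clock = std::chrono::steady_clock;
    Clock::time_point start = Clock::now();

    RebuildSectionMesh(section);

    double ms = std::chrono::duration<double, std::milli>(Clock::now() - start).count();
    std::printf("section rebuild took %.2f ms (budget: ~16 ms per frame at 60 FPS)\n", ms);
}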

And a final consideration: large sections work well with the camera far away from the action, while close-up shots favor smaller sections, because the engine can do culling and eliminate small objects that are not visible.

There is a sweet spot, a section size that offers a good compromise between all these factors. Unfortunately, this sweet spot varies from engine to engine and from requirement to requirement, and you can only determine it experimentally and verify it by testing on a varied range of hardware. For starters, I’ll go with a 30x30 section, because this way I can easily build a 300x300 world. But section size does not have to be static. You can create an engine that dynamically creates sections of different sizes based on how much stuff you have in that area of the map.

But enough ranting! Take a look at this picture:


It looks the same, yet the framerate dropped a lot. This is because I am now rendering two planes:


The next step is to invert the order of the vertices. This way we signal the hardware not to render the polygons facing away from the camera, which are thus invisible. The second plane is no longer visible:
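Concretely, “inverting the order of vertices” just means emitting the two triangles of each quad with the opposite winding, so that with back-face culling enabled the hardware rejects whichever side faces away from the camera. A sketch, mirroring the AddStrip helper from the earlier SectionMesh sketch:

// Same quad, opposite winding: with back-face culling enabled, only the side
// whose triangles wind the agreed way (as seen from the camera) gets drawn.
void AddStripFlipped(SectionMesh& mesh, float x, float y, float z,
                     float width, float depth) {
    uint16_t base = static_cast<uint16_t>(mesh.vertices.size());
    mesh.vertices.push_back({x,         y,         z, 0.0f, 0.0f});
    mesh.vertices.push_back({x + width, y,         z, 1.0f, 0.0f});
    mesh.vertices.push_back({x + width, y + depth, z, 1.0f, 1.0f});
    mesh.vertices.push_back({x,         y + depth, z, 0.0f, 1.0f});
    // The indices are reversed relative to AddStrip, so the face flips.
    uint16_t quad[6] = { base, uint16_t(base + 2), uint16_t(base + 1),
                         base, uint16_t(base + 3), uint16_t(base + 2) };
    mesh.indices.insert(mesh.indices.end(), quad, quad + 6);
}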


But if we fly over to the other side, we see our second plane, while the first one becomes invisible:


The problem is that even though only one of the planes is ever visible, this still impacts our framerate. This is where we need to be clever and can’t rely on the GPU. Since the planes are parallel to the X0Y plane and we know the position of the camera, we could compute the angle and determine which of the planes is visible. We could do this every time the user changes the camera position and show or hide the appropriate plane. But this is more advanced stuff and I won’t be bothering with it right now.
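For the record, the test itself would be cheap. Since the planes are parallel to X0Y, their normals point straight along the Z axis, and the sign of a single dot product tells you which side of a plane the camera is on. This is a sketch of the idea only; the engine does not do this yet.

struct Vec3 { float x, y, z; };

float Dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// True if the camera is on the side the plane is facing.
bool CameraSeesFrontFace(const Vec3& pointOnPlane, const Vec3& normal,
                         const Vec3& cameraPos) {
    Vec3 toCamera = { cameraPos.x - pointOnPlane.x,
                      cameraPos.y - pointOnPlane.y,
                      cameraPos.z - pointOnPlane.z };
    return Dot(normal, toCamera) > 0.0f;
}

// Usage idea: whenever the camera moves, hide the plane for which this
// returns false and show the other one.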

One last thing to note: even though we now have one extra plane, the number of objects remains the same. We now have double the polygons. In case you were wondering, every object now has 240 vertices. This number won’t stay so low and constant for long.

Now we’ll add a very simple wall layer, simulating a section through a cliff side:


I kept the shape simple and aligned with section boundaries; next time we’ll work on this. The framerate took another hit, because the wall layer again uses two planes. This is where, in the future, the analyze step of the mesh conversion will figure out that two strips are in the same place but with opposite facing directions, and eliminate them both.
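One possible shape for that future step (strictly a sketch of the idea, not the engine’s actual analyzer): key every strip by its position and extent, and when two entries coincide with opposite facing, emit neither.

#include <map>
#include <tuple>
#include <vector>

// Illustrative strip description with a facing flag.
struct StripEx { float x, y, z, width, depth; bool facesUp; };

// Strips that occupy the same place but face opposite directions are never
// visible, so they cancel each other out and are dropped before meshing.
std::vector<StripEx> RemoveHiddenPairs(const std::vector<StripEx>& strips) {
    std::map<std::tuple<float, float, float, float, float>, int> seen;
    std::vector<bool> dead(strips.size(), false);

    for (int i = 0; i < (int)strips.size(); ++i) {
        auto key = std::make_tuple(strips[i].x, strips[i].y, strips[i].z,
                                   strips[i].width, strips[i].depth);
        auto it = seen.find(key);
        if (it != seen.end() && strips[it->second].facesUp != strips[i].facesUp) {
            dead[it->second] = true;             // coincident, opposite facing
            dead[i] = true;
        } else {
            seen[key] = i;
        }
    }

    std::vector<StripEx> result;
    for (int i = 0; i < (int)strips.size(); ++i)
        if (!dead[i])
            result.push_back(strips[i]);
    return result;
}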

Now we’ll remove the color coding and apply a different texture to the wall. Unfortunately, creating a mesh that uses two textures is far more complicated than what my current systems can handle, so I’ll be creating a new object for the wall sections:


The FOV and camera position are not well suited, so it is not that easy to tell what is going on. It also does not help that I only render the top and the bottom.

I need to figure out a way to use multiple materials in the same mesh.

2 comments:

  1. This is a really interesting series of posts. It would be very helpful if you could post some code along with each part.

    Could you explain why you render two planes for each layer? I guess it is so that something is still rendered when the camera is close to "edge on", but I'm not certain. It looks like there is some separation between them?

    I also assumed that the "cliff" layer was elevated above the "grass" layer but it's hard to tell from the screenshots.

  2. To syndicatedragon:

    Posting code might have been possible in the early stages of this series, but right now I have 2000 lines of code that are responsible for terrain rendering only and the number of lines is going to grow.

    I am rendering two planes for each layer because this is how layers are structured: a thin "floor" layer and a taller "wall" layer. The sub-layers in each layer can be made out of different materials.

    I'm hoping that I will be able to improve layer contrast in the future.
