I have been working on a system to allow for dynamic zoom levels, not just the fixed ones. There are two reasons I want this. First, while I was experimenting with getting new graphics and talking to third parties, the gravity of the tile size choice became apparent. Doing isometric graphics that look good is hard. Working with small tiles is hard. Isometric tile edges follow very strict rules. Combine these and you get quite a difficult task. Artists complain that there is not enough space. Because of those rules you cannot scale tiles up easily, and scaling them down is also a lot harder than necessary. Due to these difficulties, asking for the same tile in two different sizes often doubles the cost. So if I want tiles designed by artists, I need to stick with one, at most two, sizes. This wouldn't normally be a problem: I have always had one size, with the zoom levels generated by dividing that size by 2 and 4.
Here is where the second problem comes in. 32 pixel tiles are too small for modern monitors. With 1080p monitors becoming quite common, you would have to sit really close to the screen or squint your eyes to see what is going on in the game. On the other hand, if you do not have such a large monitor, the large size is a disadvantage. There is no one size to rule them all. Zooming is a must. 3D engines do not have these problems because you do not work in pixels, and it is fairly simple to keep the same proportions.
So I tried to create a dynamic zoom system, where both the shapes and the content of tiles can be created procedurally, and for cases where more detail is needed, 2D textures can be applied to the surfaces of these shapes. Using these techniques I created an algorithm that generates the entire tileset for the game (leaving out some content for now).
But this approach has a whole lot of flaws:
- Applying textures is ugly. It will take a lot of time (if ever) before I can improve this. I will probably need software trilinear filtering inspired by 3D graphics.
- Procedural generation is not that easy. Every shape needs a separate algorithm. Sure, once you have done it a few times and built up a pile of algorithms that you can combine, it starts to become easier and easier.
- Procedural generation is slow. The total tileset for all sizes takes minutes to generate even on strong machines, so you cannot generate tiles on the fly, i.e. every time you zoom. You need to cache them on the hard disk.
- I could not find a way to edit textures with Irrlicht, or at least to tell it to use or convert an in-memory bitmap. Irrlicht actually has extremely poor 2D image manipulation capabilities if I am not mistaken, and I think this is intentional. So I need to create the procedural graphics with U++, save them to disk and load them with the Irrlicht API, making slow things even slower.
- You cannot cache them in RAM. It is not easy to tell, but even now DwarvesH has thousands of tiles. By the time it is done it will have at least 3 times as many (adding buildings, animals, ores, gems, …). With PNG compression, the cached tileset currently occupies 105 MiB of disk space. Of course, one cannot store PNGs in video memory. In raw bitmap/texture format the tiles occupy 240 MiB, and that is only with 24 bit colors. I can’t even imagine how much space these tiles would occupy with 32 bit colors. The math on that would be really crazy. *snark* Have you seen how my game looks? There is no way I can say with a straight face: “Yes, this game requires 512 MiB of video RAM.” So I must load from disk only as much as needed.
- Even if you load only as much as needed, the large number of tiles means that a lot is always needed. At big tile sizes, loading all the needed tiles from disk takes a while. Not that much, but enough that you do not get the smooth zooming you see in some games, where you touch the mouse wheel and everything zooms instantly.
I think these are the major disadvantages. Sure, 2D isometric is passé, but my engine is quite impressive technically, if I do say so myself. I’m sure it can be improved, and there are other people out there who can write at least as good an engine as mine, but this does not change the conclusion: my engine is very good and very fast. It can be ported in hours to any API that can draw a portion of an image to a given set of coordinates. I could easily do an SDL, Allegro, pure DirectX or any other hardware accelerated graphics port. It is quite fast in software mode too, but software is always going to be slower than hardware, pretty much by the definition of dedicated hardware components.
But with the new dynamic zoom, my engine does not feel that good any longer. It feels less capable technically speaking, even though it is smarter. I uploaded a first video, where you can see it in action:
In this video there are some strange stutters, and generally speaking I am not that happy with it. I spent over an hour trying to find the cause of the stutter, only to realize that it was not the fault of the game. After hours of working on the engine, with the IDE launched, countless compiler sessions, Opera, Firefox and Chrome open, and FRAPS and VirtualDub running, my system’s stability and load were not at their best. After a restart of Windows, the stutter disappeared. I did a second video, without stutter and at a higher resolution:
I do not know exactly how to proceed. The old engine is good. Dynamic zoom is challenging, and good in the end, but it will always be too slow.
But the thing that bothers me the most is that this highly tuned 2D engine that does so much is still technologically inferior to a half-assed 3D implementation.
I guess I really need to go with the 3D one. Polygons are more powerful than pixels. I have repeatedly failed to create a good 3D engine. Without prior experience, one cannot just sit down and write a 3D engine. Which is surprising, since one can just sit down and write a DF-inspired game.
So this new try with a 3D engine needs to be different:
- I will be working very slowly on it, taking time to learn the ropes first. Primary development will be centered on the 2D engine.
- I will be using some third party 3D engine, no longer creating something from scratch. Probably in the first phase, I will be retrofitting an existing project to be capable of displaying maps from my game.
- In the first phases I won’t care about speed. If I can create a good 3D engine that feels right but is too slow, I won’t mind. Things can be improved later.
- I won’t be afraid to ask for help!
- It will look ugly! OK, this part won't be different, then :P.
So in conclusion, 2D development will continue, only a little bit slower while I dedicate some resources to 3D. If someday the 3D engine becomes primary, I would hate having my effort in the 2D engine go to waste, so I need to figure out a way to keep it alive and useful. Suggestions are welcome.
Ohh, one more thing. I missed the 3rd of September window, but I promise that I won’t miss the 4th of October window. For that date I had scheduled the first version of the editor to be made available, and I’ll stick to this plan, even if I have to use RapidShare. Sure, things will be launched a little bit out of order, as the editor by itself is not that useful. I need to release at least a tech demo first so you can load up the stuff from the editor.
Anyway, baby steps.