Progress report: Hills now look quite a bit more like hills.
As mentioned previously, the hills were looking pretty awful – even a 2048×2048 texture, when stretched over miles of landscape, has pixels larger than a house. Instead of using such a texture directly, it makes more sense to tile a smaller texture repeatedly over the landscape. To add variation, one can use a landscape-wide single-channel image to blend between two different textures. As it happens, the author of the height field I’m using for the hills (virtual-lands-3d.com) provided the hill model with a map named “flows” and a map named “rough”, which seemed to correspond to fertile and rocky areas respectively. After a little tweaking of their levels, I merged them into a two-channel texture.
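The tiling-plus-blend-map idea can be sketched roughly like this – a minimal Python stand-in for the shader, with scalar texel values in place of RGB, and all names hypothetical:

```python
def sample_wrapped(texture, u, v):
    """Sample a small tiling texture (a list of rows) with wrapped UVs."""
    h, w = len(texture), len(texture[0])
    x = int(u * w) % w
    y = int(v * h) % h
    return texture[y][x]

def blend_tiled(grass, dirt, blend_map, u, v, tiles=64.0):
    """Tile the detail textures `tiles` times across the landscape and
    lerp between them by the landscape-wide single-channel blend map."""
    g = sample_wrapped(grass, u * tiles, v * tiles)
    d = sample_wrapped(dirt, u * tiles, v * tiles)
    k = sample_wrapped(blend_map, u, v)  # 0 = all dirt, 1 = all grass
    return d + (g - d) * k
```

Note that a blend value of 0.5 gives exactly the problem described below: an even mix of the two textures at every texel, rather than patches of each.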
This technique pretty much solved the problems with variation in the distance – but then, so would the “one big texture” approach if I didn’t care about how things looked up close. The trouble is that under this scheme, “half dirt, half grass” means you get a semitransparent layer of grass everywhere.
Applying a threshold to the value doesn’t help much either, since the low variation rate just means you get a mile of grass, followed by a crisp cutoff into a mile of dirt. One solution to this is to overlay medium-scale noise on the blending value, but while that did add variation, the tiling became too obvious in the medium distance and the cutoffs close to the camera were still too stark. A better approach turned out to be embedding a gradient in the alpha channel of one of the tiling images, and comparing that to the blend value. In this way it’s possible to paint dark areas around the cracks in the dirt texture, so low blend values only display grass there, while brighter values mark areas where grass will only grow if the terrain is very grassy.
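Compared against the landscape-wide blend value, the painted alpha gradient acts as a per-texel cutoff. A tiny sketch of the comparison, with hypothetical names and scalar values:

```python
def grass_coverage(blend_value, dirt_alpha):
    """Hard per-texel cutoff: grass shows only where the landscape-wide
    blend value exceeds the gradient painted into the dirt texture's
    alpha channel. Dark alpha (painted around cracks) means grass appears
    there even at low blend values; bright alpha means grass appears only
    where the terrain is very grassy."""
    return 1.0 if blend_value > dirt_alpha else 0.0
```

So a moderately grassy blend value of 0.3 fills the dark cracks (alpha 0.1) with grass but leaves the bright areas (alpha 0.8) as bare dirt.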
The problem with cutoffs, though, is that they become meaningless in the distance. Only a handful of metres out from the camera, the cutoff texture has already begun to blur down towards middle-grey. The solution is simply to use a soft threshold that gets progressively softer as the distance increases. Even the grass in the foreground benefits from a little softening.
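One way to express that distance-dependent softening is a smoothstep whose band widens with distance – the foreground keeps a near-crisp cutoff while the far field degrades gracefully towards a plain lerp. The width constants here are illustrative assumptions, not values from the actual shader:

```python
def smoothstep(edge0, edge1, x):
    """GLSL-style smoothstep: 0 below edge0, 1 above edge1, smooth between."""
    t = min(max((x - edge0) / (edge1 - edge0), 0.0), 1.0)
    return t * t * (3.0 - 2.0 * t)

def soft_coverage(blend_value, dirt_alpha, distance,
                  base_width=0.02, width_per_metre=0.005):
    """Soft threshold whose half-width grows with distance, so the
    per-texel cutoff in the foreground fades toward a smooth blend
    far away (where the alpha gradient has mipped to middle-grey)."""
    w = base_width + width_per_metre * distance
    return smoothstep(dirt_alpha - w, dirt_alpha + w, blend_value)
```

Up close this behaves like the hard cutoff; at a few hundred metres the same inputs land somewhere strictly between full grass and full dirt.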
There are still a few things I’d like to do with the landscape shader. Currently it’s quite wasteful to sample the high-detail textures all the way out in the distance where they’ve blurred into a solid colour. If I were to create a proper heightfield-based landscape system rather than just building the model by hand, it would be quite easy to tweak where to switch to a less expensive shader.
In a similar vein, it would be nice to support normal maps on the near detail textures. One thing I didn’t mention above is that the hill model is a little too low-poly in the distance and doesn’t light very well, so rather than using its vertex normals I converted the original heightfield into a normal map and used that. If I were switching to a system that had a different shader and more-tessellated geometry in the foreground, the cost of generating tangent-space vectors could be paid once when the high-detail geometry is created.
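The heightfield-to-normal-map conversion is a standard central-differences pass; a rough sketch, where the `scale` factor relating height units to texel spacing is an assumption:

```python
import math

def heightfield_normals(height, scale=1.0):
    """Convert a heightfield (2D list of floats) into per-texel normals
    using central differences, clamped at the edges. `scale` is the
    (assumed) ratio of height units to texel spacing."""
    h, w = len(height), len(height[0])
    normals = [[None] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            dx = (height[y][min(x + 1, w - 1)] - height[y][max(x - 1, 0)]) * scale
            dy = (height[min(y + 1, h - 1)][x] - height[max(y - 1, 0)][x]) * scale
            # Normal is perpendicular to both surface slopes; the 2.0
            # accounts for the two-texel span of the central difference.
            nx, ny, nz = -dx, -dy, 2.0
            length = math.sqrt(nx * nx + ny * ny + nz * nz)
            normals[y][x] = (nx / length, ny / length, nz / length)
    return normals
```

A flat heightfield yields straight-up normals, and each component would then be remapped from [-1, 1] into [0, 255] for storage in a texture.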
Finally, it would be nice if the hills actually cast shadows. Relying on realtime shadowing techniques would probably look terrible, since they tend toward crisp edges which would look strange on huge, soft features like hills. One solution would be to pre-bake a soft lighting solution into another channel of the blending texture, and merge that with any realtime shadows at render time. If it were necessary to support the possibility of the sun direction changing, it would be better to create a shadow texture for them (probably via a large number of raycasts) when the program starts, and only update it if the sun moves. Either technique would need to balance softness against potential strangeness when a conventionally-shadowed object is standing in the penumbra casting its own crisp shadow, and it would be necessary to work out when and where non-hill objects could safely receive soft hill shadows. Given all these problems, I’m probably going to have to wait until I actually have shadow casting available for normal objects in the first place.
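The startup-time bake could be as simple as marching each texel of the heightfield toward the sun and checking whether the terrain rises above the ray. This sketch casts one hard ray per texel; producing the soft penumbrae the hills would need means averaging many jittered sun directions, and every name here is hypothetical:

```python
def bake_sun_shadow(height, sun_dx, sun_dy, sun_dz, max_steps=64):
    """March from each texel of the heightfield toward the sun; mark the
    texel shadowed (0.0) if any terrain sample rises above the ray,
    lit (1.0) otherwise. One hard raycast per texel - a soft bake would
    average many of these over jittered sun directions."""
    h, w = len(height), len(height[0])
    shadow = [[1.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            px, py, pz = float(x), float(y), height[y][x]
            for _ in range(max_steps):
                px += sun_dx
                py += sun_dy
                pz += sun_dz
                ix, iy = int(round(px)), int(round(py))
                if not (0 <= ix < w and 0 <= iy < h):
                    break  # ray left the heightfield: texel stays lit
                if height[iy][ix] > pz:
                    shadow[y][x] = 0.0  # terrain occludes the sun
                    break
    return shadow
```

On a strip of flat ground ending in a ridge, with a low sun shining from beyond the ridge, the flat texels come out shadowed and the ridge top stays lit.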
(Ideally, the embedded gradient’s values would be evenly distributed, so that a cutoff at a certain percentage gives exactly that percentage of coverage. Mine has far too many solid whites, but it gave decent results for this scene.)