“Train as you fight” – whether in executive training, mission planning, the armed forces, the police or emergency management: sooner or later every team needs training to carry out its duties and responsibilities. This training should take place under careful supervision and in circumstances that resemble the real situation as closely as possible. Some skills, such as maintaining situation awareness under stress, during rapid changes of the situation and in unfamiliar terrain, can only be learned through practical training.
But does training in the field offer an ideal environment even when realistic circumstances cannot be fully recreated, when weather and stress dull the attention of even the most experienced trainers, and when the day is simply not long enough for set-up work and equipment maintenance?
For these reasons, and also for reasons of cost, certain parts of real-life training have been transferred to virtual simulations. A common example of such a training system is Virtual Battlespace 2 (VBS2) by Bohemia Interactive Australia Pty Ltd. The system is used by the armed forces of many countries, but it has one important limitation: the standard terrains supplied with such simulators are no longer sufficient as soon as concrete actions are to be trained on concrete objects. In such cases, the terrain database has to be created manually, which is a laborious process even for skilled personnel.
And while the time needed for the manual generation of a simulation terrain database is usually lacking, the raw material for automatic generation often already exists or is easy to acquire – from a safe distance, without attracting attention, and with means available even to lower units in the chain of command, for example unmanned aerial vehicles such as ALADIN or MIKADO. The methods for deriving 3D terrain models from sensor data developed at Fraunhofer IOSB form the starting point for a process chain that can generate a highly up-to-date, usable simulation terrain database within a very short time and with minimal user interaction. The most important goal, besides the high degree of automation and the time saved, is to make the required simulation terrains available to every unit in the chain of command without having to rely on specialists.
Laser scanner data is particularly suitable input, since it directly measures 3D positions and, in many systems, also yields intensity values through reflectivity analysis. But aerial videos and images can and should also serve as a source; for these, 3D information must first be computed from the 2D measurements. Fraunhofer IOSB has extended well-established photogrammetric procedures to make them better suited to the analysis of video captured by unmanned aerial vehicles. It is thus now possible to compute depth maps from almost any number and combination of images without first rectifying them, as is usual in photogrammetry. The shapes occurring in the depth maps are analyzed and classified according to geometric criteria: differing depths along straight lines indicate buildings; irregular shapes most likely indicate trees or bushes. Available color information is used to reinforce these distinctions, and an analysis of contours in the images provides further cues for delineating buildings. Once a building outline has been approximated by a polygon, the slants of the roof surfaces can be determined from the gradients of the depth values, and the building is then “assembled” from its walls and roof surfaces. Figure 1 shows individual stages of this process for a reconstruction from aerial images. Recognizing individual geometric components such as “wall”, “roof” or “tree” is very valuable for integrating the model into the simulation system. While the approximate position and height of individual trees can be determined from crown diameters and heights, such measurements are not possible for closed forests, so standard values or conventions must be used instead. Any color information contained in the sensor data can be used for texturing the model.
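The roof-slant step described above can be sketched in a few lines. This is a minimal illustration under an assumption the article does not state explicitly – that each roof surface is roughly planar, so the mean depth gradient gives the tangent of its slant; the function name `roof_slant_from_depth` is hypothetical and not part of the institute's actual pipeline:

```python
import numpy as np

def roof_slant_from_depth(depth_patch, pixel_size=1.0):
    """Estimate the slant angle (degrees) of a roof surface from a depth map.

    Assumes the patch is approximately planar: the mean gradient of the
    depth values then equals the tangent of the slant angle.
    """
    gy, gx = np.gradient(depth_patch, pixel_size)
    mean_grad = np.hypot(gx.mean(), gy.mean())
    return np.degrees(np.arctan(mean_grad))

# Synthetic check: a plane rising 0.5 depth units per pixel along x
xs = np.arange(20)
patch = np.tile(0.5 * xs, (20, 1))
print(round(roof_slant_from_depth(patch), 2))  # 26.57
```

A real pipeline would first segment the depth map into roof regions (e.g. via the polygonal building outlines mentioned above) before fitting slants per surface.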
For this purpose, the model polygons are projected into the images and the enclosed areas are copied into the model as texture images. For trees and forests, textures for different seasonal appearances are used, as shown in Figure 5. It is also possible to import data on the width and course of roads, bodies of water and similar important objects that cannot be reconstructed from sensor data at all, or only partially. A typical example is shown in Figure 2: a road network imported from a vector map. Figure 3 shows a terrain database in the VBS2 mission editor, and Figure 4 a detailed view.
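The texture-extraction step – projecting a model polygon into an image and copying out the enclosed area – can be sketched as follows. This assumes a standard pinhole camera with a 3×4 projection matrix, which the article does not specify; the function names and the simple bounding-box crop (rather than a proper perspective rectification) are illustrative simplifications:

```python
import numpy as np

def project_polygon(vertices_3d, P):
    """Project 3D polygon vertices into pixel coordinates using a 3x4
    pinhole projection matrix P (illustrative camera model)."""
    pts = np.hstack([vertices_3d, np.ones((len(vertices_3d), 1))])
    proj = (P @ pts.T).T
    return proj[:, :2] / proj[:, 2:3]  # divide by homogeneous coordinate

def crop_texture(image, pixel_polygon):
    """Copy the axis-aligned bounding box of the projected polygon out of
    the image as a texture patch (a real pipeline would rectify the quad)."""
    x0, y0 = np.floor(pixel_polygon.min(axis=0)).astype(int)
    x1, y1 = np.ceil(pixel_polygon.max(axis=0)).astype(int)
    return image[y0:y1, x0:x1]

# Illustrative camera: focal length 100 px, principal point (50, 50)
P = np.array([[100.0, 0, 50, 0],
              [0, 100.0, 50, 0],
              [0, 0, 1.0, 0]])
# A 2 x 2 m facade quad, 10 m in front of the camera
facade = np.array([[-1.0, -1, 10], [1, -1, 10], [1, 1, 10], [-1, 1, 10]])
pix = project_polygon(facade, P)              # corners at (40,40)..(60,60)
texture = crop_texture(np.zeros((100, 100, 3)), pix)
print(texture.shape)  # (20, 20, 3)
```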
Figure 1: Model-based reconstruction of urban structures from photogrammetric depth maps
Figure 2: Import of geographical information (road networks and closed forests) from vector maps
Figure 3: An automatically generated terrain database, seen in the VBS2 mission editor
Figure 4: A close-up view of the automatically generated terrain database in VBS2
Figure 5: Seasonal tree texturing