Next Generation, 4-D Distributed Modeling and Visualization
Executive Summary

Gaining a detailed tactical picture of the modern battlespace is vital to the success of any military operation. This picture is used to direct the movement of assets and materiel over rugged terrain, day and night, in uncertain weather conditions, while accounting for possible enemy locations and activity. To provide a timely and accurate picture of the battlespace, most modern command operations centers have access to a multitude of systems that draw on many different sources, including eyewitness reports, aerial photographs, sonar, radar, synthetic aperture radar, multispectral imaging, hyperspectral imaging, foliage-penetration radar, and electro-optic, infrared, and moving-target imaging data. These disparate and sometimes conflicting sources must be combined in a timely and accurate fashion to yield an overall view of the battlespace that is clear, concise, coherent, complete, and accurate.
Historically, tactical decision making was carried out on a sand table, i.e., a box filled with sand shaped to replicate the battlespace terrain. Today, these operations are carried out using detailed paper maps and acetate overlays, which can take many hours to print, distribute, and update. Most recently, an experimental 3D visualization system called Dragon has been developed at NRL to address a number of the military's visualization needs. While Dragon offers a vast improvement over existing systems, several significant limitations remain. First and foremost, Dragon is most useful in applications where several users collaborate around a workspace, such as a table; it is therefore unsuitable for the mobile soldier, pilot, or other military personnel who cannot carry a workbench and are limited to a lightweight PDA or HMD for 3D display. Second, the workbench can be enhanced with augmented reality techniques, whereby actual imagery and video from the battlespace are registered with computer-generated terrain data stored in geospatial databases and displayed as virtual environments. In a mobile scenario, this can help individuals identify where they are located and visualize where they are heading. Third, a battlefield necessarily involves uncertainty, whether due to sensor errors, processing algorithms, or unreliable communication, so ways must be found to represent and encode the confidence level attached to each piece of battlefield data. An important challenge is to present the level of uncertainty associated with each object in the virtual scene in a visually intuitive way without cluttering the display or causing information overload. Fourth, time must become part of the battlefield visualization system; it might be used to play back the previous 24 hours or to store and review the plans for the upcoming 24 hours. This necessitates the generation of 4D models, with three dimensions determining space and the fourth giving time. Fifth, distributed computing is the direction in which all military systems are moving; this will include remote person-to-person collaboration as well as distributing the computation across multiple platforms.
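As a minimal illustration of what a 4-D model entry might look like, the sketch below stores a time-stamped position history for one battlefield object and answers position queries at arbitrary times by linear interpolation, which is what a 24-hour playback would repeatedly do. The class and field names are hypothetical and illustrative only, not part of Dragon or any fielded system:

```python
import bisect
from dataclasses import dataclass, field

@dataclass
class Track:
    """Time-stamped position history for one battlefield object."""
    object_id: str
    times: list = field(default_factory=list)      # seconds, kept ascending
    positions: list = field(default_factory=list)  # (x, y, z) tuples

    def observe(self, t, pos):
        """Record a new (time, position) sample, kept in time order."""
        i = bisect.bisect(self.times, t)
        self.times.insert(i, t)
        self.positions.insert(i, pos)

    def at(self, t):
        """Return the interpolated position at time t (clamped at the ends)."""
        if not self.times:
            raise ValueError("no observations recorded")
        i = bisect.bisect(self.times, t)
        if i == 0:
            return self.positions[0]
        if i == len(self.times):
            return self.positions[-1]
        t0, t1 = self.times[i - 1], self.times[i]
        a = (t - t0) / (t1 - t0)
        p0, p1 = self.positions[i - 1], self.positions[i]
        return tuple(c0 + a * (c1 - c0) for c0, c1 in zip(p0, p1))
```

Playing back the previous 24 hours then amounts to stepping a clock value backwards and querying `at(t)` for every tracked object at each step.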
Based on the above requirements, we propose to investigate a distributed database system for battlefield visualization, tailored to the needs of future mobile military personnel. This database incorporates and presents the uncertainty associated with objects in the virtual scene, is four-dimensional, and is initially constructed, and subsequently updated, by registering sensor imagery to ground control points and reference imagery, with the additional input of maps and elevation data. The resulting 4-D model provides the scene context for the interaction, interpretation, and visualization needs of the users. Specific research issues we will address include, but are not limited to: algorithms for geo-registration and tracking; fast and accurate 4D model construction techniques; portable visualization; dynamic and universal data structures with fast updates for visualization; multimodal interaction; uncertainty representation, computation, visualization, and validation; and information fusion and uncertainty processing in distributed mobile networks. We plan to verify the above results experimentally by integrating them into a demonstration system.
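To make one of these research issues concrete, fusing two uncertain reports of the same quantity is often sketched as inverse-variance weighting: the more confident (lower-variance) report pulls the fused estimate toward itself, and the fused variance is smaller than either input. The function below is an illustrative sketch of that textbook technique, not code from the proposed system:

```python
def fuse(est1, var1, est2, var2):
    """Fuse two independent estimates of the same scalar quantity by
    inverse-variance weighting (the minimum-variance linear combination)."""
    if var1 <= 0 or var2 <= 0:
        raise ValueError("variances must be positive")
    w1, w2 = 1.0 / var1, 1.0 / var2
    fused_est = (w1 * est1 + w2 * est2) / (w1 + w2)
    fused_var = 1.0 / (w1 + w2)  # always below min(var1, var2)
    return fused_est, fused_var
```

For example, two equally confident reports of 10.0 and 14.0 (variance 4.0 each) fuse to 12.0 with variance 2.0, so the combined report is more confident than either source alone.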