From CPU to GPU

By [email protected] - 22nd June 2018 - 10:47

Remotely piloted aerial systems (RPAS), popularly known as drones, have quickly become an integral part of modern military operations. They can stay on station for far longer than manned aircraft at a high altitude while carrying complex sensor arrays, making them ideal for surveillance.

These benefits have been recognised by NATO, which had been searching since the late 1990s for a high-endurance alliance ground surveillance (AGS) system capable of providing high-quality monitoring of situations to support its operations, but had been unable to find a system capable of meeting all its core requirements. In 2010, after the Lisbon Summit, a tender was put out calling for a system capable of providing global drone surveillance coverage. A consortium of major defence contractors was selected to fulfil this tender, starting in 2012. The consortium included Northrop Grumman, which provided Global Hawk drones for the air component, and Airbus Defence and Space, which provided the ground stations.

The AGS system was designed as a platform for collecting data at a truly massive scale. In the words of an unnamed NATO official, the “AGS drone is a huge collector of data… a vacuum cleaner of data” that is capable of producing unparalleled intelligence for the alliance and member states. The data produced by these drones comes in the form of radar sensor data – including video, image and moving target data – which is gathered at a high altitude using the Global Hawk’s radar and sensor array. The data is then transmitted to the mobile general ground stations (MGGS), the development of which was led by Airbus Defence and Space’s Intelligence division. These are operated and deployed by six people, and are where the sensor data is fused and analysed. The information produced here provides forces with near real-time surveillance data for operational intelligence and mission planning.

Time is of the essence

However, visualising surveillance data from drones (particularly video data) and combining it with other data sets (such as terrain data or satellite imagery) in an analysis system at the speed and accuracy demanded by military settings is a significant challenge, because removing the effects of perspective and relief displacement to create planimetrically correct images requires intensive processing. Nor is this processing optional: while inaccuracy can cause significant problems even for civilian geospatial analysts, in a military setting any distortion can significantly hamper the delivery of intelligence, and thus the planning and success of missions, as well as posing a threat to the safety of personnel.

The correction process needs to be carried out on each image or video frame. So, when it comes to the video feed from the Global Hawk’s sensor array, you can imagine how much processing needs to be done. Traditional systems that rely on central processing unit (CPU) processing struggle to do this at speed because they process data sequentially, one computation at a time. Now imagine the mammoth amount of time that it would take to process the data produced by the so-called ‘vacuum cleaner of data’.
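To get a feel for the scale of the per-frame workload, consider a back-of-the-envelope calculation. The sensor resolution, frame rate and per-pixel cost below are illustrative assumptions, not published Global Hawk figures:

```python
# Back-of-the-envelope sketch of the per-frame correction workload.
# All numbers here are assumed for illustration only.

WIDTH, HEIGHT, FPS = 1280, 720, 30   # hypothetical video feed
OPS_PER_PIXEL = 50                   # assumed cost of one orthorectification
                                     # sample (terrain lookup, interpolation,
                                     # reprojection)

pixels_per_second = WIDTH * HEIGHT * FPS
ops_per_second = pixels_per_second * OPS_PER_PIXEL

print(f"{pixels_per_second:,} pixels/s")
print(f"~{ops_per_second / 1e9:.1f} billion operations/s for one feed")
```

Even for this modest assumed feed, the arithmetic runs to over a billion operations per second, and every additional feed multiplies it; a sequential CPU pipeline quickly falls behind, whereas the work is trivially parallel across pixels, which is exactly what a GPU exploits.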

The high-pressure nature of their work means time is not something that the NATO users of the AGS – whether in the MGGS or elsewhere – have. Whether on the tactical, operational, strategic or political level, decisions need to be made quickly and accurately, as intelligence could relate to natural disasters, piracy in the oceans, or monitoring weapons of mass destruction. Therefore, using a traditional CPU-based model was simply not an option.

A fusion reaction

Airbus Defence and Space approached Luciad as we had recently made a major technology breakthrough that could solve this issue. A key requirement was that the system would be able to fuse data from visualisation sensors, as well as imagery and video data from the Global Hawk drones, with complex terrain data and data from all interoperable C2ISR systems operated by NATO and by member states. This meant that any technology used needed to not only support dynamic orthorectification, but also defence geographic data formats. As well as this, the solution would have to support military symbology formats such as the NATO joint military symbology formats introduced in APP-6A and MIL-STD-2525B and C.

Having previously provided geospatial technology for other NATO systems, such as the Integrated Command and Control System (ICC) and the interim Geo Spatial Intelligence Tool (iGeoSIT), the team was able to meet the key requirements, as its application programming interfaces (APIs) were fully compatible with standard military formats. We also ensured that rapid modification was possible, should adaptation to additional formats and features be required.

Most importantly, however, was the breakthrough that had been made before the commencement of the AGS project, which allowed for the dynamic orthorectification of video and images. Throughout the 2000s, the defence geospatial industry had been aware that the increase in the velocity and variety of geospatial data would require a fundamental shift in the design of systems. This includes formerly static data becoming dynamic (satellite imagery, for example), as well as new sources of dynamic data becoming available (drones are an excellent example of this).

With this in mind, a shift to using the graphics processing unit (GPU) is under way. This means that systems can handle data (particularly images and videos) in near real-time, without a need for pre-processing.

In a real-time case, the data typically comes in over UDP (User Datagram Protocol) as a STANAG 4609 stream. This NATO standard is based on MISB 601. The video itself comes through as an MPEG-2 stream in which additional embedded metadata defines the platform location and orientation, as well as the intrinsic and extrinsic sensor parameters. The video and metadata streams are decoded on the fly and fed to the GPU in order to enable dynamic orthorectification.
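The metadata in such streams is packaged as KLV (key-length-value) triplets: a fixed-size universal key, a BER-encoded length, then the payload. The sketch below parses that general KLV framing; the key bytes and payload are made up for illustration and do not reproduce the real MISB metadata dictionary:

```python
# A minimal sketch of KLV (key-length-value) parsing, the packaging used
# for metadata embedded in STANAG 4609 / MISB streams. The framing shown
# (16-byte key, BER length) follows the general KLV pattern; the example
# key and payload bytes are purely illustrative.

def parse_klv(buf: bytes):
    """Yield (key, value) pairs from a buffer of 16-byte-key KLV triplets."""
    i = 0
    while i < len(buf):
        key = buf[i:i + 16]            # 16-byte universal label
        i += 16
        length = buf[i]
        i += 1
        if length & 0x80:              # BER long form: low 7 bits give the
            n = length & 0x7F          # number of subsequent length bytes
            length = int.from_bytes(buf[i:i + n], "big")
            i += n
        yield key, buf[i:i + length]
        i += length

# Illustrative packet: a made-up 16-byte key followed by a 3-byte payload.
packet = bytes(range(16)) + bytes([3]) + b"abc"
print(list(parse_klv(packet)))
```

In a live system this parsing runs continuously against the incoming UDP stream, with each decoded value (platform position, sensor orientation and so on) time-aligned to the video frames before being handed to the GPU.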

In addition to this, elevation data is required. This data typically comes in the form of a DTED, GeoTIFF or any other elevation data source. The GPU accesses the elevation data and combines it with the video frames and telemetry data. Algorithms then perform the orthorectification in real-time using parallel GPU code. By making use of the power of the GPU, tools can handle multiple drone video feeds in real-time and still maintain interactive visualisation rates of 60 frames per second (fps).
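The per-pixel geometry can be pictured as casting a ray from the sensor through each pixel and finding where it meets the elevation surface. The toy terrain model, camera position and ray-marching step below are illustrative assumptions; a production system evaluates this (with far more sophisticated numerics) in parallel on the GPU for every pixel of every frame:

```python
# A simplified sketch of the geometry behind orthorectification: march a
# view ray from an assumed sensor position until it intersects the terrain.
# The elevation model and all numbers are illustrative stand-ins.

def elevation(x, y):
    """Toy digital elevation model: a 100 m ridge on otherwise flat ground
    (a stand-in for real DTED or GeoTIFF elevation data)."""
    return 100.0 if 400.0 <= x <= 600.0 else 0.0

def ground_point(cam, direction, step=1.0, max_dist=20000.0):
    """Step along the view ray until it drops below the terrain surface."""
    x, y, z = cam
    dx, dy, dz = direction
    t = 0.0
    while t < max_dist:
        px, py, pz = x + t * dx, y + t * dy, z + t * dz
        if pz <= elevation(px, py):
            return px, py, elevation(px, py)   # ground point for this pixel
        t += step
    return None                                # ray never hit the terrain

# An off-nadir ray from 10 km altitude strikes the ridge before it would
# have reached flat ground -- this offset is the relief displacement that
# orthorectification removes.
print(ground_point((0.0, 0.0, 10000.0), (0.05, 0.0, -1.0)))
```

Each pixel's ray is independent of every other pixel's, which is why this workload maps so naturally onto thousands of parallel GPU threads.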

Algorithms can work in 3D, by draping the video on top of the terrain, and in 2D, correctly warping the video on top of the base layers. Any data sets can be combined – for example, vector data on street networks and place names and even CBRN sensor data – with drone videos to gain situational awareness. On top of this, users can add annotations onto videos and 2D and 3D maps. The functionality can also be used for mission debriefs where videos can then be opened from a file system or network servers.
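The 2D case, warping a frame onto map base layers, can be expressed as applying a 3×3 projective (homography) transform to each pixel coordinate. The matrix values in this sketch are illustrative, not derived from real sensor telemetry:

```python
# A minimal sketch of 2D projective warping: mapping a pixel coordinate
# into map space via a 3x3 homography in homogeneous coordinates. The
# matrix here is an illustrative pure translation, not real telemetry.

def apply_homography(H, x, y):
    """Map a pixel (x, y) to map coordinates using homogeneous coordinates."""
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    u = (H[0][0] * x + H[0][1] * y + H[0][2]) / w
    v = (H[1][0] * x + H[1][1] * y + H[1][2]) / w
    return u, v

# Identity plus a translation: shifts pixels by (100, 50) in map space.
H = [[1.0, 0.0, 100.0],
     [0.0, 1.0,  50.0],
     [0.0, 0.0,   1.0]]
print(apply_homography(H, 10.0, 20.0))  # -> (110.0, 70.0)
```

In practice the full perspective terms of the matrix (the bottom row) are non-trivial and change with every frame of telemetry, which is why the warp is re-evaluated per frame on the GPU rather than precomputed.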

Joining the front line

As Sumit Gupta (then of NVIDIA) explained in GeoConnexion International in 2014, the use of GPU computing significantly boosts performance to a level that is 75 times faster than CPU processing. This boost enabled the development of a key capability – an algorithm able to provide dynamically orthorectified video feeds from drones and other aircraft at 60fps. This provides dramatic benefits, particularly for NATO’s AGS project. It means that analysts are able to see live, continuous surveillance data while on the ground. They are also able to fuse this with other data in order to gain accurate insight into situations and crises as they develop, rather than having to wait for data to be processed.

This has the potential to enhance both the speed and accuracy of decisions. It was announced in 2017 that NATO would complete the acquisition of the AGS system for 15 allies (including the USA), and the system is currently being deployed.

Overall, the AGS project showcases the benefits of using GPU processing and provides a key example for those in the geospatial industry of how dynamic data from novel sources can be used. Due to this breakthrough, the NATO AGS project has been able to unlock the full potential of the detailed intelligence gathered by the drone fleet.

Bart Adams is director of products and innovation at Luciad (a Hexagon Geospatial company) (www.luciad.com)
