“We have better maps of the Moon and Mars than we do of most of our seas, oceans and coastal areas. It is almost impossible to think, in the 21st century, that governments would produce sustainable economic development plans on the shore with no maps. But we seem to be doing it at sea, and we seem to have plans to continue. I suggest that is not a good thing.”
So said International Hydrographic Organization (IHO) president Robert Ward in 2013, drawing attention to the state of survey of our seas and oceans. But what’s the solution?
Crowd-sourced bathymetry (CSB) began in the early 2000s to meet the needs of fishermen. It works by having a fleet of vessels log GPS position and depth data as they go about their normal activities; the logs are then shared and processed to provide bathymetry with better coverage and higher accuracy than a single vessel could achieve.
Since then, it has slowly been gaining acceptance in the hydrographic community. Indeed, the IHO has set up a working group to create a ‘cookbook’ of guidelines for hydrographic offices using CSB.
We estimate that there are some 10 million seagoing vessels globally, from RIBs to super tankers. Each type of vessel has its own habitat – workboats are limited to their port or windfarm, fishing vessels just go to fishing grounds, and ships just ply between commercial ports, for example. Each could contribute in its own area, and together they could produce a huge amount of useful bathymetric data.
That’s not to say CSB is easy. The first step is collecting data, and the first step in collecting crowd-sourced data is finding the crowd. Indeed, an often-underestimated task in any crowd-sourcing activity is recruiting the crowd and keeping them active. It may be that they are part of a group that wants the data – for example, commercial fishermen using the Olex fishing and navigation software, or the fleet of workboats for a port or windfarm. Otherwise, one has to find other motivators, such as contributing to the common good, logging data for purposes other than bathymetry, or wanting to use their participation for good PR.
Whatever their motivation, if the crew are going to contribute data, contributing needs to be made as simple and straightforward as possible, and at zero or minimal cost, or they won’t participate for long, if at all. To this end, a number of different methods should be available to meet different user requirements. Most of TeamSurv’s fleet currently use a data logger with a USB stick, with data being uploaded once ashore, but we can also use logs from an increasing number of navigation packages. For small craft, we will have an app with a Wi-Fi interface to the instruments later this year, whilst we have some larger craft streaming the data over VSAT. We are also conducting a trial using AIS with ExactEarth.
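As a rough illustration of what such logging involves, the sketch below pairs NMEA 0183 position (GGA) and depth (DPT) sentences into a CSV track log. This is a minimal Python sketch with illustrative names, not TeamSurv’s actual logger or upload format.

```python
# Minimal sketch of a CSB track logger: pair NMEA 0183 position (GGA)
# and depth (DPT) sentences and write them out as CSV for later
# upload. Real loggers also record heading and speed, validate
# checksums and handle many more sentence variants.
import csv
import sys

def nmea_fields(sentence):
    """Strip the checksum and split an NMEA sentence into fields."""
    return sentence.strip().split("*")[0].split(",")

def to_decimal_degrees(value, hemisphere, degree_digits):
    """Convert NMEA ddmm.mmmm / dddmm.mmmm coordinates to signed decimal degrees."""
    degrees = int(value[:degree_digits]) + float(value[degree_digits:]) / 60.0
    return -degrees if hemisphere in ("S", "W") else degrees

def log_track(nmea_lines, out_file):
    writer = csv.writer(out_file)
    writer.writerow(["utc", "lat", "lon", "depth_below_transducer_m"])
    fix = None  # most recent (utc, lat, lon) from a GGA sentence
    for line in nmea_lines:
        fields = nmea_fields(line)
        if fields[0].endswith("GGA") and len(fields) > 5 and fields[2]:
            lat = to_decimal_degrees(fields[2], fields[3], 2)
            lon = to_decimal_degrees(fields[4], fields[5], 3)
            fix = (fields[1], lat, lon)
        elif fields[0].endswith("DPT") and len(fields) > 1 and fields[1] and fix:
            # DPT field 1 is the depth below the transducer in metres
            writer.writerow([fix[0], fix[1], fix[2], float(fields[1])])

if __name__ == "__main__":
    log_track(sys.stdin, sys.stdout)  # e.g. pipe in a recorded NMEA file
```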
Processing the data
In the same way that specific algorithms have been developed for multibeam data, new algorithms are needed to make the most of crowd-sourced data. In contrast to the historic surveying practice of putting a great deal of effort into taking fewer measurements but making each measurement as accurate as possible, CSB produces a mass of noisy, lower quality data that must be treated statistically to get the highest accuracy. The large number of data points collected means this is a problem that needs to be fully automated.
Two approaches have been taken. Systems geared towards the needs of fishermen place great importance on providing real-time data at sea so it can be used immediately by fishermen. By contrast, TeamSurv doesn’t need this real-time capability, and our focus is greater accuracy. We have developed algorithms that produce this using additional sources, such as tide gauges, and the greater processing power of a cloud-based network of computers. These can take time to get the best results, with some tide gauges releasing data monthly or even just once a year. However, on harvesting this new data, our servers automatically identify the affected areas, recalculate the results and issue new bathymetric data.
As many as possible of the standard survey processes are carried out, although a different approach is sometimes used, as the sensors available are different. All the processing is carried out after data collection.
Here are the corrections carried out in TeamSurv’s data processing chain (a simplified sketch of the reduction follows the list):
- Transducer depth and transducer/GPS antenna offset are measured during an initial calibration exercise, which also checks the depth sounder against a measured depth.
- All data points are subjected to quality assurance, with bad data points from errors in time, position or depth filtered out.
- Vessel motion is not corrected for directly, as most vessels do not have the necessary sensors, but the larger beam angle of depth sounders for general navigation removes most of this error.
- For ships, squat is corrected for, as well as draft changes from taking on and discharging cargoes.
- Tides are corrected for, initially using tide predictions (an offshore gridded model combined with coastal tidal stations) and subsequently using tide gauge differentials.
- Speed of sound is corrected for using a global monthly speed of sound atlas we have developed from 3D oceanographic datasets of salinity and temperature, with higher resolution in coastal waters.
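To make the chain concrete, here is a deliberately simplified sketch of reducing a single sounding to chart datum. The function, its inputs and the example values are illustrative placeholders, not TeamSurv’s production code, which applies the fuller set of corrections listed above.

```python
# Simplified illustration of reducing one logged sounding to chart
# datum. Placeholder inputs only - the real chain also includes QA
# filtering, squat models, tide gauge differentials and a monthly
# speed of sound atlas.

def reduce_sounding(depth_below_transducer_m,
                    transducer_draft_m,
                    tide_height_m,
                    actual_sound_speed_ms,
                    assumed_sound_speed_ms=1500.0):
    """Return depth below chart datum for one sounding."""
    # 1. Speed of sound: the sounder assumes a fixed speed (here
    #    1500 m/s), so scale the raw depth by the true local speed.
    depth = depth_below_transducer_m * (actual_sound_speed_ms /
                                        assumed_sound_speed_ms)
    # 2. Transducer depth: add the transducer's depth below the
    #    waterline, from the calibration exercise (plus any squat
    #    or draft change for ships).
    depth += transducer_draft_m
    # 3. Tide: subtract the height of tide above chart datum.
    depth -= tide_height_m
    return depth

# Example: 12.4 m on the sounder, 0.8 m transducer draft, 1.9 m of
# tide, local sound speed 1488 m/s -> about 11.2 m below datum.
print(round(reduce_sounding(12.4, 0.8, 1.9, 1488.0), 2))
```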
With factors such as speed of sound or vessel motions, it must be remembered that with crowd-sourced data, we aim to build up a high data density. As long as any errors have a zero mean, they will average out once enough data is gathered. This is in contrast to a single survey, where each measurement must be of the highest quality.
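To put a number on that: under the idealised assumption of independent, zero-mean errors with standard deviation σ, the error of the mean of N soundings in a cell falls as σ/√N, so 100 soundings each good to 0.5m would average to roughly 0.05m. Real sounding errors are neither fully independent nor normally distributed, which is one reason robust statistics are used in the gridding described below.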
Once the tracks are reduced to chart data, they are combined. We have developed a statistical method that uses robust statistics to cope with the non-normal distribution of the data and reduce the effect of outliers, providing both a depth value and a data quality metric for each grid cell. An adaptive grid is generated, with lower resolution in areas where the data is sparse and/or less consistent, and higher resolution where there is more, consistent data. Also, no data is output if the data quality threshold cannot be met.
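As one illustration of the robust approach, a median with a median absolute deviation (MAD) spread estimate per cell resists outliers far better than a simple mean. The sketch below is a generic example with assumed parameter values, not TeamSurv’s actual algorithm, which also adapts the grid resolution to the density and consistency of the data.

```python
# Illustrative robust gridding of reduced soundings: the median gives
# the depth value, the MAD gives a quality metric, and cells that are
# too sparse or too inconsistent produce no output. Cell size and
# thresholds are arbitrary example values.
from collections import defaultdict
from statistics import median

def grid_soundings(soundings, cell_size_deg=0.001,
                   max_mad_m=0.5, min_points=5):
    """soundings: iterable of (lat, lon, depth_m) tuples.
    Returns {cell: (median_depth_m, mad_m)} for cells that meet the
    quality threshold."""
    cells = defaultdict(list)
    for lat, lon, depth in soundings:
        key = (round(lat / cell_size_deg), round(lon / cell_size_deg))
        cells[key].append(depth)
    grid = {}
    for key, depths in cells.items():
        if len(depths) < min_points:
            continue  # too sparse: no output for this cell
        med = median(depths)
        mad = median(abs(d - med) for d in depths)
        if mad <= max_mad_m:  # quality threshold met
            grid[key] = (med, mad)
    return grid
```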
As data comes in, the reduced tracks are available within minutes of being uploaded, and the grid is typically updated within 12 hours, in a fully automated process. From this grid, a DTM is generated, from which all the usual bathymetry products can be produced as required.
How good are the results?
Of course, the only way to measure the quality of a survey is to compare it against a known good, contemporaneous survey. To this end, we have carried out validation exercises in the UK, and with the port of Klaipeda in the Baltic. We found that 95% of our results are within 0.2m of the multibeam survey in the flat, non-tidal waters of Klaipeda, and within 0.8m in deeper, open waters with an extended tidal range and waves.
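The statistic quoted above is essentially the 95th percentile of the absolute differences between co-located CSB and reference depths. A minimal sketch, assuming the two surveys have already been reduced to aligned lists of cell depths:

```python
# 95th percentile of |CSB depth - reference depth| over co-located
# grid cells - the kind of figure quoted for the Klaipeda comparison.
def error_at_95(csb_depths, reference_depths):
    diffs = sorted(abs(a - b) for a, b in zip(csb_depths, reference_depths))
    return diffs[int(0.95 * (len(diffs) - 1))]

# Toy example with four cells:
print(error_at_95([10.1, 9.8, 12.3, 11.0], [10.0, 10.0, 12.2, 11.1]))
```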
We have found that there is an interesting synergy between crowd-sourced and satellite-derived bathymetry (SDB). The General Bathymetric Chart of the Oceans (GEBCO) has long combined altimetry-based SDB with tracks from research vessels, but has done this using whatever data is available, as the mismatch in scale between the 10km grid of the SDB and the echo sounder tracks makes it a challenge to combine the datasets well. A better fit is between CSB and optical- and SAR-based SDB.
We did an initial study into this for the European Space Agency, in conjunction with Telespazio Vega, which showed that the approach had promise – combining the techniques extends the depth range (see Figure 1), CSB can provide the all-important ground truth data to SDB, and each can fill in gaps in the other’s coverage. The datasets are also similar in terms of resolution and accuracy. A further benefit is that there is often no need to deploy a survey vessel and associated crew in the area.
This combination is being explored fully in the BASE-Platform project, where we are running trials in the Wadden Sea (North Sea), Channel Islands (English Channel), Balearics (Mediterranean), Azores (Atlantic) and Mauritius (Indian Ocean). The data generated by these trials will be used by EMODNet, by the German waterways authority BAW, and by the Mauritius government in their tide and surge early warning system.
Choosing the application areas
CSB is just another tool in the hydrographic toolkit and like all tools, it is more suited to some tasks than others.
Its strengths are that it costs little, it gathers data in areas where vessels actually go, and once in place in an area, it offers a continuous resurvey. Its weaknesses are that it takes time to enrol the crowd and log sufficient data; targeting specific areas is a broad-brush affair, as coverage depends on the movements of the vessels; limitations in accuracy and bottom coverage mean that it cannot do better than S-44 Order 1b surveys; and maximum depths are about 100m for small craft or 2km for ships.
Considering these factors, the following are the primary areas where CSB can play its part:
- Use of off-the-shelf crowd-sourced data. In general, all TeamSurv’s data goes into a common data pool, and off-the-shelf data is available in areas we have covered, giving the user gridded depth data and data quality metrics. This is a quick, simple and very cost-effective way of getting data in an area of interest.
- Surveying of non-critical areas. For example, a harbour might survey areas outside shipping lanes every few years, and this could mostly be handled by CSB using the harbour authority’s boats as well as local craft.
- Surveying in areas where there is no adequate official chart data. This may be because a new port has opened up, because an existing port is in a location that is not being surveyed and the last survey was by lead line, or because a cruise liner is using a new anchorage. Again, as well as local boats in general, workboats, ships using the port or anchorage, and the launches used to ferry cruise passengers ashore can all contribute to the data collection.
- Surveying in developing countries. Where resources are scarce, and there is little existing bathymetric data, I suggest it makes sense to use tools like CSB to build up a ‘good enough’ baseline bathymetry, and then concentrate high-accuracy, high-cost resources such as multibeam surveys on areas where they are needed. In fact, I suggest that any capacity building activity should include setting up a local CSB project, as this also engages the local population in understanding their seas. Whilst many boats in the area may lack a GPS and depth sounder, we can provide a low-cost, all-in-one device on a pole that simply clamps to the gunwale or the transom.
- Monitoring changes in the sea bed. Once vessels are logging data in an area, this effectively gives us a continuous resurvey, so the data may be time-sliced to detect silting or scouring, or other changes in the sea bed (a minimal sketch of this differencing follows the list). Compared to a regular professional resurvey, this can give earlier warning of changes, albeit at a lower resolution. Examples of this include monitoring movements of channels or sand banks; scouring or silting in ports and harbours; and changes in depths over underwater cables and pipelines. In many of these cases there is a fleet of workboats that can gather data, supplemented by other vessels operating in the area.
- Use as a pre-survey tool for a professional survey. CSB cannot claim to meet the accuracy and resolution of a professional survey, and many applications do need these. But crowd-sourced data can be used in pre-survey planning, providing a low resolution baseline set of bathymetry, helping to identify possible hot spots and to plan the best use of the survey vessels, thereby making more effective use of high-cost resources.
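On the time-slicing point made under ‘Monitoring changes in the sea bed’, the processing reduces to differencing grids from two epochs and flagging cells that change by more than a threshold. A minimal sketch, with grids mapping cells to depth values and an arbitrary example threshold:

```python
# Difference two epoch grids (dicts of cell -> depth_m) and flag
# cells whose depth change exceeds a threshold. The 0.3 m threshold
# is an arbitrary example value.
def detect_seabed_change(grid_old, grid_new, threshold_m=0.3):
    changes = {}
    for cell in grid_old.keys() & grid_new.keys():
        delta = grid_new[cell] - grid_old[cell]  # positive = deepening (scour)
        if abs(delta) >= threshold_m:
            changes[cell] = delta
    return changes
```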
CSB is not a replacement for professional survey vessels, LIDAR or SDB. It is merely another tool for the hydrographer’s toolkit, well suited to some application areas, unsuited to others. As with all tools, users are naturally cautious when they first use CSB. But as projects trial its use, their confidence grows, as it has for LIDAR and SDB.
Tim Thornton is director of TeamSurv (www.teamsurv.com)