Pix4Dmapper was first introduced to the market in 2014, and its latest version (4.0) made its debut at this year’s INTERGEO Expo in Berlin to an enthusiastic reception. For the first time, the desktop software incorporates photogrammetric machine learning tools with which users can automatically classify dense 3D point clouds into categories such as buildings, roads or vegetation.
This is just the beginning of the company’s quest to revolutionise contemporary photogrammetry workflows, and to enable many new ones, through machine learning. Ultimately, it will allow raw imagery to be converted into 3D reality models with attached semantic information.
In practice, this means that instead of having to manually inspect and measure 3D reality models, users will directly receive automatically generated answers to questions such as:
How many trees are within the project area and at what locations? What is their height and species?
What is the total road surface area in your area of interest?
What is the amount and distribution of roofs that are suitable for solar cell coverage?
How many cars are in your parking lot and at what locations?
Answering such specific questions will enable photogrammetric processing workflows to be connected directly to GIS databases, thereby speeding up and streamlining updates of vectorised information from newly acquired drone imagery.
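To illustrate how a classified point cloud could feed such a GIS update, the short Python sketch below counts points per class and writes hypothetical tree locations to a GeoPackage layer. The file name, class-code mapping, coordinates and coordinate reference system are assumptions for illustration only, not part of Pix4Dmapper’s interface.

```python
# Minimal sketch (assumed inputs): summarise a classified point cloud
# and push object locations into a GIS layer.
import laspy                      # reads LAS/LAZ point clouds
import numpy as np
import geopandas as gpd
from shapely.geometry import Point

las = laspy.read("classified.las")           # hypothetical export from the photogrammetry software

# Assumed ASPRS-style class codes; adjust to the actual export.
CLASS_NAMES = {2: "ground", 5: "high vegetation", 6: "building", 11: "road surface"}

codes, counts = np.unique(np.asarray(las.classification), return_counts=True)
for code, count in zip(codes, counts):
    print(f"{CLASS_NAMES.get(int(code), 'other'):>16}: {count} points")

# Example: write detected tree locations (assumed, pre-computed elsewhere)
# to a GeoPackage so a GIS database can be updated directly.
tree_locations = np.array([[532100.5, 5250321.2], [532110.9, 5250330.7]])  # placeholder coordinates
gdf = gpd.GeoDataFrame(
    {"species": ["unknown", "unknown"]},
    geometry=[Point(x, y) for x, y in tree_locations],
    crs="EPSG:32632",                        # assumed UTM zone
)
gdf.to_file("trees.gpkg", layer="trees", driver="GPKG")
```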
Work in progress
Of course, there is still work to be done. Machine learning techniques are only as good as the training data used to build the classification models. For this reason, Pix4D opted to give its users the tools to control and refine the classification. Just as a baby gradually learns how to see and interpret its environment, these machine learning techniques will evolve with the training data. The result will be more object categories and greater reliability.
As of today, professionals will primarily use the new machine learning-based point cloud classification to automatically generate Digital Terrain Models (DTMs). In the near future, this classification will form the basis for extracting buildings and modelling them as a semantic composition of geometry, including elements such as roofs, facades, windows, doors and balconies. Needless to say, Pix4D’s growing R&D teams in Lausanne, Berlin and San Francisco are dedicated to this challenge.
Trained algorithms
A first step in this direction has been the development of novel machine learning-based point cloud classification algorithms, which have been trained to recognise object classes based on geometry and pixel values. Today, Pix4D is collecting user inputs with which to train algorithms that can adapt to many new tasks, e.g. separating aggregate stockpiles from bare terrain, automatically measuring volumes with unprecedented accuracy, or automatically digitising newly constructed roads and buildings.
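As a rough sketch of the general idea only, and not Pix4D’s actual algorithm, the snippet below trains a per-point classifier on one simple geometric feature (height above the local minimum) plus colour, using scikit-learn; the coordinates, colours and labels are random placeholders.

```python
# Illustrative sketch only: per-point classification from geometric and
# colour features, in the spirit described above (not Pix4D's implementation).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
xyz = rng.uniform(0, 100, size=(5000, 3))       # placeholder point coordinates
rgb = rng.integers(0, 256, size=(5000, 3))      # placeholder per-point colours
labels = rng.integers(0, 4, size=5000)          # placeholder labels: 0=ground, 1=vegetation, 2=building, 3=road

# Simple geometric feature: height above the lowest point among the 10 nearest neighbours in plan view.
nn = NearestNeighbors(n_neighbors=10).fit(xyz[:, :2])
_, idx = nn.kneighbors(xyz[:, :2])
height_above_local_min = xyz[:, 2] - xyz[idx, 2].min(axis=1)

features = np.column_stack([height_above_local_min, rgb / 255.0])

X_train, X_test, y_train, y_test = train_test_split(features, labels, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```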
Bare earth terrain extraction
A considerable amount of hydrological and geological analysis needs to be performed on bare earth terrain models. In Pix4Dmapper, the point cloud classification function can be used to separate out all above-ground objects, and the result can be refined using the point editing tools, as pictured above left and right.
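A minimal sketch of this step, assuming the classified cloud has been exported as a LAS file with ground tagged as ASPRS class 2, keeps only the ground points and grids them into a simple bare-earth raster. The file name, class code and cell size are assumptions.

```python
# Sketch: derive a simple bare-earth grid (DTM) from ground-classified points.
# Assumes a LAS export where ground carries ASPRS class code 2.
import laspy
import numpy as np

las = laspy.read("classified.las")                 # hypothetical export
ground = np.asarray(las.classification) == 2
x = np.asarray(las.x)[ground]
y = np.asarray(las.y)[ground]
z = np.asarray(las.z)[ground]

cell = 0.5                                         # grid resolution in metres (assumed)
cols = ((x - x.min()) / cell).astype(int)
rows = ((y - y.min()) / cell).astype(int)

dtm = np.full((rows.max() + 1, cols.max() + 1), np.nan)
# Keep the lowest ground elevation per cell as the terrain height.
for r, c, h in zip(rows, cols, z):
    if np.isnan(dtm[r, c]) or h < dtm[r, c]:
        dtm[r, c] = h

print("DTM shape:", dtm.shape, "mean terrain height:", np.nanmean(dtm))
```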
Volume measurement
To get an accurate volume measurement, it is crucial to remove vegetation and human-made objects from the point cloud. With point classification, this step takes far less time and yields more reliable volume calculations.
Examples of this technique are pictured below.
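To make the arithmetic concrete, the sketch below sums (surface height minus base height) over grid cells, the usual cut-above-base calculation, once vegetation and man-made objects have been removed; the grid, cell size and base elevation are placeholder values rather than Pix4Dmapper output.

```python
# Sketch: stockpile volume as (surface height - base height) summed over grid cells.
# The gridded surface and base plane are placeholders, not software output.
import numpy as np

cell_size = 0.25                                    # metres per grid cell (assumed)
surface = np.array([[101.2, 101.8, 101.5],
                    [101.9, 102.6, 102.0],
                    [101.4, 101.7, 101.3]])         # heights of the cleaned (object-free) surface
base = 101.0                                        # fitted base-plane elevation (assumed constant here)

cut = np.clip(surface - base, 0.0, None)            # ignore cells below the base plane
volume = cut.sum() * cell_size ** 2                 # m^3: height difference times cell area
print(f"stockpile volume: {volume:.3f} m^3")
```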
Vegetation growth control
Overgrown vegetation is one of the leading causes of power line outages, so it is extremely important for an energy company to keep track of vegetation growth and trim it before it causes damage. With Pix4Dmapper’s point classification, the extracted infrastructure can be grouped and manually digitised for further analysis, as pictured above.
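One way such a follow-up analysis could be scripted downstream, an assumption rather than a Pix4Dmapper feature, is to flag vegetation points that fall within a clearance distance of the digitised power-line points, as sketched below.

```python
# Sketch: flag vegetation points within a clearance distance of power-line points.
# Both point sets and the clearance threshold are placeholders.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)
vegetation_pts = rng.uniform(0, 50, size=(2000, 3))     # points classified as vegetation
powerline_pts = np.column_stack([np.linspace(0, 50, 200),
                                 np.full(200, 25.0),
                                 np.full(200, 12.0)])    # digitised line corridor (assumed)

clearance = 3.0                                          # required clearance in metres (assumed)
tree = cKDTree(powerline_pts)
dist, _ = tree.query(vegetation_pts)                     # nearest power-line point per vegetation point
encroaching = vegetation_pts[dist < clearance]
print(f"{len(encroaching)} vegetation points violate the {clearance} m clearance")
```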
Christoph Strecha is Founder and CEO of Pix4D (www.pix4d.com), headquartered in Lausanne, Switzerland, with offices in Berlin, Germany; San Francisco, USA; and Shanghai, China.