
Mine eyes have seen the glory

21st August 2017 - 11:03

In previous editions of this series, I have discussed how to construct the hardware necessary to perform high-accuracy UAV mapping, as well as considerations regarding tying to a geodetic network. In this article, we’ll have a look at the workflows and tools necessary to extract products from UAV data – after all, for a services business, it is the mapping product, not the method of collecting data, that is of interest to the client.

I cannot overemphasise the importance of acquiring good data to begin with. I strongly recommend that you prepare two tailored, written checklists for each project that you undertake. The first should be directed at flight operations; the second should be focused on the project control, data collection and processing strategies.

It is good practice to plan control and check points (discussed in a previous part of this series) before arriving on-site. If you program these into a handheld GNSS receiver, you can simply use its ‘move to waypoint’ function to set control. I find it rather difficult to accurately estimate these positions once on site – things look much different from the ground than they do in a nice Google Maps planning image!
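As a minimal sketch of that planning step, the snippet below writes planned control and check point coordinates to a GPX file, which most handheld GNSS receivers can import and then navigate to with their ‘move to waypoint’ function. The point names, coordinates and file name are purely illustrative.

```python
# Sketch: write planned control/check points to a GPX file for a handheld
# GNSS receiver. All names, coordinates and the file name are illustrative.

planned_points = [
    ("GCP01", 34.70112, -86.58520),   # name, latitude, longitude (WGS84)
    ("GCP02", 34.70251, -86.58311),
    ("CHK01", 34.70198, -86.58402),
]

gpx_lines = [
    '<?xml version="1.0" encoding="UTF-8"?>',
    '<gpx version="1.1" creator="control-plan" xmlns="http://www.topografix.com/GPX/1/1">',
]
for name, lat, lon in planned_points:
    gpx_lines.append(f'  <wpt lat="{lat:.6f}" lon="{lon:.6f}"><name>{name}</name></wpt>')
gpx_lines.append('</gpx>')

with open("control_plan.gpx", "w") as f:
    f.write("\n".join(gpx_lines))
```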

A generic workflow for UAV mapping is depicted in Figure 1. Some of these steps may be omitted, depending on the product you are creating and the characteristics of the objects being mapped.

I have a strong preference for using a UAV equipped with a post-processed kinematic (PPK) direct geopositioning system. Real time kinematic (RTK) is okay so long as it supports raw data recording, enabling PPK mode. A PPK system will dramatically improve vertical accuracy while simultaneously reducing the amount of project control necessary to achieve a target accuracy level.

Keep very careful field notes. You’ll want to ensure you record information such as camera body and lens identification to aid in using the proper calibration data. For non-metric cameras (built-in or DSLR), you will always need to calibrate the body and lens as a unit.

Be mindful that you will not get a ground model in vegetated areas. Notice in Figure 2 the magenta points in the profile view. These were collected using a hand-held RTK plumb pole (‘RTK pogo’) and thus represent the true ground surface. Not only are these points below the false surface of the point cloud, they are also at random depths below it. You’ll need to either resort to LiDAR or survey these vegetated areas with the RTK pogo.

You should plan your mission such that you have a minimum of five images covering every area that you plan to map. It is a common error (at least on the first few projects) to plan a mission only out to the boundary of the project when, in fact, the flight must extend 100m or so (depending on camera focal length and flying height) beyond the boundary to ensure you get the recommended five-image overlap.
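The required buffer can be roughly estimated from the camera and flight parameters. The sketch below works through that arithmetic; the camera values and the rule of thumb used for the buffer are illustrative assumptions, not a recommendation for any particular system.

```python
# Sketch: rough estimate of how far flight lines must extend beyond the
# project boundary to keep five-image coverage at the boundary itself.
# Camera parameters below are illustrative only.

flying_height_m = 100.0      # height above ground level
focal_length_mm = 8.8        # illustrative small-format mapping camera
sensor_width_mm = 13.2       # across-track sensor dimension
sensor_height_mm = 8.8       # along-track sensor dimension
forward_overlap = 0.80       # 80% forward overlap

scale = flying_height_m / focal_length_mm            # ground metres per mm of sensor
footprint_across_m = sensor_width_mm * scale          # footprint width on the ground
footprint_along_m = sensor_height_mm * scale          # footprint length on the ground
photo_spacing_m = footprint_along_m * (1.0 - forward_overlap)

# Very rough rule of thumb: the last point with five-image coverage sits about
# half a footprint plus two photo spacings inside the end of the flight line.
buffer_m = footprint_along_m / 2.0 + 2.0 * photo_spacing_m

print(f"Ground footprint: {footprint_across_m:.0f} m x {footprint_along_m:.0f} m")
print(f"Photo spacing:    {photo_spacing_m:.0f} m")
print(f"Suggested buffer beyond boundary: ~{buffer_m:.0f} m")
```

With these example numbers the buffer works out to roughly 90m, which is consistent with the 100m-or-so figure quoted above.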

Generating point clouds

Our experience with generating point clouds is primarily with Agisoft PhotoScan and Pix4D. I greatly prefer a desktop application over a hosted solution because we tend to iterate on a proper solution (withholding individual photos, changing a priori exterior orientation and so forth). But whatever package you choose, be sure that it can individually weight a priori position information in each of X, Y and Z (both PhotoScan and Pix4D support this individual weighting). It is a good idea to blur test images – this is built into higher quality point cloud generation software – and remove those below an acceptable threshold if you have adequate redundant coverage. Pay particular attention to spatial reference systems, ensuring that you do not mix ellipsoid heights with geoid heights. The output of this step will be a point cloud in LAS format and an orthomosaic as a TIFF or JPEG.
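If your point cloud software does not include a blur test, a simple one can be improvised. The sketch below uses the common variance-of-the-Laplacian metric via OpenCV; the folder path and threshold are illustrative and would need to be tuned against known-sharp imagery from your own camera.

```python
# Sketch: a simple blur test using the variance of the Laplacian (OpenCV).
# Images scoring below the threshold are flagged for possible removal;
# the folder and threshold are illustrative and must be tuned per camera/site.
import glob
import cv2

BLUR_THRESHOLD = 100.0   # illustrative value

for path in sorted(glob.glob("flight_01/*.JPG")):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    if img is None:
        continue
    sharpness = cv2.Laplacian(img, cv2.CV_64F).var()
    flag = "BLURRY" if sharpness < BLUR_THRESHOLD else "ok"
    print(f"{path}: {sharpness:.1f} {flag}")
```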

Point cloud generation tools (PhotoScan, Pix4D for desktop and any number of cloud solutions) are not suitable for data cleaning and product extraction in anything but the simplest of sites. We use our own product, LP360, for all downstream processing.

A competent tool kit must include:

• Ability to measure check points

• Ability to shift/re-project the LAS data

• Tools for locating and tagging ‘noise’ points

• Tools for ‘classifying’ points (tagging as to their location in object space – ground, building, conveyor, noise…)

• Tools to create and edit 3D line work (LP360 includes an automated stockpile toe extractor, saving hours of time on this tedious task)

• Volumetric computation tools

• Cross-section and profile tools

• Tools to create and topologically thin topographic contours

• Tools to create raster elevation models

• Tools to create ‘intelligently’ thinned point clouds (these are called ‘model key point’ algorithms)

If you are engaged in a general practice of creating UAV-derived products, you will use most of these tools on every project!
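Many of these operations ultimately come down to reading a classified LAS file and acting on the classification codes. As a minimal sketch, assuming the laspy library (version 2.x) and an illustrative file name, the snippet below extracts the points tagged as ground (ASPRS class 2) into a separate file.

```python
# Sketch: pull ground-classified points (ASPRS class 2) out of a LAS file.
# Assumes laspy 2.x; the file names are illustrative.
import laspy
import numpy as np

las = laspy.read("site_cloud.las")
ground_mask = np.asarray(las.classification) == 2   # ASPRS class 2 = ground
print(f"{ground_mask.sum()} of {len(las.points)} points are classified as ground")

# Write the ground-only subset to a new file
out = laspy.LasData(las.header)
out.points = las.points[ground_mask]
out.write("site_ground.las")
```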

Downstream processing

The first step in downstream processing is to measure the horizontal and vertical accuracy of the point cloud. This involves comparing the known locations of surveyed check points to where they appear in the processed data (Figure 3). If you use correct procedures, you can perform relative computations such as volumes with no control at all, but I always recommend having a few check points to avoid embarrassing yourself when you deliver the product!
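The accuracy figures themselves are just root mean square errors of the check point residuals. A minimal sketch follows; the coordinate values are illustrative only.

```python
# Sketch: horizontal and vertical RMSE from check point residuals.
# All coordinate values below are illustrative.
import numpy as np

# Surveyed (known) and measured (from the point cloud/orthomosaic) check point
# coordinates in the same projected CRS: columns are E, N, elevation (metres).
surveyed = np.array([[5000.00, 8000.00, 212.50],
                     [5120.35, 8043.10, 214.02],
                     [5210.88, 7950.44, 210.76]])
measured = np.array([[5000.03, 7999.98, 212.46],
                     [5120.31, 8043.15, 213.96],
                     [5210.92, 7950.40, 210.71]])

diff = measured - surveyed
rmse_h = np.sqrt(np.mean(diff[:, 0] ** 2 + diff[:, 1] ** 2))   # horizontal RMSE
rmse_v = np.sqrt(np.mean(diff[:, 2] ** 2))                      # vertical RMSE
print(f"Horizontal RMSE: {rmse_h:.3f} m   Vertical RMSE: {rmse_v:.3f} m")
```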

Point clouds derived from imagery are inherently noisy. Consider the example of Figure 4. Note the points classified in purple in the profile view. These are caused by miscorrelation of the images in the region of the rock crusher. If you are producing a ground model or topographic contours for the customer, you will have to classify these points as noise so they will not distort the contour and ground models.

The amount of ground classification you will need to perform depends on the products your customer (or you, if this is for internal production) requires. If you are doing stockpile volumetrics, you will not need a ground model, since you have no interest in the data regions not involved in the volumetric computation. You will, however, need to completely clean the data that lie within the boundary of the toe. This cleaning comprises classifying points that are not part of the stockpile surface, such as low/high noise, mobile equipment such as loaders, and any overhead structures such as conveyors.

You will need good 3D vector editing tools to manage all but the simplest of stockpile toes. These are used to handle bins, mixed piles, piles on slopes and similar geometries. An example of a stockpile of moderate complexity is shown in Figure 5. This is a wood chip feeder stockpile at a pulp mill. All processing up to the point displayed in Figure 5 was accomplished by automated tools. This series of tools automatically found the chip pile toe (the blue 3D line), thinned the vertices, classified overhead points into the ‘conveyor’ (magenta) class and classified other points to the ‘ground’ (orange) class.

You can see that while these tools do a remarkably good job, there are a few areas that require interactive attention. Note in the profile view some points that are not on the pile but have been classified as ground. In addition, there are several noise points below the surface of the machinery area classified as ground. These will need to be interactively edited to the correct class to prevent them from distorting the volume computations. While the toe is complex, an inspection with 3D tools shows that it does follow the geometry of the edge of the pile and will not require editing to provide an accurate base model for the volumetrics. This is good since toe editing is typically the most time-consuming aspect of a volumetric project.
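Once the surface is clean and the toe is accurate, the volume itself is conceptually just a difference between the pile surface and the base model, summed over the area inside the toe. The sketch below assumes the two surfaces have already been gridded to matching arrays (the file names and cell size are illustrative); dedicated tools such as those listed earlier do this, and the TIN-based variants, for you.

```python
# Sketch: grid-differencing volume of a stockpile. 'surface' and 'base' are
# illustrative 2D arrays of elevations (metres) on the same regular grid;
# cells outside the toe are assumed to be masked out as NaN.
import numpy as np

cell_size = 0.25                       # grid spacing in metres (illustrative)
surface = np.load("pile_surface.npy")  # cleaned pile surface elevations
base = np.load("pile_base.npy")        # base model interpolated from the toe

diff = surface - base
diff = np.where(np.isnan(diff), 0.0, np.clip(diff, 0.0, None))  # ignore masked cells, keep fill only
volume_m3 = diff.sum() * cell_size ** 2
print(f"Stockpile volume: {volume_m3:.1f} cubic metres")
```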

A priori bottom models

There are many mining companies that have a priori bottom models of their sites. You will want to ensure that the software you use for analytics can support integrating these a priori data as model constraints. The data are typically supplied as mass points and/or break-lines.

We find that clients (particularly mining customers) often want 10cm or 20cm vector contours as a deliverable product.

There are two primary considerations when generating contours: having sufficient valid ground points to accurately support the model and carefully keeping non-ground points out of the ground class. Generally, the best strategy is to very selectively add points to an originally empty ground class (a conservative approach) rather than doing a poor classification and trying to remove points.

Thus, start with an automatic ground classification tool with very conservative parameters and then ‘thicken’ the resultant sparse ground class where necessary. Figure 6 depicts the results of an automated ground classification (orange points). The point cloud is superimposed over the orthophoto in the top view portion of the image.

Note in the profile view that the automatic classifier has done a decent job of discriminating between ground (orange) and non-ground (grey) points. At first glance, it appears that the classifier has assigned roof points (top red arrow) to the ground class. However, a careful inspection in the profile view (lower red arrow) shows that these are actually points on the ground under the edge of the shed roof so they are correctly classified. Of course, once you have ground models and accurate 3D stockpile toes, the creation of volumes and topographic contours is a simple automated process.
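As a concrete starting point for the conservative first pass described above, the sketch below runs a ground filter with deliberately tight parameters through a PDAL pipeline. It assumes the PDAL Python bindings are installed; the parameter values and file names are illustrative starting points for a sparse but clean ground class, not tuned recommendations.

```python
# Sketch: a conservative automated ground classification using PDAL's SMRF
# filter (assumes the pdal Python bindings). Parameters and file names are
# illustrative; tighten or loosen them against your own sites.
import json
import pdal

pipeline_json = json.dumps({
    "pipeline": [
        "site_cloud.las",
        {
            "type": "filters.smrf",
            "cell": 1.0,         # grid cell size in metres
            "slope": 0.10,       # low slope tolerance keeps the result conservative
            "window": 12.0,      # maximum window size in metres
            "threshold": 0.30    # tight elevation threshold above the provisional surface
        },
        {
            "type": "filters.range",
            "limits": "Classification[2:2]"   # keep only points classified as ground
        },
        "site_ground_conservative.las"
    ]
})

pipeline = pdal.Pipeline(pipeline_json)
count = pipeline.execute()
print(f"{count} conservative ground points written")
```

The resulting sparse ground class can then be ‘thickened’ interactively in your point cloud editing tool where the automatic result is too thin to support the contour model.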

We have found that most clients want volumetric data delivered as an Excel spreadsheet. Our own company offers a map-based web cataloguing system (AirGon Reckon) that is used for job planning and general data management. Even though this provides a visual display of stockpile toes and volumes, it tends to be used in the quality control stage. The financial department typically just downloads the Excel sheets from the site.
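Packaging the volumes for that kind of delivery is straightforward; a minimal sketch using pandas (with openpyxl installed for .xlsx output) follows. The pile names and figures are illustrative.

```python
# Sketch: write stockpile volume results to an Excel deliverable with pandas.
# Requires openpyxl for .xlsx output; all names and values are illustrative.
import pandas as pd

results = pd.DataFrame({
    "Stockpile": ["Chip pile A", "Chip pile B", "Aggregate 3"],
    "Volume (m3)": [15240.2, 8310.7, 2045.9],
    "Surface area (m2)": [3120.5, 1980.3, 610.2],
})
results.to_excel("stockpile_volumes.xlsx", sheet_name="Volumes", index=False)
```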

Delivery

In summary, the products desired by mine and industrial site operators do not change simply because the sensor is being carried aloft by a UAV. Thus the best practices used in generating products remain similar to those of a LiDAR workflow and even a traditional airborne photogrammetric mapping flow. The point cloud generated by dense image matching can be a bit more challenging to work with, since it is much denser than most other data sources and miscorrelations show up as high/low noise. However, with a well-thought-out workflow and the proper set of processing tools, you will quickly be generating quality products!
