This year, two large geospatial events take place in the same week. GeoInt will display the latest geospatial technology geared toward a military audience, while INTERGEO will highlight integrated tools for surveying and geomatics. Because each event draws so many high-level attendees, the tradeshow floors feature forward-looking, costly proofs of concept that highlight R&D advancements. With these events on the horizon, it’s worth asking what new technologies we may see that aim to improve efficiency by adding automation.
Computer vision and machine learning (separately or together) have been at the heart of many of the more interesting displays for years, such as robotic mining devices or drones that capture and self-classify imagery. While it’s not hard to think of a huge number of surveying and mapping tasks that could be automated by combining these technologies, the ability to quickly process what can be sensed and seen, and then act upon it, isn’t yet efficient enough to replace humans. But it’s getting there.
Computer vision is a broad field that spans acquiring, processing, analyzing, and understanding images of the real world in order to produce decisions. Since GeoInt’s inception, the event has featured computer vision displays that capture human faces or weaponry to determine friend or foe. These systems have been integrated into security cameras to raise alerts when perimeters are breached, and to profile moving suspects and pass their details from camera to camera so that they aren’t lost. Advances in this type of computer vision go well beyond augmented reality, where handheld devices register images and provide background details to help you navigate.
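The perimeter-alert idea above can be sketched very simply: compare successive camera frames and flag significant pixel change inside a watched zone. The function name, region format, and thresholds below are invented for illustration; a production system would work on real video streams with far more robust background modeling.

```python
# Toy frame-differencing breach detector. Frames are modeled as 2D grids
# of grayscale values (lists of lists); all parameters are illustrative.

def detect_breach(prev_frame, curr_frame, region, threshold=30):
    """Return True if pixel change inside `region` exceeds 5% of its area.

    region is (row_start, row_end, col_start, col_end), the watched zone.
    """
    r0, r1, c0, c1 = region
    changed = 0
    for r in range(r0, r1):
        for c in range(c0, c1):
            if abs(curr_frame[r][c] - prev_frame[r][c]) > threshold:
                changed += 1
    area = (r1 - r0) * (c1 - c0)
    return changed / area > 0.05  # alert if >5% of the zone changed

# Two 4x4 frames: an "intruder" (bright pixels) enters the watched zone.
prev = [[10] * 4 for _ in range(4)]
curr = [[10] * 4 for _ in range(4)]
curr[1][1] = curr[1][2] = 200
print(detect_breach(prev, curr, (0, 4, 0, 4)))  # True
```

Chaining detections across overlapping camera regions is, in essence, how a suspect’s profile can be handed off from camera to camera.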
At INTERGEO, similar forays automatically detect and read such things as signage from mobile mapping platforms, or classify aerial imagery for land use planning and change detection. Lidar point clouds add detail for comparison through their 3D profiles, allowing greater confidence that an object’s parameters meet the conditions for classification. Of course, a great deal of computing power is needed to rapidly compare objects, particularly at the volume of lidar point clouds, but advancements are being made. As the capability to automatically classify images and points in all contexts increases, the automation of mapmaking and surveying will reach new levels.
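In its simplest rule-based form, classifying a lidar return might look like the sketch below, which labels points by height above ground. Real pipelines also weigh return number, intensity, and neighborhood shape, and the thresholds here are made up for illustration.

```python
# Toy rule-based point-cloud classification by height above ground.
# Thresholds (in meters) are illustrative, not from any standard.

def classify_point(height_above_ground):
    """Map a lidar point's height above ground to a crude class label."""
    if height_above_ground < 0.2:
        return "ground"
    elif height_above_ground < 2.0:
        return "low vegetation"
    elif height_above_ground < 5.0:
        return "high vegetation"
    else:
        return "building or canopy"

points = [0.05, 1.3, 4.2, 12.8]  # heights in meters
print([classify_point(h) for h in points])
# ['ground', 'low vegetation', 'high vegetation', 'building or canopy']
```

The computing challenge the article mentions comes from applying checks like this, plus neighborhood analysis, to billions of points per survey.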
Machine learning is a branch of artificial intelligence in which inputs from sensors or databases are run through algorithms to recognize complex patterns and make intelligent decisions based on the input data. The goal is to get computers to learn from experience instead of being explicitly programmed, with an emphasis on predicting outcomes. Imagery is a perfect input for pattern recognition, and in the GEOINT space there’s even a DARPA-funded initiative to invent new approaches to the identification of people, places, things, and activities from still or moving imagery.
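The distinction between learning from experience and explicit programming can be shown with a minimal sketch: a one-nearest-neighbor classifier labels a new observation by its closest labeled example, with no hand-written rules. The 2D feature vectors and class names below are invented for the example.

```python
# Minimal "learn from examples" classifier: one nearest neighbor.
import math

def nearest_neighbor(training, query):
    """training: list of (feature_vector, label) pairs; query: a vector.

    Returns the label of the training example closest to the query.
    """
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(training, key=lambda ex: dist(ex[0], query))[1]

# Hypothetical 2D spectral descriptors with land-cover labels.
examples = [((0.1, 0.2), "water"),
            ((0.8, 0.9), "vegetation"),
            ((0.5, 0.1), "bare soil")]
print(nearest_neighbor(examples, (0.75, 0.85)))  # vegetation
```

Adding a new labeled example changes future predictions without touching the code, which is the "learning from experience" the paragraph describes; production systems simply scale this idea up with richer features and models.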
In the geomatics arena there is a need to quickly classify such things as forest and vegetation types, soil, minerals, infrastructure, and other elements in order to speed mapping and site work. When computers gain the ability to sift through massive amounts of data, the tedium of mapping, work no human is ideally suited for, will be gone and greater insight will be achieved. In recent coverage, Google has touted its Street View street-level imagery collection as a means to ground-truth contributed map changes, yet human interaction is still needed to verify each and every user-generated change request. Machine learning offers hope for true automation, bringing us closer to an automatically updated map.
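Vegetation classification like that described above often starts from a spectral index. NDVI, computed as (NIR - Red) / (NIR + Red), is a standard formula; the class thresholds in this sketch, however, are purely illustrative.

```python
# NDVI (normalized difference vegetation index) from NIR/Red reflectance,
# followed by a crude, made-up thresholding into land-cover classes.

def ndvi(nir, red):
    """Standard NDVI formula; result ranges from -1 to 1."""
    return (nir - red) / (nir + red)

def label(ndvi_value):
    """Illustrative thresholds only -- real ones depend on sensor/season."""
    if ndvi_value < 0.1:
        return "bare/water"
    elif ndvi_value < 0.4:
        return "sparse vegetation"
    return "dense vegetation"

# (NIR, Red) reflectance pairs for three hypothetical pixels
pixels = [(0.05, 0.05), (0.35, 0.20), (0.60, 0.10)]
print([label(ndvi(n, r)) for n, r in pixels])
# ['bare/water', 'sparse vegetation', 'dense vegetation']
```

Running a rule like this per pixel across a whole scene is exactly the kind of tedious, repetitive sifting that machines handle better than people.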
Combining computer vision with machine control yields disruptive technologies such as an agricultural robot that crawls a field, identifies weeds, and dispatches them without the use of herbicide. Now add machine learning to that combination: imagery classified automatically, with human patrols or drones dispatched to deal with insurgent activity; or mines scanned and mineral extraction automated based on sensor input that verifies ore content and quality.
With such possibilities on the horizon, measurement and mapping will demand far less of the individuals who create maps. Instead, machines will create their own maps based on their tasks, and machine-to-machine communication will allow cooperative mapping that builds the knowledge base about our ever-changing world.