One of the more amazing things to come out recently in the geospatial sphere is the coupling of GIS functionality with SAP HANA, the platform-as-a-service offering that delivers real-time analytics through in-memory processing. The speed at which this predictive analytical engine operates, coupled with Esri’s GeoEvent Processor for real-time data feeds in the context of location, truly is a game changer for the kinds of operations that can benefit from spatial analysis.
While cloud-based predictive analytics has been slow to take off in the business community, with many companies wondering how to apply it, others are exploring some eye-opening applications with clear business advantages. Because the database is column-oriented and performs its operations in RAM rather than against files on disk, and is accessed through the cloud, transactions and analytics run fast enough to support spatial queries on huge amounts of data in very little time.
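To give a feel for why column-oriented, in-memory storage speeds up spatial queries, here is a minimal sketch in plain Python: a bounding-box filter reads only the two coordinate columns and never touches the other attributes. The class and field names are invented for illustration; this is not SAP HANA's actual API or storage engine.

```python
# Illustrative sketch only (not SAP HANA's API): a column-oriented,
# in-memory table where a spatial bounding-box query scans just the
# coordinate columns, leaving all other attribute columns untouched.

class ColumnTable:
    def __init__(self, **columns):
        self.columns = columns  # each value is a list: one column per key

    def within_bbox(self, xmin, ymin, xmax, ymax):
        """Return row indices whose (x, y) falls inside the bounding box."""
        xs, ys = self.columns["x"], self.columns["y"]
        return [i for i, (x, y) in enumerate(zip(xs, ys))
                if xmin <= x <= xmax and ymin <= y <= ymax]

# Hypothetical utility assets with longitude/latitude coordinates.
assets = ColumnTable(
    asset_id=["p1", "p2", "p3", "p4"],
    x=[4.89, 4.91, 5.12, 4.88],
    y=[52.37, 52.38, 52.09, 52.35],
    material=["steel", "PVC", "steel", "cast iron"],
)

hits = assets.within_bbox(4.85, 52.30, 4.95, 52.40)
found = [assets.columns["asset_id"][i] for i in hits]
print(found)  # → ['p1', 'p2', 'p4']
```

The same principle, applied at scale in RAM with compressed columns, is what lets a query over millions of assets avoid reading whole records from disk.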
The use case presented at the recent Esri User Conference involved a utility company in The Netherlands that has more than 3 million electric customers and a nearly equal number of gas customers. With so many assets, analyzing the network was taking more than three hours per query to process. With this new approach, that response time was driven down to three seconds.
The possibility of such game-changing improvements without a complete rebuild of the system, or a massive investment in new software, makes this approach truly compelling. Simply by taking advantage of a database appliance accessed through the cloud, the utility now has the means to run real-time, interactive queries assessing all pipes in all buildings, prioritizing repair and fine-tuning system performance thanks to this remarkable change in speed. Again, it wasn’t a matter of new software or hardware investments, but of dropping in an on-demand engine, on the scale of swapping a Yugo engine for a Ferrari’s, and the business benefits of time and money savings have followed.
Sticking with the vehicle analogy, the ability to monitor current conditions, combined with information about the performance of all assets, is akin to the revolution that has taken place in the automotive world. The common controller area network (CAN) bus standard that emerged in the late 1980s, along with standards for various subsystems and control units, has allowed microcontrollers to communicate with each other and performance details to be recorded. With all this data on a vehicle’s complex machinery, diagnostics are greatly improved, and the common standards mean that any mechanic can access this information for lower-cost maintenance.
This new configuration of GIS analytics, with greatly improved speeds for analysis and the ability to combine and run a great many scenarios and queries in a short amount of time, holds promise for similar gains in managing complex networks. The complexity of managing such a network has meant a reactive rather than proactive approach for most maintenance, even though the impact on lives and livelihoods can be hugely significant, including impacts on a nation’s economy should outages or leaks occur. The risk a utility carries demands a new rigor that is now within reach, and the same holds true of other large-scale and complex management tasks where GIS has been applied, including city or resource management.
Monitoring or determining the impacts of events and variables that fall outside yesterday’s standard queries is the true promise of this technology. Now, rather than waiting hours for one query to return, you can get answers in seconds, which invites more complex queries that factor in all kinds of inputs. It is now quite possible to analyze the impacts of weather on the network in real time, or to incorporate trends witnessed via social media to account for human behaviors that might affect the grid, with human sensors providing input on unfolding events.
It’s also possible to factor in conditions that you may never have thought of, such as variability across asset locations down to soil types, or the performance of materials under certain environmental conditions. The more that is known about the network and the variables around it, the more that can be factored into the analysis as readings of current conditions course through the system and feedback returns through sensors and performance outcomes.
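As a hedged illustration of folding such variables into an analysis, the sketch below ranks assets for maintenance by combining a soil-type corrosion factor with a live pressure reading and asset age. The weighting factors, field names, and values are all hypothetical, invented for the example rather than drawn from any real utility model.

```python
# Illustrative only: combine an environmental variable (soil type) with
# sensor readings to rank assets for maintenance priority. The corrosion
# factors and field names below are hypothetical assumptions.

SOIL_CORROSION_FACTOR = {"clay": 1.4, "sand": 1.0, "peat": 1.6}

def risk_score(asset):
    """Higher score = higher maintenance priority."""
    soil = SOIL_CORROSION_FACTOR.get(asset["soil"], 1.0)
    return asset["pressure_drop"] * soil * asset["age_years"]

assets = [
    {"id": "p1", "soil": "clay", "pressure_drop": 0.8, "age_years": 40},
    {"id": "p2", "soil": "sand", "pressure_drop": 1.2, "age_years": 10},
    {"id": "p3", "soil": "peat", "pressure_drop": 0.5, "age_years": 55},
]

# Sort highest-risk first: an old pipe in corrosive soil can outrank a
# newer pipe showing a larger pressure drop.
ranked = sorted(assets, key=risk_score, reverse=True)
priorities = [a["id"] for a in ranked]
print(priorities)  # → ['p1', 'p3', 'p2']
```

The point of the sketch is only that once queries return in seconds rather than hours, scoring functions like this can be re-run continuously as new sensor readings arrive.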
Thanks to these analytical performance improvements, it is becoming possible to fine-tune systems based on their own feedback loops. With speeds reaching near real time, we gain greater trust in the current picture. This insight is also beginning to feed automated notifications and actions, making our systems work for us and finally paying dividends for all the hours it has taken to get them in place.