If you’re like many geospatial professionals, the January 1998 speech by then-Vice President Al Gore at the California Science Center sparked a broader awareness of what geospatial technologies, working in concert, could one day become. The title, “The Digital Earth: Understanding our planet in the 21st Century,” set the tone for a talk rooted in problem solving and decision making. However, it was the more fanciful, far-forward vision of an immersive virtual reality experience that set an audacious goal for geospatial technology’s role in aiding understanding.
Geospatial technologies put problem-solving into context at any scale, from local and regional to national and even global levels. The need for such insights was outlined in the speech, with application examples spanning diplomacy, crime fighting, biodiversity preservation, agricultural productivity and climate change prediction. The speech also outlined some of the technological challenges and barriers, and many of those have fallen in the intervening years.
In the talk’s introductory paragraphs, Gore mentions the Landsat program’s capability to photograph the entire planet every two weeks while lamenting that the majority of those images are never seen and instead sit in silos. The statement speaks both to the data capacity of the era, when 30-meter Landsat imagery was the primary commercial source, and to the need for more open and interoperable means of making that data widely available.
The speech came well before the successful launch of IKONOS, the first high-resolution Earth imaging satellite, which collected 1-meter panchromatic and 4-meter multispectral imagery with a revisit rate of 1-3 days. That launch set off a global market that has greatly expanded, with single companies such as Planet Labs setting out to image the whole Earth every day with constellations of their own satellites. The pace of collection is only accelerating, with 179 Earth-observation satellites launched from 2005-2014 and 427 planned or projected for launch from 2015-2024.
The speech also reflected a high-level endorsement of the Open Geospatial Consortium, which has made remarkable strides in the sharing of geospatial information. That foundation of technological interoperability has been complemented by the founding of the Group on Earth Observations in 2002, which has done a great deal to coordinate and sustain the development of the Global Earth Observation System of Systems (GEOSS). So, on the fronts of data sharing and coordinated monitoring, we’ve come a long way.
While the capacity for data collection has exploded, there’s still a bottleneck in our ability to process and present this information in a way that unlocks insights. The commercial model has been effective in increasing that capacity, with the latest rounds of Silicon Valley-based entrepreneurial companies focused less on imagery itself than on information and insights.
These companies tout development teams with experience processing imagery at mobile Internet scale, handling the daily uploads of Twitter, Flickr and other sites that serve millions of users. Compared with millions of data sources, making sense of hundreds of satellites is a far less daunting scaling problem.
Advances in communication speeds are among the drivers cutting into information latency and backlog issues. Gore speaks to a frontier of networks reaching 10 gigabit/second speeds, and thankfully we continue to make inroads in that area. Today’s fastest wireless network communicates at 100 gigabits/second. Additionally, the Airbus Space Data Highway, which uses lasers to relay satellite-derived information at 1.8 gigabits/second, far surpasses the 320 megabits/second rate at which the first IKONOS satellite operated. High-speed laser links communicating across distances of 40,000 kilometers, delivering data latency of less than 30 minutes, are truly revolutionary.
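To put those link rates in perspective, a quick back-of-the-envelope calculation shows what the jump from 320 megabits/second to 1.8 gigabits/second means in practice. The 1-gigabyte scene size below is an illustrative assumption, not a figure from the article:

```python
def transfer_seconds(data_gigabytes: float, rate_megabits_per_s: float) -> float:
    """Time to move a payload of the given size over a link of the given rate."""
    data_megabits = data_gigabytes * 8 * 1000  # gigabytes -> megabits (decimal units)
    return data_megabits / rate_megabits_per_s

# Hypothetical 1 GB scene over the two downlinks mentioned above.
ikonos_s = transfer_seconds(1.0, 320)    # IKONOS-era downlink, 320 Mbit/s
laser_s = transfer_seconds(1.0, 1800)    # laser relay, 1.8 Gbit/s

print(f"320 Mbit/s: {ikonos_s:.0f} seconds")  # 25 seconds
print(f"1.8 Gbit/s: {laser_s:.1f} seconds")   # 4.4 seconds
```

Under that assumption the laser relay moves the same scene nearly six times faster, before even counting the reduced wait for a ground-station pass.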
Companies like Orbital Insight are using pattern recognition and other machine learning techniques to take the human out of the processing equation. Most recently, the company has turned its attention to the freely available Landsat archive to pinpoint the location and surface area of water on our planet at a global scale. Nor is this a static data set: the company repeats the exercise every two weeks, with plans to move to a weekly cadence.
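Orbital Insight’s actual pipeline isn’t public, but the core idea of flagging surface water in multispectral imagery can be sketched with a standard technique: the normalized difference water index (NDWI), computed from the green and near-infrared bands and thresholded per pixel. The band arrays, threshold and pixel size below are illustrative assumptions, not the company’s method:

```python
import numpy as np

def water_mask(green: np.ndarray, nir: np.ndarray, threshold: float = 0.0) -> np.ndarray:
    """Flag pixels as water where NDWI = (green - nir) / (green + nir) > threshold.
    Water reflects strongly in green and absorbs near-infrared, so NDWI runs high over water."""
    ndwi = (green - nir) / np.maximum(green + nir, 1e-9)  # guard against divide-by-zero
    return ndwi > threshold

def surface_area_km2(mask: np.ndarray, pixel_size_m: float = 30.0) -> float:
    """Total water area for a boolean mask, assuming square pixels (30 m for Landsat)."""
    return float(mask.sum()) * (pixel_size_m ** 2) / 1e6

# Toy 2x2 scene: top row reads as water (high green, low NIR), bottom row as land.
green = np.array([[0.30, 0.30], [0.10, 0.10]])
nir   = np.array([[0.05, 0.05], [0.40, 0.40]])
mask = water_mask(green, nir)
print(surface_area_km2(mask))  # 2 water pixels x 900 m^2 = 0.0018 km^2
```

Run over every Landsat scene in an archive, a per-pixel classifier like this is exactly the kind of embarrassingly parallel job that distributed cloud computing handles well.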
The raw power now available through the effectively unlimited computing of cloud providers makes all of this possible. With automation and distributed computing at play, far more information will be turned into insight-as-a-service on the commercial side and monitoring and policy guidance on the scientific side. So, we’ve come a long way on increasing both our imagery gathering and processing capacity, although we still have a long way to go before we’re able to harness intelligent agents to “find and link information about a particular spot on the planet,” as Gore suggests.
The idea of a virtual digital Earth that could be explored in both time and space through an immersive virtual reality headset, with a data glove and a voice-recognition interface, reflected technologies that were just emerging in 1998. While early prototypes were promising, the hardware was too expensive and clunky to reach wide adoption at the time.
Since then, technology has advanced rapidly and come way down in cost, from the Oculus Rift virtual reality platform launched this year for under $600 to voice recognition that delivers streamlined voice searches on our phones. The barriers to how we interact with this information have fallen dramatically of late.
While we can’t yet explore human or geologic history in an immersive environment as Gore envisioned, there have been pockets of immersive mapping and story map experiences that are starting to nibble at this opportunity. The idea of an explorable digital Earth gained great credibility with the advent of Google Earth back in 2005. We’ve continued to add to such capabilities with more data and a greater ability to explore, including the 360-degree immersion that Google Street View offers. Understandably, most of the added data has been at scales larger than local, but at the same time we’re all now equipped with very capable data-capture devices that inform our local understanding.
Certainly, the need for the insights afforded by a Digital Earth is far more pronounced today than it was back in 1998, well before Gore’s Oscar-winning documentary “An Inconvenient Truth” in 2006. That film put a spotlight on the impacts we face from global warming, although Gore will now tell you that we’ve since made amazing progress on the problem.
The vision and promise of the Digital Earth have weathered the years well. We’ve come a long way on the technologies that enable immersion in time and space and now we need to work more on creating experiences that bring this data to life for active exploration.
“How soon until we see a seamless high-resolution digital Earth?” Perspectives column, Nov. 18, 2014