There is growing momentum behind the Google Glass wearable computing device, along with a bit of a backlash, even though the device has yet to become commercially available. The device promises an interesting future of hands-free computing, with much of the interaction happening as geospatial wayfinding of our world. While enhanced navigation is one component, documentation is another, and these devices promise to capture a whole new level of geospatial information.
Wearable computing, and such hands-free approaches as voice recognition systems, have been around for some time in the geospatial arena, but none have really taken off. The eyeglass form factor could change that, however, given the buzz around the Google brand and the company's deep geospatial offerings. This new hardware will have an impact on geospatial field data collection, with both pros and potential cons.
The full promise of Google Glass is the ability to access digital information more seamlessly as we navigate reality, overlaying information as well as allowing us to capture photos, video, and audio within context. It is this capture within context, with its potential for multi-dimensional and multi-sensory information tied to place, that is most interesting.
To date, the bulk of discussion about the device has centered on the personal interface and its use as a replacement for a smartphone. What is missing from this dialogue are the potential uses in businesses where tablets and phones have come up short. Taking the computing device out of hands that are needed for another task could easily become the leading use of such technology, and there is a broad range of uses that revolve around collecting information about place or navigating place with information overlaid upon reality.
There is a great opportunity for Google Glass to be harnessed in surveying and data collection workflows, easing the cumbersome issues of hand-controlled field data collection devices. Already, location application developers such as Foursquare are discussing their plans for development using the Google Glass Mirror API, which Google has yet to release. This open platform, with all its possibilities for integrated location-aware content, will almost certainly lead to the creation of new geospatial applications.
With a voice-controlled and location-aware screen directly in front of the eye, and the potential for geospatial enterprise connectivity, this device could become the go-to tool for taking GIS to the field. Additionally, capturing images with a glance and entering attributes by voice would leave hands free for other tasks, such as maintenance and repair work. The potential for these same glasses to convey the information in maintenance manuals, while also recording the time and scope of the maintenance performed, would mean a leap forward in efficiency. Being able to later access and recall these visuals, with their voiceover describing decisions and the complete context in which they were made, would streamline future operations in much the way we now rely on Google search to fill in knowledge gaps.
The ability of these devices to capture photos, video, and audio has already prompted some pre-emptive bans from establishments that want to protect patron privacy. This reaction may well grow into a firestorm, as a whole new level of public exposure is sure to accompany these devices. The trick will be to maintain privacy while still allowing individuals to use their eyeglass computers, which might also carry a prescription lens to correct vision or an integrated device to augment hearing.
Today's public access laws allow a broad range of data capture without the need for release forms or special permission. If the reaction against this device is strong enough, it could cause a backlash against anyone wearing one, regardless of purpose. Similar privacy concerns were raised when cameras were added to phones, and have even led to a variety of applications with self-destructing messages and photos. A hardware lock on functionality in particular locations would seem to go too far, but some solution to privacy in certain places seems inevitable.
Barrier-free computing has been a long-standing dream across a myriad of applications. While the current focus on Google Glass is on personal and social interaction, the platform has perhaps greater potential for streamlining operations, easing business communication, and improving the accuracy and detail of geospatial data capture. With the potential for easy integration with enterprise computing and other local devices, one could imagine a real-time kinematic (RTK) system augmenting the device's location for very precise positioning, improving our systems by orders of magnitude thanks to both this precision and the added context we are able to capture.