
February 23rd, 2014
What impact will Google’s new mapping phone have on our digital realities?


Google has just released details of an initiative dubbed Project Tango that equips an Android smartphone with the ability to map and remember location. The phone builds on the company’s model of harnessing Android phone users, in combination with phone sensors, to inform applications; to date this capacity has populated maps with traffic volume and speed. This next-generation onboard sensor fusion adds 3D mapping capacity: the device uses both a motion tracking camera and a depth sensor to build a map of its surroundings.
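
At its core, that fusion means using the device’s tracked pose to place each depth reading in a common world frame. Here is a minimal sketch in Java (our own illustration, not Google’s code), assuming the pose is given as a rotation matrix and a translation:

    // Illustrative sketch (not the actual Tango API): fusing a depth reading
    // with the device pose to place a point in a shared world frame.
    public final class PointFusion {

        /** r: 3x3 rotation, row-major; t: translation of the device in the world frame. */
        public static double[] toWorld(double[] r, double[] t, double[] pCam) {
            return new double[] {
                r[0] * pCam[0] + r[1] * pCam[1] + r[2] * pCam[2] + t[0],
                r[3] * pCam[0] + r[4] * pCam[1] + r[5] * pCam[2] + t[1],
                r[6] * pCam[0] + r[7] * pCam[1] + r[8] * pCam[2] + t[2]
            };
        }

        public static void main(String[] args) {
            double[] identity = {1, 0, 0, 0, 1, 0, 0, 0, 1};
            double[] position = {0.5, 0.0, 1.2};   // device 0.5 m right, 1.2 m forward
            double[] depthHit = {0.0, 0.0, 2.0};   // obstacle 2 m ahead of the camera
            double[] world = toWorld(identity, position, depthHit);
            System.out.printf("world point: (%.2f, %.2f, %.2f)%n", world[0], world[1], world[2]);
        }
    }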

Many people have imagined a handheld that could someday map and quantify the world around it just by scanning or passively sensing our environment. A recent smartphone project from IkeGPS accomplishes precise measurement with a laser inside an add-on case for your phone. With Google controlling the world’s most popular phone platform, and now adding this integrated and dedicated mapping capacity, our phones will soon have some incredible possibilities. There are implications for augmented reality, indoor navigation, gaming, and the building and design professions.

Human Scale

Google’s Project Tango technology builds on computer vision and robotics research, with the stated goal of giving the handheld device a human-scale understanding of space and motion. The specs and capabilities of this device are impressive: an integrated 4MP camera, a motion tracking camera, a depth sensor, and two computer vision processors; full 3D tracking in real time; and a quarter million 3D measurements every second that update the position and orientation of the phone and fuse into a single 3D model.
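
A quarter million points per second would quickly swamp a phone’s memory if kept raw, so some form of spatial compaction is implied. One plausible approach (our sketch, not Google’s published method) is a voxel grid that keeps one representative point per 5 cm cell:

    import java.util.HashMap;
    import java.util.Map;

    // Illustrative sketch: absorbing a high-rate stream of 3D measurements
    // into a compact model by keeping one point per 5 cm voxel. All names
    // here are ours, not Google's.
    public final class VoxelMap {
        private static final double CELL = 0.05; // 5 cm voxels

        private final Map<String, float[]> cells = new HashMap<>();

        /** Keep the first point seen in each voxel; later hits in the same cell are dropped. */
        public void insert(float x, float y, float z) {
            long ix = (long) Math.floor(x / CELL);
            long iy = (long) Math.floor(y / CELL);
            long iz = (long) Math.floor(z / CELL);
            String key = ix + "," + iy + "," + iz;
            cells.putIfAbsent(key, new float[] {x, y, z}); // could average per cell instead
        }

        public int size() { return cells.size(); }
    }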

These devices have the potential to inform 3D models to an unprecedented level of detail. Augmented reality hinges on the ability to capture surroundings with enough accuracy to mesh your precise position with a digital model overlay that provides added information. Google already gathers a great deal of detail about what surrounds us through its Street View mapping cars, data on what people search for and find with its online maps, and the location and movement feeds from Android users. The sensor view of the phone fused with the data Google already has could revolutionize mapping by providing a true 1:1 map.
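
The overlay step itself is a standard projection: once a virtual annotation’s position is known in the camera’s frame (for instance via the transform sketched earlier), a pinhole camera model maps it to a pixel. The intrinsics below are made-up example values, not real device calibration:

    // Illustrative sketch: the core of an augmented-reality overlay, projecting
    // a camera-frame point to pixel coordinates with a pinhole model.
    public final class Overlay {
        static double[] project(double[] pCam, double fx, double fy, double cx, double cy) {
            if (pCam[2] <= 0) return null;             // behind the camera: nothing to draw
            double u = fx * pCam[0] / pCam[2] + cx;    // horizontal pixel
            double v = fy * pCam[1] / pCam[2] + cy;    // vertical pixel
            return new double[] {u, v};
        }

        public static void main(String[] args) {
            double[] label = {0.2, -0.1, 3.0};         // annotation 3 m in front of the camera
            double[] px = project(label, 500, 500, 320, 240);
            System.out.printf("draw label at pixel (%.0f, %.0f)%n", px[0], px[1]);
        }
    }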

Advance Work for Robots

Harnessing the capacity to map in order to create a larger 3D model that powers applications is the genius behind Google’s move, and something that ties in directly with their existing mapping capacity. This fits a pattern for Google that cleverly harnesses the users of these services to act as agents who inform each other’s experience. This trend differs from a Wiki-like approach in that participation is more passive. By using the phone you map and share your surroundings, stitching your personal 3D map into the larger whole. The advantage is an incredible gain in efficiency: the map is constantly updated by users who don’t have to be actively mapping. It’s not an OpenStreetMap, it’s an AutomatedStreetMap.
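
The stitching step reduces to expressing each personal map in a shared frame. Assuming the alignment between a user’s local frame and the global frame has already been solved (say, from GPS plus visual matching), the merge is just a transform and append. A hedged sketch:

    import java.util.ArrayList;
    import java.util.List;

    // Illustrative sketch: stitching one user's local 3D map into a shared map,
    // given an already-solved alignment (rotation r, translation t).
    public final class MapStitcher {
        static List<double[]> merge(List<double[]> shared, List<double[]> local,
                                    double[] r, double[] t) {
            List<double[]> out = new ArrayList<>(shared);
            for (double[] p : local) {
                out.add(new double[] {            // rotate, then translate each local point
                    r[0] * p[0] + r[1] * p[1] + r[2] * p[2] + t[0],
                    r[3] * p[0] + r[4] * p[1] + r[5] * p[2] + t[1],
                    r[6] * p[0] + r[7] * p[1] + r[8] * p[2] + t[2]
                });
            }
            return out;
        }
    }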

While an individual’s ability to map is intriguing, the aggregation of these different 3D models is the true promise. Having a human-scale 3D model offers truly seamless indoor and outdoor navigation, and opens all-new areas to navigate thanks to a better-informed understanding of our surroundings. The highly detailed model is also a precursor to better machine navigation, with robots benefiting from the detailed human worldview that these devices will capture.

Dev Kits Out the Door

Project Tango is the result of a collaborative research and development effort that includes a number of different university and industry labs, led by Johnny Lee, who helped Microsoft develop the Kinect. It isn’t a far-forward vision like Google’s autonomous car, or Project Loon’s global network of high-altitude balloons that distribute Internet bandwidth. This is a very tangible handheld product that goes out the door to developers in the next few weeks so they can start prototyping applications that take advantage of this new sensing capacity.

The device runs Android and includes development APIs that provide position, orientation, and depth data to standard Android applications written in Java or C/C++, as well as to the Unity Game Engine. The call is out to professional developers to push the technology forward.
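
Google hasn’t published the API details yet, but a pose-and-depth callback interface is the natural shape for one. The sketch below is entirely our invention for illustration; none of these names come from Google’s SDK:

    // Hypothetical sketch of what a pose-and-depth API for such a device
    // might look like from a standard Android app. These interfaces and
    // names are invented for illustration, not Google's published SDK.
    public interface MotionListener {
        /** Called at tracking rate with position in meters and orientation as a quaternion. */
        void onPoseUpdate(double[] positionXyz, double[] orientationWxyz);

        /** Called once per depth frame with packed XYZ points in the camera frame. */
        void onDepthFrame(float[] pointsXyz, int pointCount);
    }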

There are a great many interesting application areas, such as navigation for the visually impaired, easy model making of our interiors to aid home decorating and remodeling, and detailed navigation of store interiors to guide us to the exact shelf for the product we are looking for. All of these applications are intriguing and exciting, but the real excitement lies in the as-yet-unimagined capacities that this level of sensing will unleash.

View the Project Tango video here: http://youtu.be/Qe10ExwzCqk
Sign up to get a prototype here: http://www.google.com/atap/projecttango/
