The geographic information system approach of ESRI has long had ties to science. Recently, ESRI placed more emphasis on this role by designating senior executive David Maguire as chief scientist. V1 editor Matt Ball sat down with Maguire at the ESRI User Conference in San Diego to speak about the science initiatives at the company, both in terms of technology frontiers and research and development priorities.

V1: Since we met last year, you’ve taken on the role of chief scientist. This is a newly created position at ESRI, and very different from your job as director of Products and International. What does the job entail?

Maguire: GI Science is something I’ve been passionate about for many years, and it’s really the reason why I got into GIS in the first place. I wanted to get into a role at ESRI where I could be a little more hands-on with GIS, rather than managing other people who are building GIS.

I’m interested in both the science and the technology. Personally, I don’t think that one should have to make a choice between science and technology. In these days of large data sets you need very good technology in order to manage, efficiently process, and visualize the data. GIS professionals also need to understand the science behind the technology, because it can have such a profound impact on the work that is done and the results that are obtained. It is really important to make sure that when you process data you don’t introduce too much additional uncertainty and error, and that your visualizations are coherent and consistent with the data.

V1: One ongoing debate in the geospatial industry is the distinction between professional tools and web mapping tools and applications. What is your take on the distinction between web-enabled tools and professional tools?

Maguire: First of all, I don’t think that it is useful to draw a clear distinction between the consumer and the professional. Instead, I just tend to think about the types of things that people are trying to do and the questions they’re trying to ask. I think that determines the user persona and the type of application they need, and then the type of hardware and software architecture they need in order to support what they want to do.

So, for example, there are lots of people creating and publishing authoritative content on the web with a sort of top-down strategy. Their goal might be to create topographic maps, to build a database describing animal behavior, or to build a cadastral base map for people and publish it on the web. At the same time, there are other people who have cameras, GPS measuring devices, or even pen and paper in some cases, and they’re just observing things, collecting information, and using the web to organize, store, and publish the information.

It seems to me that when you combine citizen-centric information with professional information, you get the best of both worlds. The top-down content tends to be a sort of long-term average view of things. It takes a lot of time to create, a long time to QA, and a long time to get out there, but it tends to have quite a degree of longevity. On the other hand, user-generated content, or volunteered geographic information as it’s often called, tends to be more local in both space and time and is complementary to the authoritative information.

One familiar example of this approach is dining preference. If you want to go out for dinner at a restaurant, you can look in a guide to the area and find out what some critic (an authority) said about the restaurant, probably a year or two ago, and that will give you a general indication about things. But I would also look at the local blogs to see people’s take on it: for example, what ten people who ate there in the last month really think about it. This same notion of combining authoritative and user-generated content also applies to geographic applications.

V1: You brought up the Gartner hype cycle (http://vector1media.com/spatialsustain/?p=931) in your talk, and I found it to be a useful and interesting way to look at different trends in geospatial technology. You mentioned that SDI rested along the down slope of the “Peak of Inflated Expectations,” and that was one of the more intriguing placements on that half of the graph. I’d like to get your explanation of why SDI rests there.

Maguire: The hype cycle is about the amount of media exposure or hyperbole surrounding a technology, which reflects where it rests in the media’s mind. It doesn’t necessarily mean that a technology is bad or that it’s not going to be useful. Since the technology trigger, people have talked a lot about SDI and have said it’s going to be a universal infrastructure, everybody is going to get data, and it’s going to solve lots of problems.

I think the realism right now is that although there are some technical issues to do with building and sharing information, the main challenges are organizational. My own view right now is that SDIs are going to work very well for some types of communities and not for others. The big issue is who will support the necessary computing infrastructure so that information services are robust, reliable, and performant. The idea that grassroots efforts will be enough to support really high-quality applications is naïve.

I’m more in favor of a hybrid model of serving GIS data layers. In the case of the so-called framework datasets, somebody needs to take responsibility for obtaining, merging, and integrating that data, and for ensuring a general quality level. The necessary hardware, software, and networking infrastructure needs to be available to serve framework data sets in a robust, reliable, and highly available way. If we can establish that, then I think it’s possible for individuals or small organizations to publish their own more specialist content, which they can layer on top of the other data. Unless we get a solid base infrastructure, I think the whole edifice is a house of cards that won’t stand the test of time.

V1: There’s a growing trend in academia toward a more multidisciplinary approach to the environment and sustainability. The University of Washington has a new College of the Environment and Colorado State has done something similar. How do GIS tools meet that need for collaborative science?

Maguire: GIS has several important roles to play here. First, GIS provides the ability to easily ingest, organize, and present data, and so can help raise awareness of some of the key environmental issues and of the role that individuals can play in understanding them. Second, GIS is an inherently integrative technology that can help synthesize multiple disparate information sources.

One of the things we know about ecosystems and the environment is that they’re multifaceted (everything’s connected to everything else) and that relationships are very important. GIS can bring together multiple data sets and help us understand the complexity of these systems. A lot of the work I see right now has been about understanding climate, water, or the spread of urban centers, but what is more interesting, as far as I’m concerned, is not just to understand the changes over time but to understand the impacts of those changes on people, on science, and on society.

Because GIS can combine data sets, it can help us model scenarios and test hypotheses. And because it’s widely used in government and other organizations, we can connect scientific work with policy makers and make a persuasive case to do something about some of the deleterious impacts that are changing the world forever.

V1: Is integration with CAD and BIM high on your agenda?

Maguire: We’ve always felt that CAD and GIS are two different worlds, conceptually and in terms of the groups that use them, although both are built on similar graphics-based technologies that require precise 2D and 3D data sets.

GIS tends to emphasize multi-user transactional databases, spatial analysis and modeling, and high-quality cartography and visualization. CAD is more concerned with the graphical representation of real-world objects, with collecting information about structures and materials, and with designing new entities. CAD users often work at a finer scale than GIS users.

We are continuing our general strategy of integrating CAD software with our GIS software to make it easier for CAD users to access enterprise GIS databases, and GIS analysis and cartography tools.

V1: What plans does ESRI have for 3D?

Maguire: Some of the work that the MIT facilities group showed at the recent ESRI User Conference (3D visualization and analysis of buildings and infrastructure on the MIT campus) was actually done using our shipping software. We have some good capabilities in ArcGIS 9.3 to visualize 3D objects, execute 3D queries, and perform 3D geometric operations like overlap and intersect. We have a big project for ArcGIS 9.4 to enhance the 3D capabilities of our software. I feel that the main contribution we’ll make to the 3D market, when all is said and done, is 3D analysis.
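
To make the kind of 3D geometric operation Maguire mentions concrete, here is a minimal JavaScript sketch of an axis-aligned bounding-box overlap test, a common first pass before a more precise 3D intersection. This is our own illustration, not ESRI’s implementation; ArcGIS exposes such operations through its own geometry tools.

```javascript
// Minimal sketch of a 3D axis-aligned bounding-box (AABB) overlap test.
// Illustrative only; not ESRI's implementation.
// A box is { min: [x, y, z], max: [x, y, z] }.
function boxesOverlap(a, b) {
  // Two boxes overlap only if their extents overlap on every axis.
  for (let axis = 0; axis < 3; axis++) {
    if (a.max[axis] < b.min[axis] || b.max[axis] < a.min[axis]) {
      return false; // separated along this axis, so no overlap
    }
  }
  return true;
}

// Hypothetical example: a building extruded to 30 m against a utility corridor.
const building = { min: [0, 0, 0], max: [20, 15, 30] };
const corridor = { min: [10, 5, 25], max: [40, 8, 35] };
console.log(boxesOverlap(building, corridor)); // true
```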

V1: Do you see that 3D mission extending into dynamic modeling as well?

Maguire: Yes, but you’ve got to be a little bit careful when you get into dynamic models in 3D, because you generate a very large amount of data very, very quickly. There’s no intrinsic reason why it wouldn’t work. My only caution would be to watch the volumes of data and make sure that the system is sized appropriately to handle them, but that’s definitely the way we are going.
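
To see why volume is the concern, a back-of-envelope calculation helps; the numbers below are hypothetical, not from the interview.

```javascript
// Back-of-envelope data volume for a 3D dynamic model (hypothetical numbers):
// a 1,000 x 1,000 x 100 voxel grid, one 4-byte value per cell, stepped hourly
// for a week.
const cells = 1000 * 1000 * 100; // 100 million cells
const bytesPerCell = 4;          // a single float per cell
const timesteps = 24 * 7;        // hourly steps for one week
const totalBytes = cells * bytesPerCell * timesteps;
console.log((totalBytes / 1e9).toFixed(0) + " GB"); // ~67 GB for one variable
```

Even one variable on a modest grid runs to tens of gigabytes, which is why the system has to be sized before the model runs, not after.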

V1: The web GIS presentations on the main stage at this year’s ESRI User Conference were noticeably more flashy and polished in terms of visual presentation. Has visual presentation been a strong focus?

Maguire: Several people remarked in a similar way about these presentations. What’s really funny is that all of the functionality in those demos has been in our server for over a year, and all we’ve done is expose it through a simple-to-use, widely available API. The API is based on REpresentational State Transfer (REST), and it can be consumed by JavaScript. We’ve also linked the API to some of the new web-based GUI toolkit builders, specifically Adobe’s Flash/Flex and Microsoft’s Silverlight libraries.
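
As a sketch of what consuming such an endpoint looks like from JavaScript: the query pattern below follows the ArcGIS Server REST API, but the server URL, layer, and field name are hypothetical placeholders, and the modern fetch API stands in for whatever HTTP client an application actually uses.

```javascript
// Minimal sketch: query a (hypothetical) ArcGIS Server REST endpoint for
// features matching an attribute filter, asking for a JSON response.
const url =
  "https://example.com/arcgis/rest/services/Demo/MapServer/0/query" +
  "?where=" + encodeURIComponent("POP2000 > 100000") + // hypothetical field
  "&outFields=*" + // return all attributes
  "&f=json";       // response format

fetch(url)
  .then((response) => response.json())
  .then((data) => {
    // Each returned feature carries attributes (and geometry, for spatial layers).
    for (const feature of data.features) {
      console.log(feature.attributes);
    }
  })
  .catch((err) => console.error("Query failed:", err));
```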

We can take credit for managing the data, doing the backend processing, and fulfilling query requests, but all the praise for the sexy, nice-looking drag-and-drop and fades belongs to those toolkits. But you know, that’s what it’s all about really. It’s about leveraging other people’s work, using true information technology standards, to make better systems. It was actually relatively easy for us to connect those GUI toolkits given the open architecture that we have built.

I think it’s true to say that the days of the flat 2D HTML application are numbered now that those new toolkits have come along. I’m certainly looking forward to next year, when all of our users will have gone crazy knocking themselves out with the new GUI capabilities.

What’s also interesting to me is that the web interface is surpassing the desktop interface in terms of popularity. For the last ten years it seems that the desktop has been the standard for user interface design. Servers have always been very powerful, but the UI has been a bit clunky, with some limitations on the things we can do and a lack of interactivity and responsiveness. But now, with these new toolkits, I think the web and servers have leapfrogged straight over the desktop, and if anything they are now the state of the art in terms of sexy interfaces. It will be interesting to see what the desktops do to try to emulate or exceed the web innovations.

V1: One frustration that Jeff Thurston and I share, and that was integral to starting V1 magazine, is that not enough people are using spatial analysis tools. Many people have the title GIS Analyst but aren’t really doing much analysis. What is your feeling on that: should we be doing more, and how can we better enable GIS users to get the full promise from the tool set?

Maguire: First, I agree with you that most GIS applications, if you were to classify them, are about creating inventories of the natural and cultural environment or about visualizing the world. A relatively small number take GIS to its full potential, which is creating new knowledge and insight based on spatial analysis and modeling tools and techniques.

I honestly feel that we’re moving into a new era of much wider use of analytical tools and modeling approaches. One thing that absolutely struck me as I went through the Map Gallery at this year’s conference was the number of posters, compared to previous years, using geostatistical and spatial analysis, whether looking at networks or taking an analytical, model-based approach to understanding the world. One of the main reasons for this is that there are now more high-quality tools in our software. The knowledge of the people coming through undergraduate and graduate programs with a background in these sorts of things is also increasing all the time. Plus, I think there’s a general realization that we need strong GIS analysis and modeling to focus on the really critical problems of our time, such as climate change, the use of energy, the sustainability of natural resources, the equitable planning of new cities and transport systems, and improving the efficiency of our world in general. This is something that is dear to our hearts at ESRI, and something we’re pushing ahead on in a big way.
