Nicolas Regnauld, 1Spatial

Traditionally, maps were produced using a long and detailed process that started with the initial data collection and ran through to the final creation and printing of a map. Different map series, however, often had their own data collection processes, which meant that small-scale maps could be partly derived from larger-scale maps but had to be completed with information collected especially for that map.

National Mapping and Cadastral Agencies (NMCAs) therefore often had to maintain data captured at different resolutions, driven by the scale of the map it had been captured for. Such data was used exclusively for map reading, but this all changed when mapping agencies started to store their data in large databases, in vector format.

The aim of using a large central database was, and still is today, to collect data once at the largest scale, store it, and then continuously update and maintain it. This enables organisations to use the data as a central source for deriving and updating all of their products. The most complex task in the derivation process is generalisation: identifying the information relevant to the target map and using it to build a representation adapted to the new desired display scale.
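
As a minimal sketch of one such generalisation operator, the Python fragment below uses the Shapely library to simplify a detailed line for display at a smaller scale. The coordinates and tolerance are hypothetical, and a real generalisation process chains many operators (selection, simplification, displacement, typification) rather than this one alone.

    from shapely.geometry import LineString

    # A detailed road centreline as captured at the large (source) scale.
    # Coordinates are illustrative only.
    detailed = LineString([(0, 0), (1, 0.1), (2, -0.1), (3, 0.5), (4, 0.4), (5, 0)])

    # Douglas-Peucker simplification: vertices deviating from the trend
    # by less than the tolerance are dropped. A larger tolerance suits
    # a smaller target display scale.
    generalised = detailed.simplify(tolerance=0.3, preserve_topology=True)

    print(len(detailed.coords), "->", len(generalised.coords), "vertices")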

Ongoing Challenge

Automating the generalisation process has proved a major challenge. Despite more than 25 years of continuous research in this domain by organisations across the world, it is only in the last few years that systems capable of deriving full maps, mostly automatically, have started to emerge in production. Such systems have been built by NMCAs themselves, using specialist toolkits provided by software vendors. The challenge now lies with the software vendors to expand these toolkits into fully operational generalisation software. This would bring automated generalisation within reach of most NMCAs, the majority of which could not afford to develop their own generalisation processes.

Now that this generalisation dream, sometimes called the Holy Grail for NMCAs, is almost within reach, is it going to slip further away?

Moving to Data Sales

Maps are still a very useful medium but no longer account for a large part of the NMCAs' market; increasingly, NMCAs sell their data instead. In the past 15 years, new types of products have appeared in which data is sold for use in GIS software. These data products have evolved from simple line geometries with limited attribution to much more comprehensive models of the world, in which the data is organised into geographic themes, better structured, and better described through richer attribution.
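
To illustrate the difference, the fragment below contrasts a bare line geometry with a richly attributed, themed feature of the kind found in modern data products. The theme, attribute names and values are all hypothetical.

    # Early data product: geometry only.
    bare_line = {"type": "LineString",
                 "coordinates": [[-1.55, 53.80], [-1.54, 53.81]]}

    # Modern data product: the same geometry organised into a theme and
    # described by rich attribution (names and values are illustrative).
    road_feature = {
        "type": "Feature",
        "geometry": bare_line,
        "properties": {
            "theme": "transport",
            "classification": "A Road",
            "name": "Example Street",
            "surface": "paved",
            "lanes": 2,
            "last_updated": "2024-01-15",
        },
    }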

This new approach allows complex and useful analysis to be carried out on more capable GIS platforms. These platforms started as desktop applications and later evolved into client-server systems, always accessing data from a central database. This is now changing rapidly again: masses of data are available on the Web, and people want and expect to be able to use them on the move. GIS capabilities will therefore increasingly migrate into services hosted in the cloud, accessing data distributed across the Web.
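
As a sketch of this service-based access pattern, the example below uses the OWSLib library to request vector features from an OGC Web Feature Service. The endpoint URL, layer name and bounding box are assumptions for illustration only.

    from owslib.wfs import WebFeatureService

    # Hypothetical NMCA endpoint; substitute a real WFS URL.
    wfs = WebFeatureService(url="https://data.example-nmca.gov/wfs",
                            version="2.0.0")

    # List the feature types (themes) the service publishes.
    print(list(wfs.contents))

    # Fetch buildings within a bounding box; the layer name is illustrative.
    response = wfs.getfeature(typename=["topo:buildings"],
                              bbox=(-1.50, 53.75, -1.45, 53.80))
    print(response.read()[:200])  # start of the GML payload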

As people become used to these online services, they expect quick and accurate answers to their problems and queries. For example, they now want a map that acts as the support for the information they are interested in, rather than a static map alone. This is of course already becoming available, mostly in the form of map mashups generated by search engines or routing devices, with markers or itineraries displayed on top of a base map.
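
A mashup of this kind is straightforward to reproduce: the sketch below uses the folium library to overlay a marker on a slippy base map. The location and label are illustrative.

    import folium

    # Base map centred on an illustrative location.
    m = folium.Map(location=[51.5014, -0.1419], zoom_start=15)

    # Overlay a marker carrying the information the user actually cares about.
    folium.Marker([51.5014, -0.1419],
                  popup="Meeting point",
                  tooltip="Click for details").add_to(m)

    # Writes a self-contained HTML page: the map as support, not end product.
    m.save("mashup.html")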

Data providers are also multiplying. Google, Microsoft and crowd-sourced projects such as OpenStreetMap offer map portals covering a large proportion of the world. NMCAs have also started to offer their own map portals at a national level, providing services similar to those of Google but based on their own data. NMCAs still hold levels of detail in their data that these other providers do not, and the currency of their data is actively managed. This matters to professionals who might use the data to plan a new development, coordinate the emergency response to a disaster, or simulate the spread and impact of diseases or pollution.

Fit for Purpose

But this level of data management is not enough: the data must also be fit for purpose. If the main use of geographic data will in future be through services that discover the relevant data and combine it with other data, then the data must be ready to be discovered and processed by such services. For NMCAs, and for providers of geographic data in general, this means the focus will move away from prefabricated maps towards map components that can be assembled on demand to build custom maps. These map components will be required at different resolutions to facilitate their integration and presentation; not all information is available, or needs to be presented, at a very large scale.

So while NMCAs today offer analytical data at high resolution and derived maps at lower resolutions, the next requirement is likely to be for derived analytical data at lower resolutions, ready for integration with whatever thematic data customers want to cross-analyse, potentially all through automated online services. This means that more data components will need to be made available, at various resolutions, and maintained with the same currency.
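
The sketch below shows the kind of cross-analysis this enables, using the GeoPandas library to join a customer's thematic point data against a generalised administrative layer. The file names, column names and theme are hypothetical.

    import geopandas as gpd

    # Generalised NMCA component: administrative areas at a resolution
    # matched to the analysis (file name is illustrative).
    areas = gpd.read_file("admin_areas_250k.gpkg")

    # Customer's thematic data, e.g. reported pollution incidents,
    # reprojected to the same coordinate reference system.
    incidents = gpd.read_file("incidents.gpkg").to_crs(areas.crs)

    # Spatial join, then count incidents per area ("area_name" is an
    # assumed attribute of the administrative layer).
    joined = gpd.sjoin(incidents, areas, predicate="within")
    counts = joined.groupby("area_name").size()
    print(counts.sort_values(ascending=False).head())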

The good news is that the automatic generalisation processes used to derive generalised maps should already include most of the processes required to derive these components. Many NMCAs have already adopted the principle of deriving Digital Landscape Models (DLMs) from their large-scale data. These DLMs store representations of the world at different resolutions, held in vector databases organised by theme. Today they are mostly used as sources for deriving Digital Cartographic Models (DCMs), their cartographic counterparts, ready to be displayed as maps. They nevertheless constitute a good starting point for delivering data components at different resolutions, although they will have to be formatted to comply with standards that allow services to use them.
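
A minimal sketch of serving such components: the function below picks the coarsest DLM that is still at least as detailed as a requested display scale, avoiding both missing detail and clutter. The registry contents, theme names and scale thresholds are assumptions.

    # Hypothetical registry of DLMs: theme -> {source scale denominator: dataset}.
    DLM_REGISTRY = {
        "buildings": {10_000: "dlm10_buildings",
                      50_000: "dlm50_buildings",
                      250_000: "dlm250_buildings"},
    }

    def select_dlm(theme: str, display_scale: int) -> str:
        """Pick the coarsest DLM at least as detailed as the display scale.

        Scale values are denominators: 10_000 means 1:10,000.
        """
        candidates = DLM_REGISTRY[theme]
        suitable = [d for d in candidates if d <= display_scale]
        # Fall back to the finest available DLM if none is detailed enough.
        chosen = max(suitable) if suitable else min(candidates)
        return candidates[chosen]

    print(select_dlm("buildings", 25_000))   # -> dlm10_buildings
    print(select_dlm("buildings", 500_000))  # -> dlm250_buildings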

As NMCAs are expected to deliver information in whatever shape and form users with ever-growing needs require, the principle of capturing data once and using it many times remains highly relevant if they are to keep feeding the market without increasing production costs.

 
