The concept of spatial data quality is not only about the data. It is also about the impressions people form when using spatial information. And it is about supporting all of the underlying practice and use of spatial information – including geospatial technologies. If we’re not willing to address quality more substantially, and to pay attention to the propagation of error and inaccuracy, then why concern ourselves with the details of technologies and how they perform? Better quality could actually ease the way forward, help to get more people on the same wavelength and expedite understanding while reducing conflicts.
This column is about spatial quality – period. While some consumers and applications do not require high accuracy and detail, that does not take away from their value and usefulness; it simply points to fit-for-purpose applications. The bigger issue, I think, is why we spend time, energy and money working toward accuracy and detail while seemingly being willing to propagate errors and inaccuracy into the final results – as evidenced by poor visual representation, a lack of metadata, and spatial data infrastructure (SDI) built in slow motion. We must ask ourselves: why are we doing what we do?
Not a day goes by that we don’t hear about a GPS device with higher accuracy and more features for achieving it. Virtual reference stations have not only revolutionised the use and delivery of GPS accuracy, they have improved upon it. More earth observation satellites are circling the earth than at any other time in history, each capable of much higher resolution than was possible not long ago. Do you remember, just a few short years ago, when we fell over ourselves if we could acquire Landsat imagery twice a year for the same place at 90 m resolution? Wow – we are most fortunate today.
It can’t be much more than four years since we looked at computer-generated visualisations of city buildings and saw only square blocks in similar colours, most without windows, for which generating even a handful of viewpoints took ages – and that was before animation and motion tracking.
We’ve come a long way. It is an exciting time and these technologies can potentially answer some of the world’s most pressing problems. In fact, they can do that in ways that we never imagined possible not long ago.
So – why do most web maps continue to use a Mercator projection as the sole basis for visual representation? Even the least enlightened among us wonders why an airplane trip from North America to Europe does not fly over the Azores, but instead flies much further to the north.
We see maps of the 49th parallel between Canada and the United States and ask ourselves, “why is the border a straight line?” We take out a paper map, plan a trip with a ruler between two places, then enter the same route into a computer and get a different distance – so what’s up with that?
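The paper-map puzzle comes down to measuring a straight line on a projected plane instead of an arc on the globe. A minimal sketch of the great-circle calculation, using the haversine formula with a rounded mean Earth radius (the city coordinates below are approximate, chosen only for illustration):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    R = 6371.0  # mean Earth radius in km (a rounded assumption)
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

# Approximate New York (40.71 N, 74.01 W) to London (51.51 N, 0.13 W):
d = haversine_km(40.71, -74.01, 51.51, -0.13)
print(d)  # roughly 5570 km
```

A ruler on a Mercator map follows a rhumb line, which between these two cities is longer than the great circle – which is exactly why the flight arcs north, nowhere near the Azores.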
Don’t you find it odd that many GIS packages today provide the means to enter metadata, yet even the most experienced professionals are not entering the information? (And why is it that government people do it more often, anyhow?)
Then some of us stand back, point a finger and say, “you know, spatial data infrastructures are not working out!” Is it any wonder? Mind you, most SDI projects I have witnessed have been successful, at least by the criteria I would evaluate them on.
Namely, the participants work together on the same issue (often from different countries), manage to reconcile administrative tasks, spread out responsibilities and generate a final product, complete with lively discussions about the pros and cons of what they experienced. After all, isn’t that the point?
Why haven’t we been able to develop fuzzy border analysis (uncertainty modelling) more effectively? And why does everyone discuss maps and spatial analysis only in definitive, absolute terms? Have we lost the ability to be flexible, to think dynamically and to tolerate differences? Spatial technologies, cartography and mapping ought to be leading the discussion about differences, not attempting to reduce it solely to common denominators.
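As a toy illustration of what fuzzy border analysis might look like, here is a hypothetical membership function (the function name and the 50 m half-width are invented for this sketch) that grades a point’s membership across an assumed uncertainty band around a mapped border, rather than flipping from “inside” to “outside” at a hard line:

```python
def border_membership(distance_m, halfwidth_m=50.0):
    """Fuzzy membership in region A for a point at a signed distance
    from a mapped border (positive = toward A, negative = toward B).
    Within the uncertainty band the membership grades linearly
    instead of jumping from 1 to 0 at a crisp line."""
    if distance_m >= halfwidth_m:
        return 1.0
    if distance_m <= -halfwidth_m:
        return 0.0
    return (distance_m + halfwidth_m) / (2 * halfwidth_m)

print(border_membership(100))   # 1.0  (well inside A)
print(border_membership(0))     # 0.5  (on the mapped line itself)
print(border_membership(-25))   # 0.25 (probably in B, but not certainly)
```

The point is not this particular formula – it is that the representation admits that the line on the map has a width, which is exactly the flexibility the column is asking for.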
Earlier I wrote about 3D GIS, asking which geospatial technologies would be impacted. The road ahead is fairly clear in terms of 3D: many people will arrive with different approaches, using different technologies, and attempt to represent a 3D world. If you think the 2D world is wiggling around currently in terms of projections, just wait for 3D objects that seemingly tilt and sway in their placement.
True – there is no exact way to represent the 3D world in 2D, but we don’t seem to acknowledge that much or ask questions about it. Without more rigorous information about data quality, I question how far ahead we can move. Without it, the message we send to a watching audience is that quality does not matter. And when it does not matter, there are no real reasons to support standards – or at least standards, and what they can do, are not well understood. If people can’t grasp why documentation, for example, would be useful to how they use the information they are provided with, then they never get the opportunity to learn that, in some cases, they may be wasting their time forcing a square peg into a round hole.
I’m not sure any of us expects a single homogeneous data set to answer all questions around the world, but assembling a data set that can address even some questions is next to impossible without documentation and metadata. Consider the impacts when Person A uses data of unknown quality, Person B adds in more data, and Person C finally uses the information – only for it to fall apart completely upon application. Errors propagate, and there is no way of understanding why or where this happened so that it can be corrected.
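The Person A/B/C chain can be sketched numerically. Under the standard assumption that the positional errors are independent, root-mean-square errors combine in quadrature, so the final product is worse than its worst ingredient – and without metadata, no one downstream knows by how much (the metre values below are invented for illustration):

```python
import math

def combined_rmse(*rmse_values):
    """Combine independent RMS errors in quadrature: the total
    error is the square root of the sum of squared contributions."""
    return math.sqrt(sum(e ** 2 for e in rmse_values))

# Hypothetical chain: Person A digitises at +/-2 m, Person B overlays
# data captured at +/-5 m, Person C adds a layer good to +/-10 m:
total = combined_rmse(2.0, 5.0, 10.0)
print(round(total, 1))  # 11.4 (metres)
```

With metadata recording each contribution, Person C could at least see that the result carries roughly 11 m of positional uncertainty and judge whether it is fit for purpose; without it, the failure is a mystery.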
As we hurtle forward to a robotic future, many of the applications that directly impact your life will depend upon spatial data quality: water routing through pipelines, lighting and energy, how your car moves on the road and responds, and so on. It is at this juncture that spatial information will have its greatest and most useful impact – but one for which we need to ask ourselves: are we prepared? We are already witnessing the development of digital governance strategies for environmental affairs, driven by sensors and automated decision-making. Yet how many of us operate with a knowledge of the quality of the spatial information and data feeding these systems?
Has the time come for us to recognise that the debate is not about consumer versus professional spatial data, but about what we want to do and what we need to accomplish it – safely, and with the knowledge that it is the right information for the right place at the right time?
Jeff Thurston is editor of V1 Magazine and V1 Energy Magazine for Vector1 Media in Europe, Middle East and Africa.