Sensors and Systems

Fred Limp has been at the forefront of spatial technology research and development projects as the founder and past director of the Center for Advanced Spatial Technologies (CAST) at the University of Arkansas. Limp has written several prescient articles about the growing importance of 3D in the geospatial sphere, and he continues to work on research projects that apply 3D data collection, integration, and analysis tools to pressing problems. V1 Magazine editor Matt Ball spoke with Limp about his work and the trends in the 3D space.

V1: You’ve been an early adopter of LIDAR technologies to capture heritage sites, and I imagine you’ve seen a great deal of innovation and progress in making those tools easier to use?

Limp: We have been doing quite a bit with LIDAR, generally from two really different directions, and it’s been interesting. I always think there are two cycles in technology adoption: using the new technology to do the old things better, faster, and cheaper, and then discovering a whole set of things we can now do that we hadn’t thought about before.

Under the category of the old things, we’ve been doing a number of DEM analyses and modeling studies, looking at ways to take advantage of LIDAR. LIDAR, like lots of technologies, was used to produce products such as bare-earth models, but there’s also a lot of value in the raw LIDAR data. So a lot of what we’ve been looking at is the utility of the data and what sorts of possibilities are out there. We’ve looked at tools like Oracle Spatial as a manager for those very large data sets and a toolkit to manipulate them. We’ve been looking at other third-party solutions too, because once you get to billions of observations you have a whole data management issue.

On the application side, we’ve been looking at forest canopy analysis and how we might use LIDAR for that. More relevant to the kinds of things you’re interested in, though, are the urban infrastructure issues, and there are a number of really interesting possibilities there.

V1: Has LIDAR branched off into a lot of different specialties at this point in time? Are there different approaches that you take given different applications?

Limp: It really depends, and I think we have an interesting perspective because we’ve got one research project where we’re looking at multi-return LIDAR for canopy characterization and understory delineation, and we have another project using LIDAR to build a hydrologically-enforced DEM in a very low-relief environment, which is an interesting problem in itself. We have another LIDAR-based project where we’re trying to look at rapid strategies for delineating urban structures and infrastructure.

There are a lot of people working with LIDAR now, but they don’t necessarily look at the data, because they work with products rather than with the data. The density of the points means there are some really interesting things you can do, and you’ve also got intensity information from the LIDAR return. For example, a bare-earth LIDAR-derived data set with (say) a 3-meter post spacing has raw data at nadir that is much denser than 3 meters. I think there is a lot of content in LIDAR data sets that can be very useful.
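
To make the raw-data point concrete, here is a minimal sketch, assuming the open-source laspy library and a hypothetical LAS tile on disk; a real workflow would add tiling, projection handling, and proper ground classification.

```python
import numpy as np
import laspy  # open-source LAS/LAZ reader

# Hypothetical tile of raw airborne LIDAR returns.
las = laspy.read("survey_tile.las")

x, y, z = np.asarray(las.x), np.asarray(las.y), np.asarray(las.z)
intensity = np.asarray(las.intensity)
return_number = np.asarray(las.return_number)
number_of_returns = np.asarray(las.number_of_returns)

# Rough average point density over the tile's bounding box.
area = (x.max() - x.min()) * (y.max() - y.min())
print(f"{len(x):,} returns, ~{len(x) / area:.1f} points per square unit")

# Last-of-many returns are the usual candidates for ground hits;
# first returns tend to catch canopy and rooftops.
last_returns = z[return_number == number_of_returns]
first_returns = z[return_number == 1]
print(f"mean last-return elevation: {last_returns.mean():.2f}")
print(f"mean intensity: {intensity.mean():.1f}")
```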

But, you have to have an environment to be able to manipulate raw data, which is the trick. We’re seeing those sorts of tools grow. Increasingly, there are some really nice LIDAR data processing toolkits that allow you to take raw data and extract information of relevance to you. So, actually, that’s sort of an emerging area, not just LIDAR as a source for a product but the actual data sets and people manipulating them for multiple purposes.

V1: What are some of the implications for increasing fidelity of LIDAR data and increasing frequency of LIDAR scans of an area?

Limp: A simple case is a LIDAR acquisition over an area where the community already has building footprints. Literally within a few minutes, you have the tools to build a 3D massing study for the entire urban area. That’s very easy to do these days, and then you can assess changes in that mass by looking at change delineation. Assessors are very interested in seeing what changes are taking place. You also have planners looking at change. If you mix a multi-spectral instrument with LIDAR, using tools such as Definiens’ eCognition, you can not just see that something has changed but also get a sense of what sort of change occurred.
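
A rough sketch of that massing-and-change workflow, assuming the existing footprints have been rasterized onto the same grid as two LIDAR-derived surfaces; all filenames, grids, and the 2 m change threshold are hypothetical.

```python
import numpy as np

# Hypothetical, co-registered rasters for one urban area: a bare-earth DTM,
# a first-return DSM, and a raster of building-footprint IDs (0 = no building)
# rasterized from the community's existing footprint layer.
dtm_2008 = np.load("dtm_2008.npy")
dsm_2008 = np.load("dsm_2008.npy")
footprints = np.load("footprint_ids.npy").astype(int)

def massing(dsm, dtm, footprints):
    """Mean height above bare earth per footprint: a quick 3D massing model."""
    ndsm = dsm - dtm
    return {fid: float(ndsm[footprints == fid].mean())
            for fid in np.unique(footprints) if fid != 0}

heights_2008 = massing(dsm_2008, dtm_2008, footprints)

# Change delineation: difference the per-footprint heights between acquisitions.
heights_2012 = massing(np.load("dsm_2012.npy"), np.load("dtm_2012.npy"), footprints)
changed = {fid: heights_2012[fid] - heights_2008[fid]
           for fid in heights_2008
           if abs(heights_2012[fid] - heights_2008[fid]) > 2.0}
print(f"{len(changed)} footprints with more than 2 m of height change")
```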

V1: The analysis function is something that you’ve focused on in the past regarding the benefits of more 3D modeling.

Limp: The thing that we’ve noticed is that the industry tends to verticalize what it does according to a data stream. So, there’s aerial photography, there is LIDAR, there is terrestrial LIDAR, etc., etc. I think that’s perfectly understandable, but in fact what you really want to do is fuse those multiple input streams in effective ways and then that gives you a 3D product that you then use for analysis or other purposes.

One of the things that we’re really seeing is that to understand terrestrial photogrammetry, you have to understand terrestrial LIDAR, you have to understand airborne LIDAR, and you have to understand multi-spectral instruments. If you deal with all of those, the information value of the product grows dramatically, because which tool or combination of tools you use depends on what you’re trying to do.

If you’re trying to quickly create a 3D urban infrastructure model for a community planning charrette, then Google’s SketchUp tools and Street View are probably the quickest way to do something like that, but that level of detail wouldn’t support lots of other things. Knowing the particular combination of tools and products to use is increasingly becoming an interesting question that people need to be thinking about. In other words, if you’ve got LIDAR, you can do A, B, and C, but C may actually be better done with a different tool, and vice versa.

V1: I think it gets really interesting when the approach fuses different data collection methods.

Limp: Absolutely. For example, the geometries and analytical operations that need to be performed on terrestrial LIDAR are really different from the analytical operations and characteristics you’re looking at with airborne LIDAR. With airborne LIDAR, you’re often operating in more or less a single plane. There’s a little relief because of trees, but you’re often working to get to bare earth – essentially 2.5D. With terrestrial LIDAR, the instrument is measuring something close up, something at a middle distance, and something farther out. It’s true 3D data – not simply 2.5D.
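
As a toy illustration of that 2.5D-versus-3D distinction, here is a sketch that grids a synthetic airborne cloud into a one-value-per-cell surface; the grid size and the min-z ground proxy are illustrative simplifications, not a real filtering algorithm.

```python
import numpy as np

# Synthetic stand-in for an airborne point cloud: N x 3 array of (x, y, z).
rng = np.random.default_rng(0)
pts = rng.uniform([0.0, 0.0, 200.0], [500.0, 500.0, 230.0], size=(100_000, 3))

# Airborne habit: collapse to 2.5D, one elevation per ground cell.
# Here the minimum z per cell stands in (crudely) for a bare-earth value.
cell = 1.0  # 1 m grid
ix = (pts[:, 0] // cell).astype(int)
iy = (pts[:, 1] // cell).astype(int)
dem = np.full((ix.max() + 1, iy.max() + 1), np.inf)
np.minimum.at(dem, (ix, iy), pts[:, 2])
dem[np.isinf(dem)] = np.nan  # cells with no returns
# dem[i, j] now holds a single z per (x, y) cell: the data are 2.5D.

# A terrestrial scan cannot be collapsed this way: a wall, the doorway in it,
# and the furniture behind it can all share the same (x, y), so the full 3D
# point set (or a voxel / mesh structure) has to be preserved.
```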

All of the characteristics and the signals and the ways you manipulate the data are very different between terrestrial and airborne LIDAR, and you have to use rigorous numeric methods if you want to pull those two together. Each of those domains has its own requirements, but if you can fuse those data streams you really have a much more powerful product.

V1: You’ve been doing the crosswalk between CAD and GIS for some time. Simulation and visualization seem like they should be part of both of those tool sets, but aren’t yet.

Limp: There are certainly great people and a lot of places thinking about doing that, but it hasn’t gotten to the point where you would call visualization and simulation routine product capabilities.

We have a large NSF-funded project to look at visualization strategies in a number of domain areas, including urban infrastructure, but as part of the project we’re also working with people who are doing molecular modeling. They do 3D modeling of molecular structures with millions of molecules. As they zoom in and out of their representation of 200 million molecules, they render all 200 million molecules every time, so they require gigantic supercomputers. We’re saying to them that over in the geospatial arena we have thought of clever strategies like image pyramids and view frustums, and you don’t really have to render all 200 million molecules every time.
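
A toy sketch of the two rendering tricks mentioned here, view-frustum culling and distance-based levels of detail; the cone test, the distance breaks, and the triangle counts are all illustrative numbers, not anything from the actual project.

```python
import numpy as np

# Stand-in scene: positions of a couple of million objects (molecules,
# buildings, points); the real case would be far larger.
rng = np.random.default_rng(1)
scene = rng.uniform(-1000.0, 1000.0, size=(2_000_000, 3))

camera = np.array([0.0, 0.0, -1200.0])
view_dir = np.array([0.0, 0.0, 1.0])   # looking down +z
half_angle = np.deg2rad(30.0)          # crude circular "frustum"

rel = scene - camera
dist = np.linalg.norm(rel, axis=1)

# 1. View-frustum culling: skip everything outside the view cone.
cos_to_axis = (rel @ view_dir) / np.maximum(dist, 1e-9)
visible = cos_to_axis > np.cos(half_angle)

# 2. Level of detail: farther objects get cheaper representations
#    (the image-pyramid idea), so the per-frame cost stays bounded.
lod = np.digitize(dist, bins=[300.0, 1000.0, 2000.0])  # 0 = finest ... 3 = coarsest
triangles_per_lod = np.array([5000, 500, 50, 1])
frame_cost = triangles_per_lod[lod[visible]].sum()
print(f"{visible.sum():,} of {len(scene):,} objects drawn, "
      f"~{frame_cost:,} triangles this frame")
```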

You’d think that those doing molecular modeling would be all over those sorts of things, but they’re not. They’ve been doing some really amazing things with parallel computing, visualization, and structuring, but not with rendering tricks. You quickly see that every domain has its own technologies, its own strategies, its own approaches, and its own understanding of information. You either assume that the other guys don’t know what you’re doing or that they know exactly what you’re doing, but in fact the two groups may have some interesting holes, and when you plug them together you get some really exciting synergies.

V1: One area that we’ve talked about before is the simulation of temporal aspects and monitoring change over time.

Limp: We’re working on the issue of collaboration in 3D for planning purposes. With the non-specialist, the consumer, the city planner, you want to model temporal changes but you also want them to be able to interact and look at what changes happened as the result of their actions. So, we see this as a really interesting combination of technologies.

Imagine, if you will, a very large high-resolution stereo display, 20 feet across and 10 feet high. In front of it are a set of collaborative multi-touch screens, and as people interact with the touch screens, the results of that change – whether it’s a new strategy for a street or a molecular reorganization – are displayed to a variety of people on the large screen. We think that sort of interaction is going to be very powerful for a lot of people. It puts together existing technology pieces in a new way.

V1: The projects at CAST seem to be at the forefront of technology development, and in a lot of different domains. I think you’re quite unique in your grasp of technical evolution before it’s mainstream and practical for a lot of users.

Limp: We have a lot of collaborative research projects with the computer science community, and we’ve found an interesting social process that goes on. The geospatial community, generally speaking, uses commercial software products. Computer scientists are very much not interested in projects that involve commercial software.

Primarily, their research applications deal with problems that require very large parallel applications with very large data sets and many thousands of CPUs working simultaneously, and they’re developing code in an open-source environment. Almost all their work is on Linux, and they work almost exclusively with open source. When we bring a geospatial problem to the computer science community, it’s an interesting question how you move from a commercial environment into this research environment. In many instances, you really have to rewrite the algorithms or the code, or rethink what your strategies are, because, for example, what would 3,000 licenses of product X cost you?

That’s one of the issues; plus, you obviously can manipulate the interior elements of the code when you have the source, but you can’t with a commercial product. What that means is that you have this interesting dynamic where the modern consumer of geospatial technology is working in a commercial software environment, but the research on the next generation, or the generation after that, of geospatial technology is being done in an open-source environment. So the question is how those two merge back together so that the consumer gets the benefit of this research initiative.

The NSF, for example, is putting billions of dollars into high-performance computing research. So it’s an interesting question that we deal with, because we have one foot in each of those communities. It’s not uncommon for us to think about a problem that you would approach with a commercial software product, but then we typically have to rethink that and see whether, for example, there might be an open-source software module that does roughly the same thing, or perhaps code a new one.

V1: I think back to the days when there were 20 new products to review in a given month. I don’t see that level of innovation now. It seems to me a lot is being done on the open-source side, but primarily around projects; the projects drive the tool set innovation. It’s just a different paradigm.

Limp: The downside of that research paradigm is that you may get something that’s very effective and does something in a very interesting way, but that wouldn’t be something a consumer of the product is interested in. The interesting dynamic that I think is going to drive the economics of this whole area is how a commercial software environment with an installed base takes advantage of this open-source work to cycle up its product lines so that they have these new capabilities and functionalities. I don’t have an answer to that, but I think it’s an important question, because the really aggressive research is taking place in this open-source setting, particularly in an NSF-funded environment, because that’s where the computer scientists work. For every NSF dollar for geoscience research there are thousands or perhaps tens of thousands of computational research dollars.

V1: If we get back to the large vision, are we moving toward a kind of visualization and simulation environment that approaches the idea of a Digital Earth?

Limp: There are elements of that vision, but I’d say yes and no. One of the things that’s fascinating is what happens when scale changes; it’s not just a simple case of there being more and bigger data sets. Different things happen.

One of the real challenges that I see for the whole geospatial community is that when you “zoom in,” you do a whole lot more than just zoom in. You have to change paradigms of analysis and sometimes the data structure. One of the things that I find particularly exciting, and that is helping us begin to think about this, is the work being done on CityGML and its use of the concept of levels of detail.

Our community has a bunch of verticals: there’s the remote sensing vertical, the LIDAR vertical, and the terrestrial laser scanning vertical. But we also have all these horizontal applications, where we have to decide what tools, what methods, and what analysis to use at different scales. The level-of-detail concept in CityGML begins to give us a paradigm for moving seamlessly from scales of 1:50 to 1:100,000. We need to develop analysis and display systems where we know how to begin thinking about that.
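
To give the LoD idea a concrete shape, here is a small sketch of scale-driven representation switching, modeled loosely on CityGML’s LoD0 through LoD4; the scale break-points are invented for illustration and are not part of the CityGML spec.

```python
# A sketch of the level-of-detail idea: choose a representation from the
# display scale, in the spirit of CityGML's LoD0-LoD4 classes.

LOD_DESCRIPTIONS = {
    0: "LoD0: regional 2.5D terrain and footprints",
    1: "LoD1: extruded block models",
    2: "LoD2: blocks with roof structures",
    3: "LoD3: detailed architectural exteriors",
    4: "LoD4: exteriors plus interiors",
}

def lod_for_scale(scale_denominator: int) -> int:
    """Map a map scale (e.g. 50 for 1:50) to an illustrative LoD."""
    if scale_denominator >= 50_000:
        return 0
    if scale_denominator >= 10_000:
        return 1
    if scale_denominator >= 2_500:
        return 2
    if scale_denominator >= 500:
        return 3
    return 4

for denom in (100_000, 25_000, 5_000, 1_000, 50):
    print(f"1:{denom:<7,} -> {LOD_DESCRIPTIONS[lod_for_scale(denom)]}")
```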

I think that’s a really important idea, and I don’t know if people have really addressed it. It’s about how you think about data as you move between different scales, and what analytical operations make sense. To me, that whole scale issue is something we really have to think about a lot, because so much changes as you move through different scales.

V1: A lot of what our magazine focuses on has to do with global change; climate change is the leading issue there, and adaptation is very necessary. Are some of your projects focused on monitoring and managing change?

Limp: We did a carbon budget analysis for all of Central America that fed into the IPCC protocols. Again, this comes back to the issue of scale, because these areas have relatively undeveloped digital infrastructure, with not many maps in digital form. How do you organize the question of global change, which happens on a global basis, when data from different locations differ so dramatically in scale?

In a human sense, most individuals can’t really get their heads wrapped around an entire global question; it’s just too abstract. So what we need to be able to do is move the global model down to a local representation. We really need to figure out ways to do that. When people start thinking about these larger problems in terms of their individual impact, we will see real change. We need to represent and display the local consequences of a global process – that is very critical.

Jason Tullis, for example, is one of the faculty members here who has been looking at LIDAR modeling of coastal plain areas. Once you have a very good high-resolution DEM of a coastal area, it becomes relatively straightforward to begin to show how the consequences of global warming might affect a particular location. There’s that interplay between these levels of information, and to me connecting those scales together is going to be critical for our field to be truly useful to people.
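
As a minimal sketch of how a high-resolution coastal DEM can translate a global number into a local picture, here is a simple “bathtub” inundation calculation; the DEM file, cell size, and rise scenarios are hypothetical, and this is not presented as the method used in that project.

```python
import numpy as np

# Hypothetical LIDAR-derived coastal DEM, in meters above current mean sea
# level, on a regular grid; the filename and 1 m cell size are assumptions.
dem = np.load("coastal_dem.npy")
cell_area_m2 = 1.0 * 1.0

for rise_m in (0.5, 1.0, 2.0):
    # Simple "bathtub" model: every cell at or below the new sea level floods.
    # Serious studies also check hydrologic connectivity to the ocean.
    flooded = dem <= rise_m
    area_km2 = flooded.sum() * cell_area_m2 / 1e6
    print(f"Sea-level rise of {rise_m:.1f} m -> ~{area_km2:.2f} km^2 inundated")
```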

V1: It seems to me that the word fusion has a lot of play, with different levels of scale, different technologies, and different disciplines all coming together. It’s really interesting that with the online community and social interaction these things can come together. Do you think we’re making good progress in how we’re able to fuse technology and ideas?

Limp: There are some very effective ways, such as the virtual community of Second Life, but until relatively recently it was hard to get the real world into the virtual world because of the constraints on rapid 3D presentation across the web. If you had a very complicated 3D world in reality, you basically had to recreate it in Second Life using the Second Life tools. Where we need to go, and where we’re going, is a seamless connection between the virtual world and the real world in terms of their representation states. We don’t really have that quite yet.

One of our research directions here is how you make that connection, so that you’re able to move a representation of the real world into the virtual world and basically move back and forth. It’s a problem in technology, but it’s also a problem in social process and mindset. People who are really excited about the Metaverse may not be similarly concerned about the current real world. We’ve found that those conversations can be very productive when you get people who understand the geospatial community together with people who understand the virtual community. If you look at the metaverse and geospatial data visually, it all looks the same, but the technologies underlying them are really quite different.

We’re working on a number of projects where game engines are very powerful tools for exploring this virtual world. You can create very rich 3D settings with things like Autodesk’s Revit or similar Bentley products. You can also work with Cinema 4D and Vue, software products that make very rich virtual worlds from data. But if you want to move information from any of them into a context where you can easily and dynamically interact, you will probably use game engines. You have to really think through all sorts of subtle technology elements, like the number of polygons and the representation of textures in particular data formats. But once you figure out how to do that, you can make it fast and effective. Game software developers have already addressed many technical questions about making displays rich, fast, and interactive.
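
The polygon and texture tradeoffs alluded to here are easy to see with some back-of-the-envelope arithmetic; the sketch below uses entirely made-up budget numbers just to show the shape of the problem.

```python
# Back-of-the-envelope budgeting of the kind a game-engine port forces on you.
# Every number here is an illustrative assumption, not a measured figure.

triangle_budget_per_frame = 1_500_000       # assumed comfortable for the target GPU
buildings_in_view = 2_000
tris_per_building_source = 40_000           # typical dense CAD/BIM export (assumed)

tris_per_building_budget = triangle_budget_per_frame // buildings_in_view  # 750
reduction = tris_per_building_source / tris_per_building_budget
print(f"Each source model needs roughly {reduction:.0f}x decimation to fit the budget")

# Texture memory: an uncompressed 4096 x 4096 RGBA texture is 64 MiB,
# and a full mipmap chain adds about one third on top of that.
base_bytes = 4096 * 4096 * 4
with_mips = base_bytes * 4 / 3
print(f"One uncompressed 4K texture: ~{with_mips / 2**20:.0f} MiB with mipmaps")
```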

We have a faculty member here, Dave Frederick, who is recreating the interiors of Roman villas in the Digital Pompeii Project. He’s using the Unity game engine, and he’s built a very cool application where, with just a web browser, you can go in and walk around the villas of Pompeii. When you look at a particular artistic element there and click on it, it does a database query, pops up other windows, and tells you all sorts of things about it. The only thing the user needs is a browser. I think that’s another dramatic direction: the integration of game-engine technology into this whole process.

V1: What strikes me is that it’s all accessible; we’ve gotten past the point where creating rich 3D data is a huge hurdle. While it’s still not easy, it’s possible.

Limp: We’re at the possible-but-not-easy stage. There’s a lot of human manipulation of data to move back and forth. The next step is to automate that process, so that if you have a detailed virtual reality in an individual software package, the movement into the game-engine environment isn’t as hard as it currently is. It can take a month or two of an individual working on an already fully realized 3D model just to get it to the point where it works in the game engine.

That’s not so bad at one level, but if you’re trying to do a lot of it, it’s not a consumer solution yet. We’re still in the artisanal, pre-industrial phase.

V1: How excited are you right now in terms of what’s going on and the kind of work that you’re involved in?

Limp: It is a really exciting and interesting environment. We’ve been through this stuff for nearly 30 years at one level or another, and the level of energy and excitement that we see in 3D geospatial is equivalent to what I saw maybe 15 or 20 years ago in the GIS world.

Every day it’s like discovering a whole new thing. In the traditional GIS world we are essentially making incremental improvements, although I don’t want to diminish the good work that’s going on. In the 3D environment, overnight you find a whole new kind of “Oh, wow!” discovery. There’s a different kind of excitement in this 3D world.
