Sensors and Systems

UrtheCast is a company created to provide the first live high-definition video feed of Earth from space. Working jointly with Russian aerospace giant RSC Energia, the company is building, launching, installing, and operating two cameras on the Russian module of the International Space Station. Starting next year, video data of the Earth collected by these cameras will be down-linked to ground stations and then displayed in near-real time on UrtheCast's Web site or distributed directly to the company's partners and customers. Special correspondent Matteo Luccio spoke with Wade Larson, co-founder and chief technology officer (CTO), and Dr. George Tyc, chief technology advisor, of UrtheCast about the company's technology, business model, and applications.

ML: What motivated you to launch this project?

WL: George and I have been in the remote sensing sector for a very long time — 17 years for me, both in government and industry, and around 25 years for George — mostly on the space side, but certainly with a lot of interest and insight into the downstream side as well. Both of us used to work at MacDonald, Dettwiler and Associates (MDA). A couple of years ago, we were in Moscow and had a meeting with RSC Energia. They raised the possibility that MDA could put an advanced technology synthetic aperture radar (SAR) payload on the International Space Station (ISS), as a kind of technology demonstration. At the time, MDA was working on some pretty interesting next-generation SAR payloads. But we knew that idea was too expensive, that the non-recurring engineering cost was too high, and that politically there were other reasons why it wouldn't work. So, instead, we proposed that we put a video camera on the space station and broadcast the feed over the Web. We proposed the idea to MDA and it just wasn't a fit. Fast forward two years: George and I had both left MDA, and we ventured out, in conjunction with some other people, to build this company. It is fundamentally an Internet company.

ML: What technological advances enabled you to launch this project?

GT: There are two cameras. One is a classic, push-broom, medium-resolution, four-band imager. It is being built for space with our partner, the Rutherford Appleton Laboratory (RAL). They had this camera available, it could be built relatively inexpensively, it isn't particularly large, and it is made for space, so it became a good fit. At the altitude at which the space station flies, it gives you a 5.5-meter pixel size on the ground, or ground sampling distance (GSD).
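How a 5.5-meter GSD falls out of the optics and the station's altitude can be sketched with the standard first-order relation GSD = (pixel pitch / focal length) × altitude. In the minimal Python sketch below, only the roughly 400 km ISS altitude and the 5.5 m result trace back to the interview; the pixel pitch and focal length are hypothetical placeholders, not published specifications of the RAL imager.

```python
# First-order ground sampling distance (GSD) for a nadir-pointed
# push-broom imager: GSD = (pixel_pitch / focal_length) * altitude.

altitude_m = 400e3       # approximate ISS altitude (it varies over time)
pixel_pitch_m = 5.5e-6   # hypothetical detector pixel pitch
focal_length_m = 0.4     # hypothetical focal length

gsd_m = pixel_pitch_m / focal_length_m * altitude_m
print(f"Nadir GSD: {gsd_m:.1f} m")  # -> 5.5 m with these assumed values
```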

The other thing that made this project possible is that RSC Energia and Roscosmos, the Russian space agency, had already decided to experiment with a camera on the ISS. They had defined the experiment, rolled it all up, and obtained the approval of all the partners, but had not decided what the camera would be. So, when we came along, they loved the idea, and they already had, essentially, a programmatic way to enable it immediately. They had also developed a very high accuracy pointing platform that can point to plus or minus 175 degrees in each of two axes, which means that you can steer it to track a point on the ground. You can also flip it up and look at space targets. Once this project got traction with them — and they liked it a lot — they had a competition to decide who would get to use their pointing platform. We participated and they decided to give the platform to our project. We are very happy about that. That’s what enabled the video cam.

We are typically able to track a spot on the ground for a minute and a half and take video or images. To make that work and get good resolution, you need that kind of pointing system; to build it from scratch would have been quite expensive. The video camera has an aperture of 32 centimeters, so it is essentially an astronomy telescope. It was built by an American company and over the past year has been ruggedized to make it suitable for space. The manufacturer already had the parts for this telescope, because they had built one for another customer and had spare parts that we were able to buy right away. We were able to put it on a test stand to check it for vibration and work out whether it was possible to make it suitable for space, which we confirmed last summer. So, very quickly, that gave us a telescope of the right size and performance, at the right price point for our project. As for the detector, we are using a commercial 18-megapixel detector from a professional-grade DSLR camera that we have radiation-tested to ensure it will work in space. The camera electronics are built by RAL, leveraging the extensive know-how and technology they have developed for the numerous other optical instruments they have built for space. There is nothing really special about the technology, other than that we couldn't do a NASA-style development costing many tens of millions of dollars. It is a commercial project and it has to be affordable.
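A quick diffraction check shows why a 32-centimeter aperture is in the right class for high-resolution imaging from the station's orbit. Only the aperture comes from the interview; the wavelength and altitude below are generic assumptions.

```python
import math

# Rayleigh diffraction limit projected to the ground:
#   resolution ~= 1.22 * wavelength * altitude / aperture
aperture_m = 0.32        # 32 cm telescope aperture (from the interview)
wavelength_m = 550e-9    # mid-visible green light (assumption)
altitude_m = 400e3       # approximate ISS altitude (assumption)

res_m = 1.22 * wavelength_m * altitude_m / aperture_m
print(f"Diffraction-limited ground resolution: {res_m:.2f} m")  # ~0.84 m
```

In other words, under these assumptions the optics are physically capable of roughly meter-class detail, before accounting for the detector, vibration, and atmospheric effects.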

ML: What is your business model and who do you expect your audience to be?

WL: We are going after four revenue streams. The first is a typical, traditional remote sensing business model, where we would sell still images from the cameras. We don’t anticipate that our data is going to be the best in the world — it is certainly not going to massively compete with DigitalGlobe or GeoEye — but we think that there is some flex in the market for people who want good enough data at a lower price point. In that vertical we will use a very traditional approach of working with regional distributors who, in turn, will sell to their markets. So, we have a number of letters of interest from various regional players — in Asia, South America, North America, Europe, etc. Over the course of the next several months, we plan to mature those letters into full term sheets and contracts.

The second vertical is really a media play. The idea there is to sell the video content to a category of customers who traditionally have not played much in remote sensing — such as major news media outlets, gaming companies, or film studios, who might be interested in high-resolution, space-derived video data.

The third one is really a dot-com ad vertical. The notion there is to bring as many eyeballs as we can to our Web site and to create as much content as we can, some of it crowd-sourced, in order to make the site as compelling as possible. We are looking at a variety of ways to do that in an innovative and, hopefully, unobtrusive manner.

The final vertical is really potentially the most interesting, but it is the hardest one to anticipate where it goes. What we want to do there is open up our API and turn our Web site into a new Web platform in which we allow developers to take our content and produce all kinds of different applications. We are going to fund some apps internally and some externally, but we also want to let the creative juices of the market go nuts on this. We are trying to go after the mass market and do something different than what has been done in remote sensing in the past.

ML: What will the impact be?

WL: It is really hard to say. I think that there is some interesting potential to democratize the power and appeal of remote sensing. I think the immediacy of video from space will have a lot of visual and emotional impact on people. We are in discussion with the Russians about additional cameras — maybe wide angle cameras that would look out into the horizon and allow you to see sunsets and sunrises, as well as storms and the aurora borealis. I think it is going to have an interesting impact on environmental consciousness and have many educational benefits. We’ve signed a data agreement with an NGO that monitors deforestation in the tropics. So, in addition to creating a commercial venture, we are hoping that a host of ancillary social, environmental, and educational benefits will come out of this.

GT: We hope this will create a whole new way for people around the world to experience what is happening. Imagine that something occurs — a social event, a revolution, a tsunami, or what have you. You have your normal news feed and you have your various social media talking about it. But, if we’re successful, the first thing that people will want to do is come to UrtheCast to see what is going on. Imagine a Google Earth-type interface: you fly in over the area you are interested in, but you have data that is actually recent and it is being updated all the time. So you can see what’s going on and even move back through time to see whether some things have changed. When you are on that location, in addition to the normal news feeds, you will see users uploading information, videos, tweets. Now you are able to experience what is going on from the ground, see the videos from space, see recent data of what’s happening, and how things have changed over time. That kind of experience simply does not exist right now. That’s essentially our vision and we think that it will have a profound impact on how people interact and experience world events using the Web.

ML: How long will you be able to stay focused on one spot and how quickly will you be able to revisit it? How will you decide where to steer the camera and where to dwell?

GT: For the underlying image data, we will use our medium-resolution camera. We are also going to aggregate all kinds of data that are available out there and use state-of-the-art processing and mosaicking techniques to bring it together, so that to the user it looks like one data source. Then we will update the world map reasonably frequently. As we get more data sources, we'll be able to improve our coverage. We are already talking with Roscosmos about the next-generation cameras. They are putting up the new Russian module, so we will be putting up cameras to get even more coverage in the next couple of years.

The revisit time for the video camera depends on the latitude. If you are around the equator, you might get a revisit once every couple of days. So, if you have an event that lasts an hour, there's a good chance that you might not get it. But if it is an event that lasts a few days or a week, there's a good chance that you will get your shot. If you are at more northern latitudes, because of the space station's orbit, there are periods when you can get up to four shots of an area in one day. That's because you can point the camera quite a large distance off nadir — that is, away from looking straight down. So, you'll be able to see updates within a day, which is really unusual; most satellites can't do that because of their orbits. But then there are gaps, when you might not be over a spot for as much as three or four weeks.
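The cross-track reach that off-nadir pointing buys can be estimated with simple geometry. The sketch below uses a flat-Earth approximation, adequate at moderate angles; the specific off-nadir angles are illustrative, not figures from the interview.

```python
import math

# Cross-track ground reach for a camera tilted off nadir, using the
# flat-Earth approximation: reach = altitude * tan(off_nadir_angle).
altitude_km = 400.0  # approximate ISS altitude

for off_nadir_deg in (10, 20, 30, 40):
    reach_km = altitude_km * math.tan(math.radians(off_nadir_deg))
    print(f"{off_nadir_deg:2d} deg off nadir -> ~{reach_km:.0f} km cross-track")
```

A 40-degree tilt, for instance, reaches roughly 335 km to either side of the ground track, which is how a single pass can cover targets well away from the orbit's footprint.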

We are going to develop algorithms to help us prioritize what to look at. For example, there will be paying customers — such as media outlets — that will have the highest priority if they want to get exclusive footage of an event and are prepared to pay a reasonable amount for it. We’ll get over there as soon as we can and take the shot and make it available to them. We will also look at the number of tweets that are being sent about different events. So, we will use all kinds of different schemes to decide where to point the camera.
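A toy version of such a prioritization scheme might rank paying customers above everything else and use social-media volume to break ties. The field names and inputs below are invented for illustration; the interview describes the inputs but not the actual algorithm.

```python
from dataclasses import dataclass

@dataclass
class Target:
    name: str
    paying_customer: bool   # e.g., a media outlet with a footage request
    tweet_volume: int       # tweets per hour mentioning the event

def priority(t: Target) -> tuple:
    # Paying customers first (True sorts above False), then by buzz.
    return (t.paying_customer, t.tweet_volume)

targets = [
    Target("volcano eruption", paying_customer=False, tweet_volume=4200),
    Target("news client request", paying_customer=True, tweet_volume=150),
    Target("river flooding", paying_customer=False, tweet_volume=900),
]

next_shot = max(targets, key=priority)
print(f"Next pointing target: {next_shot.name}")  # -> news client request
```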

ML: What is the division of labor between the partners?

GT: We have four major partners. The first is RSC Energia, which is the prime contractor to the Russian space agency for the space station. They are a huge company — the Boeing of Russia. They build satellites and are the prime contractor for all the Russian space station work. We have a contract with them to install the cameras and certify our hardware. They have a very formal process to make sure that all the interfaces work, and a full mock-up of the space station module on the ground that they use to test the hardware. We also have agreements with Roscosmos. They are contributing many assets on the station — including the high-speed downlink, use of the pointing platform, computers, data storage, and cosmonaut time to install the equipment. Then we have a contract with the Rutherford Appleton Laboratory (RAL) in the U.K. It is a government agency that has flown numerous scientific payloads in space — close to 200 missions over the last 40 years or so — and it has a space division, RAL Space. MDA is partnered with RAL on the space cameras, so they are building out the two telescopes with the detector electronics.

We will be generating a huge amount of data, so we also need a compression engine right there in the cameras, as well as a controller that enables us to control each camera. Inside the station, you need a computer that grabs those data, throws them onto a solid-state drive, manages the timeline, checks the status of the cameras, and gathers all the metadata that you need to process the imagery — including the attitude of the station and the pointing angles of the cameras — formats it, and passes it to the ISS computer to be down-linked to the ground. That computer, all that software, and the data compression units are being provided by MDA.
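The metadata Tyc lists — timing, station attitude, camera pointing angles — is what makes the imagery geolocatable on the ground. A minimal record might look like the sketch below; the exact fields and formats are assumptions for illustration, not MDA's actual design.

```python
from dataclasses import dataclass

@dataclass
class FrameMetadata:
    utc_time: str              # capture time, e.g., ISO 8601
    station_attitude: tuple    # ISS attitude quaternion (q0, q1, q2, q3)
    platform_axis1_deg: float  # pointing platform angle, first axis
    platform_axis2_deg: float  # pointing platform angle, second axis
    frame_id: int              # sequence number within the pass

meta = FrameMetadata("2013-06-01T12:34:56Z", (1.0, 0.0, 0.0, 0.0),
                     12.5, -3.2, 41)
```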

Our fourth partner is a company called Barl, which will be the operator for this project in Russia. They do ground stations and ground segments for some of the Russian missions and for commercial missions as well. Three or four ground stations will be built within Russia, but we’ll also have some outside of Russia. One of them could well be at RAL. Barl will process the data from space to create the raw images. We’ll do additional processing here at UrtheCast to prepare the data for the Web. So we’re also working with a second tier of contractors and consultants to implement this.

ML: How will you give the public access to your archived data?

WL: The public will interact through our Web site or through an application on their mobile devices. It will all be free. You are going to see what looks like a real-time feed from the station. It will be derived from our medium-resolution camera and it will just look like the world is sliding by. We call it near-real time because we get the data in lumps as we fly over the ground and then down-link it in pieces as we pass over the different ground stations. Then we assemble it and create this feed. It will look like a live video feed, but it will be several hours old. That's just one of the features on the Web site that will make things interesting. You will have a Google Earth-type experience, including map layers, but the data will be very current. We will be storing layers of this data over time, so you can go back in time and see how things have changed. Then, of course, you have all the social media layers that you put on top of all that. A simple way to summarize it is as a dynamic version of Google Earth — refreshed and updated frequently, with videos and YouTube-type search functionality (you will be able to search videos by location, by type, by interest category, and so on) — with a social media layer. The data available on the site for free will be compressed, though it will still look very good. The raw footage, which we might try to sell to more institutional-type customers, will have much higher information content. Any user can become a data customer just by requesting to buy some data.
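The "near-real time" behavior Larson describes — imagery captured continuously but downlinked in lumps per ground-station pass — amounts to reassembling chunks in capture order and replaying them a few hours behind the present. A minimal sketch, with invented timestamps:

```python
from datetime import datetime

# Each chunk is a (capture_time, downlink_time) pair; chunks can arrive
# out of capture order depending on which ground station receives them.
chunks = [
    (datetime(2013, 6, 1, 10, 0),  datetime(2013, 6, 1, 12, 40)),
    (datetime(2013, 6, 1, 9, 30),  datetime(2013, 6, 1, 12, 41)),
    (datetime(2013, 6, 1, 10, 45), datetime(2013, 6, 1, 13, 5)),
]

# Assemble the feed in capture order, regardless of arrival order.
feed = sorted(chunks, key=lambda c: c[0])
latencies = [downlinked - captured for captured, downlinked in feed]
print(f"Feed spans {feed[0][0]:%H:%M}-{feed[-1][0]:%H:%M} UTC; "
      f"worst-case latency {max(latencies)}")
```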

ML: Will any government be able to exercise shutter control? How will you handle political and commercial pressures to refrain from covering certain areas and/or events?

WL: We are a company registered and incorporated in Canada, so we will be subject to Canadian law on shutter control. Several years ago, Parliament passed an act concerning the use and operation of remote sensing satellite systems, which basically predated the RADARSAT-2 radar satellite. We will be 100 percent subject to, and compliant with, that law and the regulations that flow from it. We'll be very good citizens and stay well inside the law. More generally, the approach we are taking is to be in accord with the United Nations principles on remote sensing, which are the framework for international transparency in this sector. Are we going to voluntarily bow to particular corporate or government interests? Probably not. We want to be ethical and to act properly, within the law and within the regulatory framework that governs us, but the general thrust of what we are doing is to promote transparency and awareness. We are going to have to feel our way through this, obviously, and we don't want to be naïve or simplistic, but I think that's in general the approach we want to take.
