The Next Generation Emergency Operations Center and Other Carnegie Mellon University Silicon Valley Projects

October 10, 2011

Directions Magazine (DM): Please describe the "next generation EOC." What types of mapping and communication technologies are being incorporated into this operation? How is the information gathered by these technologies communicated to field personnel?

Art Botterell (AB): An emergency operations center is basically an information crossroads in a fast-moving situation. Traditional EOCs tend to be fixed locations with lots of protection... still a lot of the "Cold War bomb shelter" about them. The next generation EOC isn't a building so much as a system. These days the EOC function is frequently constructed in the field, on the fly, out of a collection of mobile units and temporary shelters. So it's very much a custom assembly of plug-and-play elements that support command, operations, plans, logistics and finance. Mobility, flexibility and sustainability in the field are key... and those are the practical benefits of what we often refer to as "green" technologies.

So CMUSV’s next-gen EOC testbed is a mobile unit, solar powered, with a variety of network-based comms systems that can be tailored to the mission and the participating actors. Our phones are VoIP; we monitor news media over a network server; we plug in radio communications on a case-by-case basis using Internet Protocol interoperability tools and connections to communications interoperability vans owned by the California Emergency Management Agency (CalEMA). Our display systems are open source, combining public applications like Google Earth with onboard datasets served through GeoServer and the OpenLayers JavaScript library.
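
As a rough illustration of how a display client might pull a base map from that onboard stack, the short Python sketch below issues a standard WMS GetMap request against a GeoServer endpoint. The server URL, layer name and bounding box are hypothetical placeholders, not the testbed's actual configuration.

# Minimal sketch: request one base-map image from a local GeoServer WMS endpoint.
# The endpoint URL and layer name below are hypothetical placeholders.
import requests

GEOSERVER_WMS = "http://localhost:8080/geoserver/wms"   # hypothetical onboard server

params = {
    "service": "WMS",
    "version": "1.1.1",
    "request": "GetMap",
    "layers": "eoc:base_imagery",          # hypothetical layer name
    "styles": "",
    "srs": "EPSG:4326",
    "bbox": "-122.2,37.3,-121.8,37.5",     # lon/lat box around Silicon Valley
    "width": "1024",
    "height": "512",
    "format": "image/png",
}

response = requests.get(GEOSERVER_WMS, params=params, timeout=30)
response.raise_for_status()

with open("basemap.png", "wb") as f:
    f.write(response.content)
print("Saved base map tile:", len(response.content), "bytes")

A browser client built on OpenLayers issues equivalent GetMap requests itself when it renders those same layers on the huddle displays.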

Over those base maps we display updated and real-time information using KML and assorted homegrown Web applications. We display the results on remote-controlled monitors at our breakout working areas (we call them "huddles") and on a large-screen display system using an array of low-power-consumption pico projectors in our briefing area. Right now we're working on using Microsoft Kinect 3D sensors to evaluate the usefulness of "Minority Report" style gestural interfaces. And we use webcams, screen sharing and DHS UICDS middleware to make those data available to folks in other vehicles and on portable tablet viewers in the field and at a number of Silicon Valley fixed EOCs over a regional wireless system called the Santa Clara Emergency Wireless Network.
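
To show the flavor of that KML overlay workflow, here is a minimal Python sketch that wraps a georectified image product in a KML GroundOverlay, which Google Earth or any other KML viewer can drape over the base map. The image file name and bounding box are illustrative assumptions.

# Minimal sketch: wrap a georectified image in a KML GroundOverlay for display
# in Google Earth. The file name and bounding box are hypothetical placeholders.
import xml.etree.ElementTree as ET

KML_NS = "http://www.opengis.net/kml/2.2"
ET.register_namespace("", KML_NS)

def ground_overlay_kml(name, image_href, north, south, east, west):
    """Build a KML document containing a single GroundOverlay."""
    kml = ET.Element(f"{{{KML_NS}}}kml")
    doc = ET.SubElement(kml, f"{{{KML_NS}}}Document")
    overlay = ET.SubElement(doc, f"{{{KML_NS}}}GroundOverlay")
    ET.SubElement(overlay, f"{{{KML_NS}}}name").text = name
    icon = ET.SubElement(overlay, f"{{{KML_NS}}}Icon")
    ET.SubElement(icon, f"{{{KML_NS}}}href").text = image_href
    box = ET.SubElement(overlay, f"{{{KML_NS}}}LatLonBox")
    for tag, value in (("north", north), ("south", south), ("east", east), ("west", west)):
        ET.SubElement(box, f"{{{KML_NS}}}" + tag).text = str(value)
    return '<?xml version="1.0" encoding="UTF-8"?>\n' + ET.tostring(kml, encoding="unicode")

kml_text = ground_overlay_kml(
    name="Latest fire overlay",
    image_href="fire_overlay.png",              # hypothetical image product
    north=37.55, south=37.30, east=-121.80, west=-122.20,
)
with open("fire_overlay.kml", "w") as f:
    f.write(kml_text)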

DM: Can you describe the spatial resolution of the imagery and video (full motion video or FMV) used to develop a common operating picture? How is FMV technology integrated with geographic information systems?

AB: This is something the NASA folks can address in greater detail. Right now the images they're getting are multi-band (visual and several bands of infrared) still images, with the emphasis on automated analysis of those image signatures rather than on depicting motion. NASA's done a lot of interesting work on identifying a number of disaster phenomena that way, not just fires but also things like natural gas leaks. The emphasis isn't so much on capturing real-time motion as on detecting things that can elude simple "eyeball" analysis. The real trick, of course, is being able to georectify the imagery in real time, which they do on the aerial platform, and then integrate it rapidly into a presentation that's useful for non-technical decision makers at command posts and EOCs on the ground.


DM: Can you explain where the analysis occurs and what is "sent back" to the EOC? The data are captured by sensors on the aircraft. Where is the processing done? On the aircraft? On servers in the cloud? What exactly is "sent back" to the EOC, the raw data or just a visualization?

Vince Ambrosia (VA): The real-time common operating picture we are using is our Collaborative Decision Environment (CDE), built on a Google Earth base. In the CDE we serve up and mirror other sites’ data, creating a "mash-up" of pertinent data services from weather satellites, other NASA satellite data, other airborne assets, ground weather stations, etc. We chose Google Earth because it is ubiquitous and most everyone already knows how to use it... you can't walk into an incident command center and try to convince someone that they should use your new-fangled visualization tool! We also provide all our data in geospatially relevant formats... all are Open Geospatial Consortium (OGC)-compliant and can be readily ingested into other Web map services (Bing, Google Maps, etc.).
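
One way to picture that "mash-up" is a single KML file of NetworkLinks that points Google Earth at several independent services and re-polls them on an interval. The Python sketch below builds such a file; all feed URLs are hypothetical placeholders, not the actual CDE sources.

# Sketch of the mash-up idea: one KML document of NetworkLinks that aggregates
# several independent data feeds and refreshes them periodically.
# All URLs below are hypothetical placeholders.

FEEDS = {
    "Airborne fire detections": "http://example.org/ams/hotspots.kml",
    "Satellite weather imagery": "http://example.org/goes/latest.kml",
    "Ground weather stations": "http://example.org/raws/stations.kml",
}

links = "\n".join(
    f"""    <NetworkLink>
      <name>{name}</name>
      <Link>
        <href>{href}</href>
        <refreshMode>onInterval</refreshMode>
        <refreshInterval>300</refreshInterval>  <!-- re-poll every 5 minutes -->
      </Link>
    </NetworkLink>"""
    for name, href in FEEDS.items()
)

kml = f"""<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Document>
    <name>Collaborative Decision Environment feeds</name>
{links}
  </Document>
</kml>"""

with open("cde_mashup.kml", "w") as f:
    f.write(kml)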

The wildfire information we derive comes from a multispectral line scanner instrument. The scanner, known as the Autonomous Modular Scanner (AMS) - Wildfire instrument, has 12 discrete spectral channels, from the visible wavelengths through the reflected infrared, middle infrared and into the thermal infrared regions. In other words, we can finely discriminate reflected and emitted light energy and thermal energy in specific bands or wavelengths.

The sensor operates by collecting the 12 simultaneous bands of spectral data over the region where the acquiring aircraft platform is operating. It essentially "paints" a picture of earth-surface phenomena in 12 wavelengths, building a long strip image that can be spectrally broken out into different wavelengths. This is not an FMV system. While the sensor is recording the spectral information on board, we are also autonomously selecting "frames" of the data for real-time on-board processing to derive specific image information and relay that information to the ground via satellite uplink/downlink communications.
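
A minimal numpy sketch of that line-scanner idea: each scanline delivers one row of 12-band samples, stacking scanlines builds the long strip image, and fixed-size frames are cut from the strip for onboard processing. The array shapes and frame size are illustrative assumptions, not AMS specifications.

# Minimal sketch of assembling line-scanner output into a band "cube" and cutting
# frames for onboard processing. Shapes and frame size are illustrative only.
import numpy as np

SAMPLES_PER_LINE = 716     # hypothetical cross-track pixels per scanline
NUM_BANDS = 12             # visible through thermal-infrared channels
FRAME_LINES = 512          # hypothetical number of scanlines per processed frame

def assemble_strip(scanlines):
    """Stack scanlines (each samples x bands) into a (lines, samples, bands) strip."""
    return np.stack(scanlines, axis=0)

def extract_frame(strip, frame_index):
    """Cut one along-track frame out of the growing strip image."""
    start = frame_index * FRAME_LINES
    return strip[start:start + FRAME_LINES]

# Simulated data stream: 1024 scanlines of random radiance values.
scanlines = [np.random.rand(SAMPLES_PER_LINE, NUM_BANDS) for _ in range(1024)]
strip = assemble_strip(scanlines)

frame = extract_frame(strip, frame_index=0)
thermal_band = frame[:, :, 11]            # single-band image cut from the frame
print(strip.shape, frame.shape, thermal_band.shape)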

Besides developing three-channel spectral combination images that highlight certain fire characteristics (in this mission example), we also have the capability of running sophisticated spectral band algorithms on the data to derive or "tease out" certain fire-related characteristics from the scene. One of those algorithms is a fire/hot-spot detection algorithm that develops an image shapefile defining the spectral pixels that exceed certain pre-programmed temperature thresholds. We use a sophisticated algorithm to reduce false positives and provide a real-time feed of that information to incident teams on the ground. We have also implemented a number of other spectral processing algorithms that allow us to change collection criteria in mid-mission and focus on other elements of analysis and data delivery. Currently we have implemented some post-fire burn severity/assessment spectral algorithms to derive products that are useful to the post-fire assessment teams for rehabilitation of affected areas.
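
The thresholding idea can be sketched in a few lines of numpy, assuming brightness temperatures have already been derived for a mid-infrared and a thermal-infrared band. The threshold values and the simple false-positive test below are illustrative, not NASA's actual algorithm.

# Sketch of hot-spot detection: flag pixels whose mid-infrared brightness temperature
# exceeds a pre-programmed threshold, then reject likely false positives with a
# mid-IR vs. thermal-IR difference test. Thresholds and band choices are illustrative.
import numpy as np

MIR_THRESHOLD_K = 360.0     # hypothetical mid-infrared trigger temperature (kelvin)
DIFF_THRESHOLD_K = 20.0     # hypothetical mid-IR minus thermal-IR margin (kelvin)

def detect_hotspots(mir_temp, tir_temp):
    """Return a boolean mask of candidate fire pixels from two brightness-temperature bands."""
    hot = mir_temp > MIR_THRESHOLD_K
    # Fires are far brighter in the mid-IR than in the thermal IR; reflective false
    # positives (water, bare soil, sun glint) generally are not.
    return hot & ((mir_temp - tir_temp) > DIFF_THRESHOLD_K)

# Simulated brightness-temperature frames (kelvin).
mir = 300 + 80 * np.random.rand(512, 716)
tir = 295 + 20 * np.random.rand(512, 716)

mask = detect_hotspots(mir, tir)
rows, cols = np.nonzero(mask)
print(f"{mask.sum()} candidate fire pixels; first few at {list(zip(rows[:3], cols[:3]))}")

In the real pipeline the resulting mask is then turned into the vector product the incident teams receive, such as the shapefile of detections described above.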

The AMS sensor also has affiliated with it a precision inertial measurement system known as the Applanix POS-AV. That instrument provides high-resolution measurements of sensor pointing geometry and aircraft attitude, which are used to develop a high-precision geometric correction of the sensor data. We have taken those POS-AV data and done something unique to airborne data collection: we have developed autonomous processing tools on an on-board computer that perform all the image georectification on the fly. Most processing had previously been done on the ground post-mission.
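
A simplified sketch of how those attitude measurements enter the georectification: build a rotation from roll, pitch and heading and use it to carry each pixel's view vector from the sensor (body) frame into a local north-east-down frame. The angle conventions below are a common aerospace simplification, not the POS-AV processing chain itself.

# Sketch of the attitude step: rotate a pixel's view vector from the aircraft body
# frame into a local north-east-down (NED) frame using roll, pitch and heading.
# Conventions are a simplification, not the actual POS-AV workflow.
import numpy as np

def rotation_ned_from_body(roll, pitch, heading):
    """3-2-1 (heading-pitch-roll) rotation from aircraft body frame to local NED frame."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    ch, sh = np.cos(heading), np.sin(heading)
    r_roll = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    r_pitch = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    r_head = np.array([[ch, -sh, 0], [sh, ch, 0], [0, 0, 1]])
    return r_head @ r_pitch @ r_roll

def pixel_view_vector(scan_angle):
    """Unit view vector for a cross-track scan angle in the body frame (x forward, z down)."""
    return np.array([0.0, np.sin(scan_angle), np.cos(scan_angle)])

# Example: a pixel 10 degrees off nadir, with a gently banked aircraft heading northeast.
roll, pitch, heading = np.radians([2.0, -1.5, 45.0])
v_body = pixel_view_vector(np.radians(10.0))
v_ned = rotation_ned_from_body(roll, pitch, heading) @ v_body
print("View vector in local NED coordinates:", np.round(v_ned, 4))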

This on-board georectification process also ingests earth surface topography data from the Shuttle Radar Topography Mission (SRTM) to derive a complete, real-time, fit-to-terrain "drape" of the dataset that can be ingested into any GIS or Web-based data visualization package (such as Google Earth). The processed image data, still on the aircraft, are then sent autonomously through an Inmarsat satellite communications antenna, up to the satellite and directly to any "server" address on earth. We send all data to NASA-Ames for archiving and real-time redistribution to incident teams. All this complicated processing takes about four minutes from sensor collection, through all on-board processing, to delivery of a Level II product at an incident command center for use by wildfire mitigation teams.
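
Continuing the sketch above, the terrain "drape" amounts to intersecting each rotated view ray with the elevation surface. The highly simplified Python below marches along the ray until it meets a stand-in elevation function; a real implementation would sample the SRTM grid and convert the result to latitude and longitude. The flat-earth local frame, synthetic terrain and all numbers are illustrative only.

# Highly simplified terrain-intersection sketch: step along a pixel's view ray in a
# local NED frame until it reaches the terrain surface. The elevation function is a
# synthetic stand-in for an SRTM lookup.
import numpy as np

def terrain_height(north_m, east_m):
    """Stand-in for an SRTM lookup: elevation (meters) at a local north/east offset."""
    return 300.0 + 50.0 * np.sin(north_m / 2000.0) * np.cos(east_m / 2000.0)

def intersect_ray_with_terrain(aircraft_ned, view_ned, step_m=10.0, max_range_m=20000.0):
    """Step along the view ray until it drops to or below the terrain surface."""
    for r in np.arange(0.0, max_range_m, step_m):
        p = aircraft_ned + r * view_ned              # point along the ray (north, east, down)
        ground_down = -terrain_height(p[0], p[1])    # terrain surface in NED "down" coordinates
        if p[2] >= ground_down:                      # ray has reached or passed the ground
            return p
    return None

# Aircraft 3000 m above the local origin, looking 10 degrees off nadir toward the east.
aircraft = np.array([0.0, 0.0, -3000.0])
view = np.array([0.0, np.sin(np.radians(10.0)), np.cos(np.radians(10.0))])

ground_point = intersect_ray_with_terrain(aircraft, view)
print("Ground intersection (north m, east m, down m):", np.round(ground_point, 1))

From those per-pixel ground coordinates each scanline can be warped onto the terrain and packaged, for example as a GeoTIFF or a KML GroundOverlay, before being handed to the satellite link.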

We are looking to expand the described capabilities by engaging other disaster partners to develop other pertinent spectral datasets for on-board processing and delivery, such as real-time flood mapping. The capabilities are limitless.

About Carnegie Mellon’s DMI

The Disaster Management Initiative was established at CMUSV in 2009 with a mission to provide open and interoperable, next-generation technical solutions for all-hazard multi-jurisdictional disasters.
