Hurricane Harvey and Emergency Technology Ops in 2017

DISCLAIMER: The comments, suggestions, and viewpoints on the following post are my own and do not reflect the viewpoints of my agency nor of the state of Texas. Although I strive for accuracy above all else, I would be remiss if I did not mention that some items discussed may be limited to my point of view and opinions alone.

If you’re not into the genre of Emergency Management, then this post is probably not for you. If you’re a professed or closet ICS purist, then this post is also probably not for you. Who is this post for? Quite a few audiences, honestly: local jurisdictions, and responders at all levels, from local to regional to state and even up to the Federal level. Elected officials are covered here too. Most importantly, these are ideas based on what my teams and I experienced, and on interactions with others as well. I am sure there are things that will not be touched on with the depth and detail they deserve, but that is more likely due to my lack of specific knowledge in those areas. In those cases, I will detail as much as I can and leave the rest to the real subject matter experts out there. I intend to break this post down by the solutions that were used: what worked well, what could have worked better, and what we intend to do in the future if a change is warranted.

Hurricane Harvey (August 17, 2017 – September 2, 2017)

Image of Hurricane Harvey

Why focus on Hurricane Harvey? That one is easy: tied with Katrina as the costliest storm in US history, Harvey was a Category 4 hurricane that impacted the City of Houston and the surrounding areas. Due to where it made landfall and the amount of water it subsequently dropped on Texas, there are many different things to discuss. My expertise is within the realm of Critical Information Systems (CIS). For those unfamiliar with the term, CIS is the technology focused on response and recovery at the strategic (State Operations Center), command (regional Texas Highway Patrol Disaster District), and tactical (boots on the ground) levels. CIS focuses on getting information flowing in a secured and known way, both up and down, from locals to the state and ultimately, in some cases, to the Federal government. The benefit of a dedicated CIS section is that, as operations folks doing emergency management daily, we see and can help guide decisions in a way that vendors or even a traditional IT department cannot (this is especially true when dealing with other jurisdictions' IT departments).

120 hours. 5 days. That is the preparation window our usual exercises give Texas. The National Weather Service and National Hurricane Center begin tracking storms off the coast of Africa as they make their way across the Atlantic, usually disorganized and weak. As a storm picks up, the models will project its direction and gain confidence about what is going to occur. The system that would become Harvey was no different, and as we tracked its movement toward the Yucatan Peninsula, we felt good that it was going to be another storm that made landfall south of us in Mexico. Things were going well: we held calls, and things looked good overall.

Then suddenly all of that changed. The new model runs shifted the path, converged in agreement, and left us facing a much more aggressive and abbreviated timeline. Our plans, usually set for 5 days, were now being adjusted rapidly downward to account for location and response periods. It was around this time that we began to see how the new technology systems and methodologies would perform. The last large hurricane season to test Texas had been back in 2008 with Ike, and a lot had changed in the 9 years since. Back then, the large jurisdictions in the state had led the effort to tie things together. The Texas Division of Emergency Management (TDEM) saw this and adopted similar methods, including bringing CIS as a whole up to the state level. It was time to see how that would pay off.

72 hours. 3 days. You never know how long things actually take until you try to do them during a real event, and the scale this time around was staggering. The first systems to be tested were our State of Texas Assistance Request (STAR) and Emergency Tracking Network (ETN) processes, followed closely by the Disaster Summary Outlines (DSOs). All of these systems were created and are maintained by my CIS section. They were about to have a large audience of users on over 30 systems tied together with ours, including the Federal Emergency Management Agency's (FEMA's) system. As a sidebar, this was the first time FEMA had ever connected to a state in this manner, and that connection was about to be tested as well.

The State of Texas Assistance Request (STAR)

Image of a STAR form

A little context here: when a jurisdiction needs help, it creates a STAR and pushes it up from the affected city and county to the DDC and then, if need be, to the state. There are many means of communicating needs, and previous methods included phone calls, faxes, emails, paper forms, and face-to-face meetings. We chose an automated method we call the STAR, which generates a unique number that can be used to track what a request is, where it is currently being worked, who is working it, and its final resolution, including whether it needs to be returned to the sender, which is called demobilizing. First, what I feel went right with the STAR: requests had a unique tracking number that everyone from the requester on up could find and look into. Information could be added, and approvals and routing completed the picture that the right eyes had been put on each and every request. We were able to add responses (called Actions) to a STAR so that requests were filled, and later on while activated we published a method to retrieve outstanding items via a simplified demob process. Total requests for Harvey alone exceeded 8,000, and we did not ‘lose’ any of them, which had happened in the past when other methods failed, such as emails (delivery rejected, mailbox full), phone calls (on another call, voicemail full), or faxes (paper out, toner out, who even looks at faxes anymore??). The things we will do better are training and ease of request. It became clear that the STAR had created a few logjam areas, and giving those areas additional support, thanks to the demographic information the STAR provides, should resolve that in the future.
The current upgrades will give locals a request process at their level that is not the STAR (it lives and is filled locally only), as well as resource management (what’s downrange, who signed for it, and when is it ready to come home for demob?), resource tracking (it’s en route to the requester, then it arrives, now it’s ready to come home, now it has arrived back home), and action tracking (each action of every request adds to the burn rate in some manner, and having that data update automatically in real time is a large benefit too). One final bit: we will work with FEMA to tie our STAR process in with their RRF (request) process.
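The request lifecycle described above — a unique tracking number, routing, Actions added as responses, and a final demob — can be sketched roughly as follows. This is a minimal illustration only: the state names, fields, and numbering scheme are my assumptions for the sketch, not the real STAR system's schema.

```python
from dataclasses import dataclass, field
from enum import Enum, auto
from itertools import count

class Status(Enum):
    SUBMITTED = auto()    # created by the requesting jurisdiction
    ROUTED = auto()       # forwarded to the DDC or the state
    FILLED = auto()       # resource delivered to the requester
    DEMOBILIZED = auto()  # resource returned home

# Hypothetical stand-in for the unique tracking number generator.
_star_numbers = count(1)

@dataclass
class StarRequest:
    requester: str
    description: str
    number: int = field(default_factory=lambda: next(_star_numbers))
    status: Status = Status.SUBMITTED
    actions: list = field(default_factory=list)

    def add_action(self, note: str, new_status: Status) -> None:
        """Record a response (an 'Action') and advance the request."""
        self.actions.append((new_status.name, note))
        self.status = new_status

# One request walked through its full lifecycle.
req = StarRequest("Example County", "High-water vehicles")
req.add_action("Forwarded to the DDC", Status.ROUTED)
req.add_action("Vehicles delivered", Status.FILLED)
req.add_action("Vehicles returned via demob", Status.DEMOBILIZED)
```

Because every request carries a number and a status from creation to demob, nothing can be ‘lost’ the way an email or fax can be — the worst case is a request sitting visibly in one status too long, which is exactly the logjam signal described above.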

Emergency Tracking Network (ETN)

During any sort of incident there is always a need to know where citizens awaiting evacuation are to be picked up, how many of them there are, and which vehicles transport them. To answer those questions in Texas, the Emergency Tracking Network (ETN) was created. In a nutshell, ETN takes the full process of evacuation and repopulation and puts it into 7 steps. Those currently are embarkation (collecting evacuees into areas for ease of transport), transport, reception into sheltering jurisdictions, the actual sheltering of survivors, re-embarkation for home, transport home, and repopulation back into the affected jurisdictions. Why is this important? Many reasons, from letting everyone know how many people are evacuating via state transport, to giving sheltering communities an idea of headcount for feeding needs, to tracking the numbers of self-evacuees who at some point also require transport home. The most-used feature during Harvey was knowing who had been evacuated and to where. Without naming the jurisdiction, one large coastal community that had trained with ETN prior to Harvey had citizens in both ETN shelters and non-ETN shelters. When that community was ready to receive certain areas of the city back, it was able to find those evacuees in ETN shelters (via zip code) in under 20 minutes. Non-ETN shelters were never able to provide that information, which meant that survivors in non-ETN shelters may never have known that their neighborhood was ready to receive them again. This is why I believe that solutions such as ETN are the path forward for all evacuees: they can not only save time for the survivors directly affected by an evacuation, but also save money by not paying to shelter those same evacuees, or put them in hotels, longer than need be.
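The seven ETN steps and the zip-code lookup described above can be sketched like this. All record fields, shelter names, and zip codes are made-up examples for illustration, not the real ETN data model.

```python
# The seven ETN steps, in order, as described in the post.
ETN_STEPS = [
    "embarkation", "transport", "reception", "sheltering",
    "re-embarkation", "transport_home", "repopulation",
]

# Hypothetical evacuee records; field names are assumptions.
evacuees = [
    {"id": "E-001", "home_zip": "77550", "step": "sheltering", "shelter": "Shelter-A"},
    {"id": "E-002", "home_zip": "77550", "step": "sheltering", "shelter": "Shelter-B"},
    {"id": "E-003", "home_zip": "77551", "step": "transport",  "shelter": None},
]

def sheltered_from_zip(records, zip_code):
    """Find evacuees from a given zip code who are currently in shelters,
    so officials know whom to notify when that zip is cleared to return."""
    return [r for r in records
            if r["home_zip"] == zip_code and r["step"] == "sheltering"]

cleared = sheltered_from_zip(evacuees, "77550")
for r in cleared:
    print(r["id"], "can return home; notify", r["shelter"])
```

The point of the sketch is the query itself: because every ETN evacuee carries a home zip code and a current step, the "who can come home, and which shelter holds them" question is a filter, not a days-long phone tree — which is the under-20-minutes result the coastal community saw.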

State of Texas Emergency Assistance Registry (STEAR)

If you call 211 in Texas and ask to register for emergency or otherwise supported evacuation, you are using the STEAR registry, which is also maintained by my CIS section. I see this as “pre-registration” for any incident that would end up using the ETN mentioned previously, and I want to be sure that at the end of the day local decision makers will know where everyone who needs assistance getting out of harm’s way is located, and can determine how best to manage their resources to get everyone to safety. Tying STEAR and ETN together will help not only those decision makers but also the ESF9 Search and Rescue (SAR) components of any incident of any size. I am sure that an additional layer to this will be rescue requests, but for this part I focused only on what we, the state, control.
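A rough sketch of what tying STEAR to ETN could look like: compare the pre-registration list against ETN embarkation records to surface registrants still awaiting assisted evacuation, which is exactly the list SAR crews would want. Every name, field, and structure here is a hypothetical illustration, not either system's actual schema.

```python
# Hypothetical STEAR pre-registrations (people who asked for
# assisted evacuation via 211 before the incident).
stear_registry = [
    {"name": "Registrant A", "address": "101 Bay St", "needs": "wheelchair"},
    {"name": "Registrant B", "address": "5 Shore Rd", "needs": "oxygen"},
]

# Addresses already picked up and embarked in ETN.
etn_embarked = {"101 Bay St"}

def awaiting_pickup(registry, embarked_addresses):
    """Registrants with no matching ETN embarkation record yet —
    the working list for transport crews and ESF9 SAR."""
    return [r for r in registry
            if r["address"] not in embarked_addresses]

still_waiting = awaiting_pickup(stear_registry, etn_embarked)
for person in still_waiting:
    print(person["name"], "still needs assisted evacuation:", person["needs"])
```

The value of the join is that the list shrinks in real time as ETN embarkations come in, so decision makers always see who remains rather than reconciling two lists by hand after the fact.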

Disaster Summary Outline (DSO) + Preliminary Damage Assessments (PDAs)

Initial and follow-up assessments need to be more integrated, so that they occur with the end goal in mind and with a clear view of how they help all levels of government as well as our citizens. My focus will be to take DSO data and push it into PDAs, so that the flow from generalized affected numbers into actual location-specific data is smoother. The overall affected total will help meet thresholds for the state, and the local affected numbers split out of that total will help meet thresholds for those counties.
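The roll-up described above — county-level DSO numbers feeding both local and statewide thresholds — might look something like this. The threshold values, counties, and field names are placeholders for illustration, not official figures.

```python
# Hypothetical county DSO damage counts (structures).
dso_reports = {
    "County A": {"destroyed": 300, "major": 1200},
    "County B": {"destroyed": 150, "major": 400},
}

# Placeholder thresholds, NOT real declaration criteria.
COUNTY_THRESHOLD = 100
STATE_THRESHOLD = 1500

def affected(counts):
    """Total affected structures for one county's DSO."""
    return counts["destroyed"] + counts["major"]

# Statewide total from the county DSOs...
state_total = sum(affected(c) for c in dso_reports.values())
# ...and the county-level split out of that same data.
qualifying_counties = [name for name, c in dso_reports.items()
                       if affected(c) >= COUNTY_THRESHOLD]

print("state meets threshold:", state_total >= STATE_THRESHOLD)
print("counties meeting threshold:", qualifying_counties)
```

The design point is that one dataset answers both questions: the state total and the per-county splits come from the same DSO records, so pushing that data into the PDA process avoids re-collecting it location by location.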

Mapping, and all things Geospatial Information Systems (GIS)

Geospatial Information Systems (GIS) mapping technology is utilized in Texas to support all aspects of emergency management. TDEM, several of the State Emergency Management Council agencies, and many local jurisdictions use GIS to identify and map the locations of people, critical infrastructure, and property at risk from disasters. Although GIS has been used to great effect everywhere in the world and in every recent response of any size, there needs to be greater adoption of this data at all levels. Too often, data sits siloed at the federal, state, regional, or local level, or is sequestered away by agencies that would share it if there were a means and a place to do so. My idea on how best to achieve this will be tackled in the final area of this article, Looking to the Future.

Rescue Requests / Requests for Help by the public

This is a touchy and tricky subject. I will not dwell on it too much because many things worked well here, but I will say that the fact that so many people stepped up on all sides to help was excellent. People use social media daily and will turn to it when traditional means are not available; that tells me that, like water flowing down a hill, we cannot block it, but we need to understand it and help guide it. Exceptional effort by all involved. My quick comment is that there needs to be a “CAD”-like (computer-aided dispatch) intake of requests for help and dispatch of those who can help.
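A “CAD”-like intake could be as simple as a priority queue that accepts requests from any channel (911 overflow, social media, field reports) and hands the most urgent one to the next available responder. This is a bare-bones sketch; every structure and priority value here is hypothetical.

```python
import heapq

class Intake:
    """Minimal dispatch queue: most urgent request out first."""

    def __init__(self):
        self._queue = []
        self._seq = 0  # tie-breaker so equal priorities stay first-in, first-out

    def submit(self, priority: int, request: str) -> None:
        """Lower number = more urgent (e.g. 1 = immediate life safety)."""
        heapq.heappush(self._queue, (priority, self._seq, request))
        self._seq += 1

    def dispatch(self) -> str:
        """Hand the most urgent outstanding request to a responder."""
        return heapq.heappop(self._queue)[2]

cad = Intake()
cad.submit(2, "Family on roof, four adults")        # e.g. from social media
cad.submit(1, "Medical emergency, rising water")    # e.g. from 911 overflow
cad.submit(3, "Stranded vehicle, occupants safe")

first = cad.dispatch()
print("dispatching:", first)
```

The point is channel-agnostic intake: whether a plea arrives by phone, tweet, or radio, it lands in one ordered queue instead of whichever inbox happens to be watched, which is the "guide the water downhill" idea above.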

Overall use of mobile apps

This one is a natural progression of technology becoming more ubiquitous and second nature to responders, and of solution providers and first responders looking for ways to complete their missions faster and with more accuracy. Add to this the fact that crowd-sourcing is usually valid and needed, and that data needs to be collated into the “CAD-like” method discussed above and responded to by everyone who can provide support, especially those who can give the most immediate help. I believe this is the start of the next steps for everyone in the emergency management community in the 21st century.

Looking to the Future

Steps are in place to get better. The idea that we need a 5th section in ICS (my prior, now-gone post called it the “FLOP-IT,” with IT being Technology) is possibly coming to fruition with a draft-named “Communications Section” that deals with most of the needs of efficient and effective response in the 21st century. Having the right vision at a high enough level at the federal, state, and tribal levels, while not overlooking the needs at the local level, is key and is the logical next discussion. Integrating so that on a daily basis everyone from citizens to first responders, and from emergency management staff to elected officials, has the same information and data is a good idea, I think, and not getting married to any one technology path (CIMS over GIS over COMMS) is key too. There are many things that the next 1, 2, and 5 years will bring, and my hope is to take this large incident, as well as Irma and Maria, and use them to focus on how we all can do things smarter, not harder.

Thank you for reading this. If you have comments, questions or other views I welcome responses.

Finally Have My Site Back Up

Just wanting to let everyone know that my blog about Emergency Management Technology and those sorts of things is back up. Long story short: a conflict within the database, stemming from an import many, many moons ago, precluded an easy fix when I added a couple of cool plugins back in September. I ended up just starting fresh. I am posting this to see if it hits my FB, Twitter, LinkedIn, and other feeds automatically.

and now back to the regularly scheduled blog …

After starting my site back in 1998, my wonderful database of older posts had some bad entries that I didn’t think about when upgrading that database, since I had migrated to WordPress years ago.

Yeah, big mistake. Indeed, Smithers, it mucked things up, and I am too lazy to fix it. So as it stands, you get this new, empty blog that I will fill up with stuff in the future.

But for now, nothing. 🙂