Resilience Through Impact Modeling
February 18th 2021
by Peter Watson, CTO
[Figure: Before and After Nighttime Lights Images of Texas During February 2021 Outage Event]
[Figure: The Resilience Curve]
Dynamic Models for Granular Risk Management
February 4th 2021
by Peter Watson, CTO
The Promise (and Realities) of AI / ML
January 20th 2021
by Vijay Jayachandran, CEO
Artificial Intelligence has been getting a bad rap of late, with numerous opinion pieces and articles describing how it has struggled to live up to the hype. Arguments have centered around computational cost, lack of high-quality data, and the difficulty in getting past the high nineties in percent accuracy, all resulting in the continued need to have humans in the loop.
None of this is new for those of us who have been doing simulation and optimization for some time. When I started my career, I had to contend with naysayers who liked to poke holes in my models and complain about their accuracy. For me (and other believers), it was never about achieving a perfect match between the model’s prediction and the ground truth. Models were simply a means to get new insights that could take us in the general direction of goodness.
All of this brings us to a philosophical question: why do we use models? In my opinion, we use models to explore complex phenomena that are too difficult to wrap our heads around.
Let’s be clear – the human brain is a remarkable evolutionary creation capable of many things that we cannot possibly model (e.g., empathetic and ethical decision making). However, there are certain things we can do with mathematical models that the human brain cannot do. A good example is weather forecasts, which come from large and complex computational models that consider a huge number of atmospheric characteristics. While we often complain about their accuracy, we also appreciate that they are much better than what we would predict without their help.
In the same vein, AI & ML are simply tools for building complex (and sometimes non-linear) models that consider large amounts of information. They are most potent in applications where their pattern finding power significantly exceeds human capability. If we adjust our attitude and expectations, we can leverage their power to bring about all sorts of tangible outcomes for humanity.
With this type of re-calibration, our mission at ACW Analytics is to use AI to help human decision makers, rather than replace them. We are using machine learning to build weather and climate impact models that help infrastructure managers allocate their resources efficiently. While our models do not perfectly match the ground truth, they are much more accurate and precise than simple heuristics, and can help infrastructure managers save millions of dollars through more efficient capital allocation.
Reacting to the Unpredictable
January 5th 2021
by Peter Watson, CTO
There were quite a few notable weather events in the United States over the course of 2020. One that stood out was a derecho that developed on the 10th of August in Iowa. After forming and rapidly intensifying, it headed east, sweeping across much of the Midwest and causing widespread damage. In total, the storm caused about $7 billion in damage and featured wind gusts in excess of 100 mph, making it likely the most damaging thunderstorm event in US history. For more details see: https://www.weather.gov/dvn/summary_081020
Derechos and thunderstorms inflict a huge amount of damage but have traditionally been very difficult to predict and prepare for. How, when, and where the convective energy that powers these events is released depends on many factors that are difficult for weather forecasters to pin down with confidence and precision. In the best-case scenario, the National Weather Service’s Storm Prediction Center is able to issue the appropriate watches and warnings several hours before a storm arrives, but even that leaves emergency managers very little time to prepare.
Additionally, because these storms are so sudden, the confusion and uncertainty that precede them linger well after they are over. Emergency managers at municipalities and utility companies can be unsure about the exact locations and levels of damage many days after an event has passed. And because such events can occur up to 15 times a year, they can adversely impact utility reliability metrics like CAIDI, SAIDI, and SAIFI.
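For readers unfamiliar with these metrics, they follow directly from a utility’s outage records. Here is a minimal sketch of the standard definitions (per IEEE 1366), computed for a hypothetical service territory; the function name and the example numbers are illustrative, not real data:

```python
def reliability_indices(outages, customers_served):
    """Compute SAIFI, SAIDI, CAIDI from a list of outage records.

    outages: list of (customers_affected, duration_minutes) tuples.
    """
    total_interruptions = sum(c for c, _ in outages)
    customer_minutes = sum(c * d for c, d in outages)
    saifi = total_interruptions / customers_served  # interruptions per customer served
    saidi = customer_minutes / customers_served     # outage minutes per customer served
    caidi = customer_minutes / total_interruptions  # average restoration time per interruption
    return saifi, saidi, caidi

# Example: two storm outages in a 10,000-customer territory
saifi, saidi, caidi = reliability_indices([(1200, 90), (300, 240)], 10_000)
# SAIFI = 0.15, SAIDI = 18.0 minutes, CAIDI = 120.0 minutes
```

A single 15-storm season can therefore move these indices substantially, which is why sudden convective events weigh so heavily on reliability reporting.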
Given the difficulty in forecasting these events, the focus must shift to interpreting them as soon as they have occurred. There are many real-time data sources that can be used to reconstruct events and estimate their impacts. Due to the highly non-linear nature of the interaction between weather and infrastructure assets, machine learning is becoming an increasingly powerful tool for modeling such events after the fact. Insights gained from these models can help emergency managers react quickly and decisively.
Decision support tools based on radar and other real-time weather observations are now a reality. They can sharply reduce post-storm uncertainty and give emergency responders the situational awareness they need to react to sudden storms like the derecho of August 10, 2020.
Using AI-based Impact Models to Drive Infrastructure Resilience and Adaptation
December 17th 2020
by Peter Watson, CTO
I recently saw this figure in a report (linked here) developed by the National Infrastructure Advisory Council, and thought that it captured the different phases of incident response very nicely. Preparing well before an event, reacting quickly when it happens, and restoring efficiently after the event, all contribute to infrastructure resilience. And after the dust settles, it is equally important to perform retrospectives, learn lessons, and make adaptive changes so that you will be more resilient in the future.
Implementing a robust system of preparation, reaction, restoration, and adaptation is easier said than done. In reality, these activities are hampered by uncertainty, with lack of information causing problems each step along the way. Forecasts can be inaccurate, so preparations can be off. Situational awareness can be lacking, so reactions can be tenuous. Post-storm information can be sparse, so restorations can be slow and confused. And long after such events, infrastructure systems can seem so large and complex that it is very difficult to know which interventions or adaptations would really make a difference during the next storm. Even when such adaptations are made, it can be hard to know whether they actually improved anything, and if so, by how much.
All of these difficulties can be addressed with high-quality Impact Modeling. Impact forecasts can inform preparations and improve their accuracy. Impact models forced with real-time observations of hazards can generate situational intelligence during and immediately after events - this can inform operational reactions and speed up the recovery process. After the fact, counterfactual models can be leveraged to evaluate the effectiveness of different adaptations which can in turn be used to prioritize infrastructure investments.
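To make the counterfactual idea concrete, here is a hypothetical sketch: fit an impact model to historical events, then re-run a past storm with an adaptation applied and compare the predicted impacts. The data are synthetic, and a linear fit stands in for the non-linear ML models used in practice:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic history: features = [gust speed (mph), tree canopy fraction]
X = rng.uniform([40, 0.1], [110, 0.9], size=(500, 2))
y = 0.05 * X[:, 0] * X[:, 1] + rng.normal(0, 0.2, 500)  # outages per mile

# Fit a simple impact model via least squares on a gust-canopy interaction term
A = np.column_stack([X[:, 0] * X[:, 1], np.ones(len(X))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict(gust, canopy):
    return coef[0] * gust * canopy + coef[1]

# Counterfactual: same 95 mph storm, but vegetation management
# has reduced canopy fraction from 0.8 to 0.5
baseline = predict(95.0, 0.8)
adapted = predict(95.0, 0.5)
print(f"predicted outage rate avoided: {baseline - adapted:.2f} per mile")
```

The difference between the two model runs is an estimate of the adaptation’s benefit for that storm, which is exactly the kind of number needed to prioritize infrastructure investments.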
Impact models can be an engine for resilience and adaptation. The broad application of this technology could allow us to create a future where humanity not only survives but thrives under climate change.
Robust Impact Modeling
December 1st 2020
by Peter Watson, CTO
Hello everyone. Welcome to the ACW Analytics blog! We’ll be posting here about how we’re applying machine learning and other cutting-edge analytical techniques to understand and predict the impacts of natural hazards and other related topics.
Using models to estimate the damage or impacts of natural hazards isn’t new, but now there’s a great opportunity to use modern data science to improve upon the status quo and create robust modeling frameworks that can help us create a resilient future. Established approaches to modeling impacts are often relatively simple, focusing on several main contributors to damage, e.g., the max wind speed of a hurricane, or the magnitude of an earthquake on the Richter scale. But there’s no need for that simplicity, and it can limit the effectiveness of an empirical model if the information considered is not comprehensive. For example, a hurricane impact model that ignored precipitation would never be able to estimate the impacts of wet and slow storms like Hurricane Harvey, and an earthquake impact model that ignored soil characteristics would never be able to quantify the effects of liquefaction.
A robust impact model should contain comprehensive information about all of the pertinent factors, including:
The Hazard: the thing that causes the impacts (hurricane, tornado, earthquake, etc)
The Target: things that could be impacted (power lines, cell towers, homes, etc)
The Risk Factors: things that could contribute to the risk of impacts (trees, soil types, service history, etc)
Observed Impacts: what happened as a result of the combination of Hazard, Target, and Risk Factors
If you can assemble a database of all of these factors from historical events, in a way that also captures their intersection in space and time, that database can serve as the foundation for a robust model that predicts the impacts of future hazards.
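One way to picture a single row of such a database is as a record joining the four ingredients at a point in space and time. The field names below are hypothetical, chosen just to illustrate the structure:

```python
from dataclasses import dataclass

@dataclass
class ImpactRecord:
    # Hazard: what struck this location at this time
    event_id: str
    timestamp: str            # ISO 8601
    peak_gust_mph: float
    rainfall_in: float
    # Target: the asset at risk
    asset_type: str           # e.g. "overhead_line", "cell_tower"
    lat: float
    lon: float
    # Risk factors: local context that modulates vulnerability
    tree_canopy_frac: float
    soil_type: str
    asset_age_years: float
    # Observed impacts: the labels a model learns to predict
    outage_occurred: bool
    restore_minutes: float

row = ImpactRecord("derecho_2020-08-10", "2020-08-10T16:30:00Z",
                   112.0, 0.4, "overhead_line", 41.97, -91.66,
                   0.7, "loam", 32.0, True, 540.0)
```

Millions of such rows, drawn from many historical events, become the training set for the models described next.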
It should be noted that this data-rich approach to impact modeling has only recently become technically feasible. Once you start considering all potentially relevant aspects of a natural hazard, the infrastructure, the soils, the surrounding vegetation, and so on, the data quickly become very large and complicated. But with recent developments in data science and machine learning, creating robust non-linear, non-parametric models that consider a very large number of variables is now practical.
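As a toy illustration of what "non-parametric" means here, the sketch below uses k-nearest-neighbors regression, which predicts a new event's impact by averaging the observed impacts of the most similar historical events. The data are synthetic, and k-NN stands in for the tree-ensemble or neural models typically used at scale:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic feature table: each row is one historical (hazard, asset, risk) record
X = rng.normal(size=(1000, 3))
# Observed impacts with a deliberately non-linear relationship to the features
y = np.sin(X[:, 0]) + X[:, 1] * X[:, 2] + rng.normal(0, 0.1, 1000)

def knn_predict(X_train, y_train, x, k=15):
    """Average the observed impacts of the k most similar past events."""
    d = np.linalg.norm(X_train - x, axis=1)  # distance to every historical record
    nearest = np.argsort(d)[:k]              # indices of the k closest events
    return y_train[nearest].mean()

x_new = rng.normal(size=3)                   # a new event to score
pred = knn_predict(X, y, x_new)
print(f"predicted impact: {pred:.2f}")
```

No functional form is assumed anywhere: the model's "shape" comes entirely from the data, which is what lets such approaches capture interactions, like wind and vegetation, that simple wind-speed heuristics miss.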
At ACW Analytics, we’re leveraging these modern techniques to create robust, high-dimensional impact models that can capture the complex world of natural hazards and serve as the foundation of a more resilient future.