Session 4: Building Damage and Recovery Analyses with Local Data#


  • Gain an in-depth understanding of the damage analysis mechanism, fragility curves, and mapping by creating and using local objects.

  • Conduct an analysis similar to that in session 3, but with the advantage of utilizing local objects. Some of the benefits include:

    1. Flexibility: Local datasets allow for easy experimentation with different fragilities, hazards, and datasets during analysis development. This is ideal for rapid prototyping, quick data swapping, and parameter testing.

    2. Custom Preprocessing: You have full control over data preprocessing to suit your analysis needs.

    3. Data Privacy: Sensitive data can remain on your local machine instead of being uploaded to incore-services.

Local objects provide control, flexibility, and security in data analysis, especially during development and testing phases.


1. Introduction to Local Objects#

Hazard In pyIncore, any analysis can utilize either a Hazard Object or a hazard identifier/hazard type from incore-services as input. Currently, we support a range of hazards, including earthquakes, floods, hurricanes, tornadoes, and tsunamis. This tutorial provides an example of how to create a pyIncore Hazard Object for a tornado using local shapefiles.

See the pyIncore documentation for more examples of creating and using a local hazard.

Dataset For any analysis in pyIncore, the default input is a Dataset Object. This tutorial introduces users to the fundamental concept of creating and using Dataset Objects, which involves loading data from local files. We also demonstrate essential preprocessing, postprocessing, and visualization steps.

See the pyIncore documentation for more examples of creating and using a local dataset.

DFR3 Curves and DFR3 Mapping DFR3 stands for Damage, Functionality, Repair, Restoration, Recovery (DFR3). For example, damage curves (aka fragility curves) are commonly used in disaster models to define the probability of exceeding a given damage state as a function of hazard intensity. DFR3 mapping sets establish a connection between an item in a given inventory and the corresponding DFR3 curves; for example, a particular set of fragility curves should be applied to a building based on its specific characteristics. This hands-on tutorial demonstrates how to create DFR3 and Mapping Objects for fragility and repair curves, along with guidance on using them locally within pyIncore.

See the pyIncore documentation for more examples of creating and using local DFR3 Objects.

2. Creating Tornado Hazard from Shapefile#

from pyincore import Tornado

2.1. Step 1: Describe the tornado#

tornado_metadata = {
    "name": "Joplin Hindcast Tornado",
    "description": "Joplin tornado hazard",
    "tornadoType": "dataset",
    "threshold": None,
    "thresholdUnit": "mph",
    "hazardDatasets": [
        {
            "demandType": "wind",
            "demandUnits": "mph",
            "threshold": None
        }
    ]
}

# create the tornado object
tornado = Tornado(tornado_metadata)

2.2. Step 2: Attach hazard dataset from local file#

tornado.hazardDatasets[0].from_file("data/tornado/joplin_path_wgs84.shp", data_type="incore:tornadoWindfield")

2.3. Step 3: Visualize the created tornado#

from pyincore_viz.geoutil import GeoUtil as viz
import geopandas as gpd
# Read tornado shapefile into a geopandas dataframe;
# see Section 3 below for an explanation of GeoPandas.
tornado_dataset = tornado.hazardDatasets[0].dataset
tornado_gdf = tornado_dataset.get_dataframe_from_shapefile()

# pyincore-viz method rendering a static map with colors and labels
viz.plot_gdf_map(tornado_gdf, 'ef_rating', basemap=True)

# interactive map leveraging geopandas dataframe explore
tornado_gdf.explore(
    column="ef_rating",  # make choropleth based on the "ef_rating" column
    tooltip="ef_rating",  # show the "ef_rating" value in tooltip (on hover)
    popup=True,  # show all values in popup (on click)
    cmap="Set1"  # use the "Set1" matplotlib colormap
)

3. Creating Building Dataset from given CSV file#

There are many ways to create a building inventory dataset, for example: 1) using GIS software (e.g. ArcGIS, QGIS, etc.), or 2) using Python code. In this exercise, we will create a building inventory dataset as an ESRI Shapefile from a CSV file using the GeoPandas library.

GeoPandas is an open source project to make working with geospatial data in python easier. GeoPandas extends the datatypes used by pandas to allow spatial operations on geometric types. For more information, please follow GeoPandas’s user guide

3.1 Building Inventory Explained#

The schema (columns) of the building inventory has gone through several iterations (v5, v6, v7); below is a list of supported column names in version 6. A fully populated building inventory, with no additional columns, holds all the data needed to perform every analysis compatible with that version of the building inventory. Here are some of the definitions. For the full schema please visit buildingInventoryVer6.

| column name | description |
| --- | --- |
| guid | added by IN-CORE |
| archetype | building structure archetype |
| occ_type | Broad HAZUS Occupancy Category (e.g. RES3 - multi-family residential) |
| appr_bldg | Appraised value for the building |


If you would like to find the schemas of other dataset types, you can explore our Dataset Type Viewer.

import pandas as pd
import geopandas as gpd

3.2 Create Joplin Commercial Building Dataset from tabular data (csv)#

3.2.1. Step 1: Convert CSV file to ESRI Shapefile#

# Load to Pandas Dataframe
df = pd.read_csv('data/joplin_commercial_bldg_v6_sample.csv')
# Turn Pandas Dataframe into Geopandas dataframe
gdf = gpd.GeoDataFrame(df, crs='epsg:4326', geometry=gpd.points_from_xy(df.X, df.Y))
# Save the geodataframe as an ESRI Shapefile
gdf.to_file("joplin_commercial_bldg_v6_sample.shp")

3.2.2. Step 2: Add GUID#

Each built-in infrastructure item needs a GUID. GUID is an acronym that stands for Globally Unique Identifier; GUIDs are also referred to as UUIDs, or Universally Unique Identifiers. Technically they are 128-bit reference numbers used in computing which are highly unlikely to repeat when generated.
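For illustration, Python's standard `uuid` module generates the kind of random (version 4) GUID typically attached to each inventory row; this is only a sketch of the concept, not what `GeoUtil.add_guid` literally does internally.

```python
import uuid

# generate a random (version 4) GUID for each of three hypothetical inventory rows
guids = [str(uuid.uuid4()) for _ in range(3)]
for g in guids:
    print(g)
```

Each printed value is 36 characters: 32 hex digits in 8-4-4-4-12 groups.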

# pyincore has utility methods help user easily add GUID
from pyincore import GeoUtil

GeoUtil.add_guid("joplin_commercial_bldg_v6_sample.shp", "joplin_commercial_bldg_v6_sample_w_guid.shp")

3.2.3. Step 3: Create a local dataset for IN-CORE#

The following code creates a local Dataset object from the ESRI Shapefile for the building inventory.

# use the method "from_file" in Dataset class to construct dataset object from local file
from pyincore import Dataset

buildings = Dataset.from_file("joplin_commercial_bldg_v6_sample_w_guid.shp", data_type="ergo:buildingInventoryVer6")

3.2.4. Step 4: Visualize created building dataset#

For your reference, details about the Joplin building archetypes are given in the table below.


| Archetype | Building Type |
| --- | --- |
| 1, 5 | Residential wood building |
| 6 | Business and retail building |
| 7 | Light industrial building |
| 8 | Heavy industrial building |
| 9 | Elementary/middle school |
| 10 | High school |
| 11 | Fire/police station |
| 12 | Hospital |
| 13 | Community center/church |
| 14 | Government building |
| 15 | Large big-box |
| 16 | Small big-box |
| 17 | Mobile home |
| 18 | Shopping center |
| 19 | Office building |

viz.plot_map(buildings, column="archetype", category=True)

# interactive map leveraging geopandas explore
bldg_gdf = buildings.get_dataframe_from_shapefile()
bldg_gdf.explore(
    marker_kwds={"radius": 10},
    column="archetype",  # make choropleth based on the "archetype" column
    tooltip="archetype",  # show the "archetype" value in tooltip (on hover)
    popup=True,  # show all values in popup (on click)
    cmap="Set1",  # use the "Set1" matplotlib colormap
)

3.2.5. Overlay tornado on building#

tornado_dataset = tornado.hazardDatasets[0].dataset

# fill in appropriate metadata
buildings.metadata = {"title": "Joplin Commercial Building Sampled"}
tornado_dataset.metadata = {"title": "Joplin Hindcast Tornado"}

# plot interactive map
viz.get_gdf_map([buildings, tornado_dataset])

4. Creating local DFR3 objects#

4.1. Create Fragility Objects#

In this section we are going to explore the creation of fragility curve sets used by pyIncore in building damage analyses. We provide examples of getting the curves into your project as well as basic use of pyIncore's functions to print and visualize various attributes and variables.

from pyincore import FragilityService, FragilityCurveSet, RepairService, RepairCurveSet, Mapping, MappingSet
import copy
from pprint import pprint
fragility_curve_template = {
    "description": "",
    "rules": [],
    "returnType": {
        "type": "",
        "unit": "",
        "description": ""
    },
    "curveParameters": None
}

curve_rule_template = {
    "condition": [],
    "expression": ""
}

curve_parameter_template = {
    "name": "",
    "unit": "",
    "description": "",
    "fullName": "",
    "expression": None
}

fragility_definition_template = {
    "description": "",
    "authors": [],
    "paperReference": None,
    "resultUnit": None,
    "resultType": "Limit State",
    "hazardType": "",
    "inventoryType": "",
    "curveParameters": [],
    "demandTypes": [],
    "demandUnits": [],
    "fragilityCurves": []
}

4.1.1. Step 1: Define Curve Parameters#

For every parameter mentioned in the equation, it is essential to provide a precise definition. For example:

scipy.stats.norm.cdf((math.log(wind) - (4.075))/(0.21))

Note that the equation above does not define the term "wind"; it must be declared as a curve parameter.
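To make the expression computable, `wind` must be bound to a value. A quick manual check, using the expression above (60 m/s is an arbitrary illustrative value, not one from the hazard service):

```python
import math
import scipy.stats

# "wind" must be bound before the expression can be evaluated;
# 60 m/s is a made-up demand value for illustration only
wind = 60.0
p_exceed = scipy.stats.norm.cdf((math.log(wind) - (4.075)) / (0.21))
print(p_exceed)  # probability of exceeding the limit state at this wind speed
```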

Table 1. Default Curve Parameters

| Parameter name | Unit | Description |
| --- | --- | --- |
| wind | mps | Wind value from hazard service |

curve_parameter = copy.deepcopy(curve_parameter_template)
curve_parameter["name"] = "wind"
curve_parameter["unit"] = "mps"
curve_parameter["description"] = "wind value from hazard service"
curve_parameter["fullName"] = "wind"

4.1.2. Step 2: Define each fragility curve#

For the given fragility curve set:

Table 2. Curve Information

| Limit State | Condition | Expression | Description |
| --- | --- | --- | --- |
| LS_0 | wind > 0 | scipy.stats.norm.cdf((math.log(wind) - (3.625))/(0.11)) | legacy - StandardFragilityCurve - Moderate |
| LS_1 | wind > 0 | scipy.stats.norm.cdf((math.log(wind) - (3.895))/(0.11)) | legacy - StandardFragilityCurve - Extensive |
| LS_2 | wind > 0 | scipy.stats.norm.cdf((math.log(wind) - (4.075))/(0.21)) | legacy - StandardFragilityCurve - Complete |

Caution: It is crucial to verify the validity and safety of your equation. Our platform supports Scipy, Numpy, and Math libraries for this purpose.
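As a rough illustration of such a check (not how the platform itself validates expressions), Python's `ast` module can list the root names an expression references so you can compare them against a whitelist before trusting it:

```python
import ast

# names we expect fragility expressions to reference (illustrative whitelist)
ALLOWED_ROOTS = {"scipy", "numpy", "math", "wind"}

def expression_root_names(expr):
    """Parse an expression and collect every root name it references."""
    tree = ast.parse(expr, mode="eval")
    return {node.id for node in ast.walk(tree) if isinstance(node, ast.Name)}

expr = "scipy.stats.norm.cdf((math.log(wind) - (4.075))/(0.21))"
names = expression_root_names(expr)
print(names)  # root names referenced by the expression
print(names <= ALLOWED_ROOTS)  # True -> only whitelisted names are used
```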

# instantiate a copy from template
LS_0_curve_rule = copy.deepcopy(curve_rule_template)

# add condition
LS_0_curve_rule["condition"] = ["wind > 0"]

# add equation
LS_0_curve_rule["expression"] = "scipy.stats.norm.cdf((math.log(wind) - (3.625))/(0.11))"

# add metadata
LS_0_curve = copy.deepcopy(fragility_curve_template)
LS_0_curve["description"] = "legacy - StandardFragilityCurve - Moderate"
LS_0_curve["returnType"]["type"] = "Limit State"
LS_0_curve["returnType"]["unit"] = ""
LS_0_curve["returnType"]["description"] = "LS_0"

# append the aforementioned rule
LS_0_curve["rules"].append(LS_0_curve_rule)

# Similarly construct LS_1
LS_1_curve_rule = copy.deepcopy(curve_rule_template)
LS_1_curve_rule["condition"] = ["wind > 0"]
LS_1_curve_rule["expression"] = "scipy.stats.norm.cdf((math.log(wind) - (3.895))/(0.11))"

LS_1_curve = copy.deepcopy(fragility_curve_template)
LS_1_curve["description"] = "legacy - StandardFragilityCurve - Extensive"
LS_1_curve["returnType"]["type"] = "Limit State"
LS_1_curve["returnType"]["unit"] = ""
LS_1_curve["returnType"]["description"] = "LS_1"

# append the aforementioned rule
LS_1_curve["rules"].append(LS_1_curve_rule)

# QUIZ 1: Define limit state 2 curve
# Similarly construct LS_2
LS_2_curve_rule = copy.deepcopy(curve_rule_template)
LS_2_curve_rule["condition"] = ["wind > 0"]
LS_2_curve_rule["expression"] = "scipy.stats.norm.cdf((math.log(wind) - (4.075))/(0.21))"

LS_2_curve = copy.deepcopy(fragility_curve_template)
LS_2_curve["description"] = "legacy - StandardFragilityCurve - Complete"
LS_2_curve["returnType"]["type"] = "Limit State"
LS_2_curve["returnType"]["unit"] = ""
LS_2_curve["returnType"]["description"] = "LS_2"

# append the aforementioned rule
LS_2_curve["rules"].append(LS_2_curve_rule)

4.1.3. Step 3: Piece together the complete fragility curve set#

fragility_definition_archetype_6 = copy.deepcopy(fragility_definition_template)
fragility_definition_archetype_6["description"] = "Business and retail building (strip mall)"
fragility_definition_archetype_6["authors"].append("M. Memari N. Attary H. Masoomi J.W. van de Lindt  S.F. Pilkington M.R. Ameri & H. Mahmoud")
fragility_definition_archetype_6["hazardType"] = "tornado"
fragility_definition_archetype_6["inventoryType"] = "building"
fragility_definition_archetype_6["demandTypes"].append("wind")
fragility_definition_archetype_6["demandUnits"].append("mps")

# fill in the curves
fragility_definition_archetype_6["fragilityCurves"].append(LS_0_curve)
fragility_definition_archetype_6["fragilityCurves"].append(LS_1_curve)
fragility_definition_archetype_6["fragilityCurves"].append(LS_2_curve)

# fill in the curve parameters
fragility_definition_archetype_6["curveParameters"].append(curve_parameter)

4.1.4. Step 4: Create fragility curve set object#

fragility_archetype_6 = FragilityCurveSet(fragility_definition_archetype_6)

4.1.5. Alternative: Construct fragility curves from string#

# fragility definition as string
fragility_definition_archetype_7 = """{
    "description": "Light industrial",
    "authors": [
        "M. Memari N. Attary H. Masoomi J.W. van de Lindt  S.F. Pilkington M.R. Ameri & H. Mahmoud"
    ],
    "paperReference": null,
    "resultUnit": null,
    "resultType": "Limit State",
    "hazardType": "tornado",
    "inventoryType": "building",
    "creator": "incore",
    "owner": "incore",
    "curveParameters": [
        {
            "name": "wind",
            "unit": "mps",
            "description": "wind value from hazard service",
            "fullName": "wind",
            "expression": null
        }
    ],
    "demandTypes": ["wind"],
    "demandUnits": ["mps"],
    "fragilityCurves": [
        {
            "description": "legacy - StandardFragilityCurve - Moderate",
            "rules": [
                {
                    "condition": ["wind > 0"],
                    "expression": "scipy.stats.norm.cdf((math.log(wind) - (3.695))/(0.1))"
                }
            ],
            "returnType": {
                "type": "Limit State",
                "unit": "",
                "description": "LS_0"
            },
            "curveParameters": null
        },
        {
            "description": "legacy - StandardFragilityCurve - Extensive",
            "rules": [
                {
                    "condition": ["wind > 0"],
                    "expression": "scipy.stats.norm.cdf((math.log(wind) - (3.785))/(0.1))"
                }
            ],
            "returnType": {
                "type": "Limit State",
                "unit": "",
                "description": "LS_1"
            },
            "curveParameters": null
        },
        {
            "description": "legacy - StandardFragilityCurve - Complete",
            "rules": [
                {
                    "condition": ["wind > 0"],
                    "expression": "scipy.stats.norm.cdf((math.log(wind) - (3.865))/(0.1))"
                }
            ],
            "returnType": {
                "type": "Limit State",
                "unit": "",
                "description": "LS_2"
            },
            "curveParameters": null
        }
    ]
}"""
# create fragility curve set object
fragility_archetype_7 = FragilityCurveSet.from_json_str(fragility_definition_archetype_7)

4.1.6. Visualize fragility curves#

Use the pyincore-viz library to visualize a fragility curve set. Details can be found in the pyincore-viz documentation.

from pyincore_viz.plotutil import PlotUtil as plot
plt = plot.get_fragility_plot(fragility_archetype_6, start=0, end=100)

4.1.7. Calculate limit state from local fragility#

hazard_value = {"wind":40}
limit_states = fragility_archetype_6.calculate_limit_state(hazard_value)

4.1.8. Derive damage state from limit state#
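A sketch of the standard bookkeeping behind this step (not a specific pyIncore API): with three limit states, the four damage-state probabilities are successive differences of the exceedance probabilities.

```python
def limit_states_to_damage_states(ls):
    """ls: {"LS_0": p0, "LS_1": p1, "LS_2": p2} with p0 >= p1 >= p2."""
    p0, p1, p2 = ls["LS_0"], ls["LS_1"], ls["LS_2"]
    return {
        "DS_0": 1 - p0,   # no/insignificant damage
        "DS_1": p0 - p1,  # moderate
        "DS_2": p1 - p2,  # extensive
        "DS_3": p2,       # complete
    }

# toy exceedance probabilities for illustration
ds = limit_states_to_damage_states({"LS_0": 0.9, "LS_1": 0.6, "LS_2": 0.2})
print(ds)
```

In the notebook you could feed the `limit_states` computed in the previous step into this function; the four values always sum to 1.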


4.2. Create Fragility Mapping#

DFR3 mapping sets serve the purpose of establishing a connection between an item within a given inventory and corresponding DFR3 curves. For example, a particular set of fragility curves should be applied to a building based on its specific characteristics.


4.2.1. Step 1: Establish mapping rules#

fragility_entry_archetype_6 = {"Non-Retrofit Fragility ID Code": fragility_archetype_6}
fragility_rules_archetype_6 = {"OR":["int archetype EQUALS 6"]}
fragility_mapping_archetype_6 = Mapping(fragility_entry_archetype_6, fragility_rules_archetype_6)
# Quiz: write rules for archetype 7
fragility_entry_archetype_7 = {"Non-Retrofit Fragility ID Code": fragility_archetype_7}
fragility_rules_archetype_7 = {"OR":["int archetype EQUALS 7"]}
fragility_mapping_archetype_7 = Mapping(fragility_entry_archetype_7, fragility_rules_archetype_7)

4.2.2 Step 2: Add metadata#

fragility_mapping_set_definition = {
    "id": "N/A",
    "name": "local joplin tornado fragility mapping object",
    "hazardType": "tornado",
    "inventoryType": "building",
    "mappings": [
        fragility_mapping_archetype_6,
        fragility_mapping_archetype_7,
    ],
    "mappingType": "fragility"
}

4.2.3. Step 3: Create MappingSet Object#

fragility_mapping_set = MappingSet(fragility_mapping_set_definition)

5. Building Damage Analysis#

This analysis computes building damage based on a particular hazard. In this section, you will perform a building damage analysis for a tornado.

The process for computing structural damage is similar to that for other parts of the built environment. First, a fragility curve set is obtained based on the hazard type and the attributes of the building. Then the hazard intensity at the location of the building is retrieved. Using this information, the probability of exceeding each limit state is computed, along with the damage-state probabilities. For an earthquake hazard, soil information can be used to modify the damage probabilities to account for liquefaction.
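The steps above can be sketched as a self-contained toy loop, reusing the lognormal parameters from Table 2 for archetype 6 and a made-up uniform wind field; the real analysis resolves the mapping, demand units, and hazard lookup through the objects shown below.

```python
import math

def lognormal_exceedance(wind, median_ln, beta):
    # normal CDF via erf; same functional form as the scipy expressions in Section 4
    x = (math.log(wind) - median_ln) / beta
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

# toy fragility "mapping": archetype -> lognormal (median_ln, beta) per limit state
MAPPING = {6: [(3.625, 0.11), (3.895, 0.11), (4.075, 0.21)]}

def building_damage_sketch(buildings, wind_at):
    """wind_at: location -> wind speed; a stand-in for the hazard service."""
    results = []
    for bldg in buildings:
        params = MAPPING[bldg["archetype"]]        # 1. match a fragility curve set
        wind = wind_at(bldg["location"])           # 2. hazard intensity at the site
        ls = {f"LS_{i}": lognormal_exceedance(wind, m, b)  # 3. limit states
              for i, (m, b) in enumerate(params)}
        results.append({"guid": bldg["guid"], **ls})
    return results

demo = building_damage_sketch(
    [{"guid": "b-1", "archetype": 6, "location": (0.0, 0.0)}],
    wind_at=lambda loc: 60.0,  # made-up uniform wind field, m/s
)
print(demo)
```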

The outputs of this analysis are a CSV file with damage probabilities and a JSON file with information about the hazard and fragilities used. Detailed information about the analysis can be found in our manual: Building damage analysis.

from pyincore.analyses.buildingdamage import BuildingDamage
from pyincore import IncoreClient
client = IncoreClient(offline=True)
# Create building damage
bldg_dmg = BuildingDamage(client)

# Load input dataset
bldg_dmg.set_input_dataset("buildings", buildings)

# Load fragility mapping
fragility_service = FragilityService(client)
bldg_dmg.set_input_dataset("dfr3_mapping_set", fragility_mapping_set)

# Set hazard
bldg_dmg.set_input_hazard("hazard", tornado)

# Set analysis parameters
bldg_dmg.set_parameter("result_name", "joplin_tornado_commercial_bldg_dmg")
bldg_dmg.set_parameter("num_cpu", 2)
# Run building damage analysis
bldg_dmg.run_analysis()

5.1. Explore building damage results#

# display building damage results
bldg_dmg_df = bldg_dmg.get_output_dataset("ds_result").get_dataframe_from_csv()
bldg_dmg_df.head()

5.1.1. Joining dataset#

Data preparation and post-processing are common procedures. Before using pyIncore, users often need to reshape their own datasets to comply with pyIncore's input dataset formats.

After acquiring outputs from pyIncore analyses, users often need to aggregate the data to gain statistical insights. The steps below give a few examples of how to join datasets and generate some basic visualizations.

# getting geodataframe of building dataset 
bldg_gdf = buildings.get_dataframe_from_shapefile()

# merge/join two dataframe
# you can choose columns to be merged
bldg_gdf_merged = bldg_gdf[['guid', 'archetype', 'geometry']].merge(bldg_dmg_df, on='guid')

5.1.2. Show statistical summary on a column#
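pandas' `describe` gives a quick statistical summary of a column. In the notebook you would call `bldg_gdf_merged['LS_0'].describe()` directly; the sketch below uses a toy dataframe standing in for `bldg_gdf_merged` so it is self-contained.

```python
import pandas as pd

# toy stand-in for bldg_gdf_merged from the joining step above
df = pd.DataFrame({"LS_0": [0.9, 0.7, 0.95, 0.2], "archetype": [6, 7, 6, 7]})
summary = df["LS_0"].describe()
print(summary)  # count, mean, std, min, quartiles, max
```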


5.1.3. Show table sorted by LS_0 (descending) and archetype (ascending)#

bldg_gdf_merged.sort_values(['LS_0', 'archetype'], ascending=[0,1]).head(15)

5.1.4. Show table group by archetype#

grouped_bldg_dmg = bldg_gdf_merged.groupby(by=['archetype'], as_index=True).agg({'DS_0': 'mean', 'DS_1':'mean', 'DS_2': 'mean', 'DS_3': 'mean', 'guid': 'count'})
grouped_bldg_dmg.rename(columns={'guid': 'total_count'}, inplace=True)

5.2. Visualizing building damage#

# Hazard Exposure
count = bldg_gdf_merged['haz_expose'].value_counts()
plt = count.plot(kind='bar')
# Plot Damage state by archetype
ax = grouped_bldg_dmg[["DS_0", "DS_1", "DS_2", "DS_3"]].plot.barh(stacked=True)
ax.set_title("Stacked Bar Chart of Damage State Grouped by Archetype Type", fontsize=12)
ax.set_xlabel("complete damage value", fontsize=12)
ax.legend(loc='center left', bbox_to_anchor=(1.0, 0.5)) #here is the magic
# Plot a map with GeoDataframe
viz.plot_gdf_map(bldg_gdf_merged, "DS_3", basemap=True)
# interactive map leveraging geopandas explore
bldg_gdf_merged.explore(
    marker_kwds={"radius": 10},
    column="DS_3",  # make choropleth based on the "DS_3" column
    tooltip="DS_3",  # show the "DS_3" value in tooltip (on hover)
    popup=True,  # show all values in popup (on click)
    cmap="Set1",  # use the "Set1" matplotlib colormap
)

6. Monte Carlo Simulation#

Researchers can use Monte Carlo Simulation to estimate the probability of each building being in a particular damage state. This example uses 500 iterations to determine the failure probability of buildings reaching damage state 2, damage state 3, and damage state 4. Users can run 10000 samples or even more for a more accurate Monte Carlo Simulation to determine the building failure probabilities. Note that this takes several minutes and we are working on developing a more efficient algorithm.

For more details, please refer to Monte Carlo Simulation Analysis Documentation
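The sampling idea behind the analysis can be sketched independently of pyIncore: for each iteration, draw one damage state from a building's damage-interval probabilities and count how often it lands in a failure state (toy probabilities below, not real analysis output).

```python
import random

def mc_failure_probability(ds_probs, failure_states, num_samples=500, seed=42):
    """ds_probs: damage-interval probabilities summing to 1 for one building."""
    rng = random.Random(seed)
    states = list(ds_probs)
    weights = [ds_probs[s] for s in states]
    failures = 0
    for _ in range(num_samples):
        # draw one damage state for this iteration
        state = rng.choices(states, weights=weights)[0]
        if state in failure_states:
            failures += 1
    return failures / num_samples

p_fail = mc_failure_probability(
    {"DS_0": 0.1, "DS_1": 0.3, "DS_2": 0.4, "DS_3": 0.2},  # toy probabilities
    failure_states={"DS_1", "DS_2", "DS_3"},
)
print(p_fail)  # close to 0.9, the total probability mass in the failure states
```

More samples shrink the gap between the estimate and the true probability, which is why larger runs (e.g. 10000 samples) are more accurate but slower.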

from pyincore.analyses.montecarlofailureprobability import MonteCarloFailureProbability 
mc_bldg = MonteCarloFailureProbability(client)

# get bldg dmg result from previous step of building damage
bldg_dmg_result = bldg_dmg.get_output_dataset("ds_result")
mc_bldg.set_input_dataset("damage", bldg_dmg_result)

# set input parameters
mc_bldg.set_parameter("num_cpu", 8)
mc_bldg.set_parameter("num_samples", 500)
mc_bldg.set_parameter("damage_interval_keys", ["DS_0", "DS_1", "DS_2", "DS_3"])
mc_bldg.set_parameter("failure_state_keys", ["DS_1", "DS_2", "DS_3"])

mc_bldg.set_parameter("result_name", "joplin_tornado_mc_failure_probability_buildings")
# Run the Monte Carlo Simulation module to obtain the building failure probabilities. The building failure
# probabilities herein only consider the physical damage without the interdependency.
mc_bldg.run_analysis()

# list all the outputs
mc_bldg.get_output_datasets()

6.1. Get buildings non-functional probabilities#

bldg_failure_probability = mc_bldg.get_output_dataset('failure_probability')
bldg_failure_probability_df = bldg_failure_probability.get_dataframe_from_csv()

6.2. Get buildings sample damage states#

bldg_damage_mcs_samples = mc_bldg.get_output_dataset('sample_damage_states')
bldg_damage_mcs_samples_df = bldg_damage_mcs_samples.get_dataframe_from_csv()

6.3. Visualize probability of non-functionality#

# getting geodataframe of building dataset and merge with output
bldg_gdf = buildings.get_dataframe_from_shapefile()
bldg_failure_probability_gdf = bldg_gdf.merge(bldg_failure_probability_df, on="guid")

viz.plot_gdf_map(bldg_failure_probability_gdf, column="failure_probability")

# interactive map leveraging geopandas explore
bldg_failure_probability_gdf.explore(
    marker_kwds={"radius": 10},
    column="failure_probability",  # make choropleth based on the "failure_probability" column
    tooltip="failure_probability",  # show the value in tooltip (on hover)
    popup=True,  # show all values in popup (on click)
    cmap="Set1",  # use the "Set1" matplotlib colormap
)

7. Commercial Building Recovery Analysis#

Commercial building recovery analysis computes the time needed for each commercial building to recover from a given damage state to full restoration. Currently, the supported hazard is tornadoes. The methodology incorporates a multi-layer Monte Carlo simulation approach and determines a two-step recovery time consisting of delay and repair. The delay model is based on a modified REDi framework and calculates delays resulting from impeding factors such as post-disaster inspection, insurance claims, financing, and government permits. The repair model follows the FEMA P-58 approach and is controlled by fragility functions. The output of this analysis is a CSV file with time-stepping recovery probabilities at the building level.

To read more about this analysis, please visit Commercial Building Recovery Documentation
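The delay-plus-repair structure described above can be sketched with made-up lognormal parameters (not the calibrated REDi or FEMA P-58 values): each Monte Carlo sample adds a sampled delay to a sampled repair time, and the time-stepping output is the fraction of samples recovered by each time step.

```python
import random

def recovery_fraction(num_samples, time_steps, seed=0):
    """Fraction of sampled buildings fully recovered by each time step (weeks)."""
    rng = random.Random(seed)
    totals = []
    for _ in range(num_samples):
        delay = rng.lognormvariate(3.0, 0.5)   # impeding-factor delay (made-up params)
        repair = rng.lognormvariate(2.5, 0.6)  # repair time (made-up params)
        totals.append(delay + repair)
    # cumulative fraction of samples whose total time is within each step
    return [sum(t <= step for t in totals) / num_samples for step in time_steps]

curve = recovery_fraction(1000, time_steps=[10, 30, 60, 120])
print(curve)  # non-decreasing recovery fractions over time
```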

from pyincore.analyses.commercialbuildingrecovery.commercialbuildingrecovery import CommercialBuildingRecovery

commercial_recovery = CommercialBuildingRecovery(client)

7.1. Step 1: Prepare repair curve sets and repair curve mappings#

# repair curve set
repair_archetype_6 = RepairCurveSet.from_json_file("data/dfr3/repair_archetype_6.json")
repair_archetype_7 = RepairCurveSet.from_json_file("data/dfr3/repair_archetype_7.json")
# define mapping rules
repair_entry_archetype_6 = {"Repair ID Code": repair_archetype_6}
repair_rules_archetype_6 = {"OR":["int archetype EQUALS 6"]}
repair_mapping_archetype_6 = Mapping(repair_entry_archetype_6, repair_rules_archetype_6)

repair_entry_archetype_7 = {"Repair ID Code": repair_archetype_7}
repair_rules_archetype_7 = {"OR":["int archetype EQUALS 7"]}
repair_mapping_archetype_7 = Mapping(repair_entry_archetype_7, repair_rules_archetype_7)

# construct mapping
repair_mapping_set_definition = {
    "id": "N/A",
    "name": "Building Mean structural repair cost mapping for archetypes 6 & 7",
    "hazardType": "tornado",
    "inventoryType": "building",
    "mappings": [
        repair_mapping_archetype_6,
        repair_mapping_archetype_7,
    ],
    "mappingType": "repair"
}
repair_mapping_set = MappingSet(repair_mapping_set_definition)
# set recovery mapping
commercial_recovery.set_input_dataset('dfr3_mapping_set', repair_mapping_set)

7.2. Step 2: Chain with MCS output#

commercial_recovery.set_input_dataset("sample_damage_states", bldg_damage_mcs_samples)
commercial_recovery.set_input_dataset("mcs_failure", bldg_failure_probability)

7.3. Step 3: Prepare additional input datasets#

# delay factor
delay_factors = Dataset.from_file("data/additional/Dataset1_REDi_business_new.csv", data_type="incore:buildingRecoveryFactors")
commercial_recovery.set_input_dataset("delay_factors", delay_factors)

# building inventory
commercial_recovery.set_input_dataset("buildings", buildings)
# set other parameters
commercial_recovery.set_parameter("result_name", "joplin_tornado_commercial_bldg_recovery")
commercial_recovery.set_parameter("num_samples", 10)
commercial_recovery.set_input_dataset("building_dmg", bldg_dmg_result)

7.4. Step 4: Run commercial recovery analysis#

commercial_recovery.run_analysis()

time_stepping_recovery = commercial_recovery.get_output_dataset("time_stepping_recovery")
time_stepping_recovery_df = time_stepping_recovery.get_dataframe_from_csv()

7.5. Visualize recovery grouped by archetypes#

# merging with building inventory
bldg_gdf = buildings.get_dataframe_from_shapefile()
time_stepping_recovery_df = bldg_gdf[["guid", "archetype"]].merge(time_stepping_recovery_df, on="guid").drop(["guid"], axis=1)
time_stepping_recovery_grouped = time_stepping_recovery_df.groupby("archetype").agg("mean")
df = time_stepping_recovery_grouped.copy(deep=True) 
df = df.T
ax = df.plot(kind='line', figsize=(12, 3))
ax.set_ylabel('Recovery Rate')
ax.set_title('Joplin Commercial Building Recovery Curves by Archetype')