analyses

analyses/bridgedamage

class bridgedamage.bridgedamage.BridgeDamage(incore_client)

Computes bridge structural damage for earthquake, tsunami, tornado, and hurricane hazards.

Parameters:

incore_client (IncoreClient) – Service authentication.

bridge_damage_analysis_bulk_input(bridges, hazard, hazard_type, hazard_dataset_id)

Run analysis for multiple bridges.

Parameters:
  • bridges (list) – Multiple bridges from input inventory set.

  • hazard (obj) – Hazard object.

  • hazard_type (str) – Type of hazard.

  • hazard_dataset_id (str) – ID of hazard.

Returns:

A list of ordered dictionaries with bridge damage values and other data/metadata.

Return type:

list

bridge_damage_concurrent_future(function_name, num_workers, *args)

Utilizes the concurrent.futures module.

Parameters:
  • function_name (function) – The function to be parallelized.

  • num_workers (int) – Maximum number of workers for parallelization.

  • *args – Positional arguments, in order, passed to function_name.

Returns:

A list of ordered dictionaries with bridge damage values and other data/metadata.

Return type:

list
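The bulk-input and concurrent-future pair follows a common fan-out pattern: the inventory is split into chunks, each chunk runs through the bulk function, and the per-chunk result lists are flattened into one list. A minimal sketch of that pattern with illustrative names (threads are used here for simplicity; this is not the pyincore implementation):

```python
from concurrent.futures import ThreadPoolExecutor

def concurrent_future(function_name, num_workers, *args):
    """Fan chunked work out to a pool and flatten the per-chunk result lists."""
    output = []
    with ThreadPoolExecutor(max_workers=num_workers) as executor:
        for chunk_result in executor.map(function_name, *args):
            output.extend(chunk_result)
    return output

def damage_bulk_input(bridges, hazard_type):
    # stand-in for the real bulk analysis: one result dict per bridge
    return [{"guid": b, "hazard": hazard_type} for b in bridges]

chunks = [["b1", "b2"], ["b3"]]
results = concurrent_future(damage_bulk_input, 2, chunks, ["earthquake"] * len(chunks))
```

Here results is a flat list with one dictionary per bridge, matching the Returns description above; executor.map preserves chunk order.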

get_spec()

Get specifications of the bridge damage analysis.

Returns:

A JSON object of specifications of the bridge damage analysis.

Return type:

obj

run()

Executes bridge damage analysis.

class bridgedamage.bridgeutil.BridgeUtil

Utility methods for the bridge damage analysis.

static get_retrofit_code(target_fragility_key)

Get retrofit code by looking up BRIDGE_FRAGILITY_KEYS dictionary.

Parameters:

target_fragility_key (str) – Fragility key describing the type of fragility.

Returns:

A retrofit code.

Return type:

str

static get_retrofit_cost(target_fragility_key)

Calculates retrofit cost estimate of a bridge.

Parameters:

target_fragility_key (str) – Fragility key describing the type of fragility.

Note

This function is not complete yet. A real data example is needed for the following variable: private FeatureDataset bridgeRetrofitCostEstimate.

Returns:

Retrofit cost estimate.

Return type:

float

static get_retrofit_type(target_fragility_key)

Get retrofit type by looking up BRIDGE_FRAGILITY_KEYS dictionary.

Parameters:

target_fragility_key (str) – Fragility key describing the type of fragility.

Returns:

A retrofit type.

Return type:

str

analyses/buildingdamage

class buildingdamage.buildingdamage.BuildingDamage(incore_client)

Building Damage Analysis calculates the probability of building damage based on different hazard types such as earthquake, tsunami, and tornado.

Parameters:

incore_client (IncoreClient) – Service authentication.

building_damage_analysis_bulk_input(buildings, hazards, hazard_types, hazard_dataset_ids)

Run analysis for multiple buildings.

Parameters:
  • buildings (list) – Multiple buildings from input inventory set.

  • hazards (list) – List of hazard objects.

  • hazard_types (list) – List of hazard types, either earthquake, tornado, or tsunami.

  • hazard_dataset_ids (list) – List of ids of the hazard exposures.

Returns:

A list of ordered dictionaries with building damage values and other data/metadata.

Return type:

list

building_damage_concurrent_future(function_name, parallelism, *args)

Utilizes the concurrent.futures module.

Parameters:
  • function_name (function) – The function to be parallelized.

  • parallelism (int) – Number of workers for parallelization.

  • *args – Positional arguments, in order, passed to function_name.

Returns:

A list of ordered dictionaries with building damage values and other data/metadata.

Return type:

list

get_spec()

Get specifications of the building damage analysis.

Returns:

A JSON object of specifications of the building damage analysis.

Return type:

obj

run()

Executes building damage analysis.

class buildingdamage.buildingutil.BuildingUtil

Utility methods for the building damage analysis.

analyses/buildingeconloss

class buildingeconloss.buildingeconloss.BuildingEconLoss(incore_client)

Direct Building Economic Loss analysis calculates the building loss based on building appraisal value, mean damage, and an inflation multiplier from the user's input. We are not implementing any inflation calculation based on consumer price indices at the moment; a user must supply the inflation percentage between a building appraisal year and the year of interest (current, date of hazard, etc.).

Parameters:

incore_client (IncoreClient) – Service authentication.

add_multipliers(dmg_set_df, occ_mult_df)

Add occupancy multipliers to damage dataset.

Parameters:
  • dmg_set_df (pd.DataFrame) – Building inventory dataset with guid and mean damages.

  • occ_mult_df (pd.DataFrame) – Occupancy multiplier set.

Returns:

Merged inventory.

Return type:

pd.DataFrame

get_inflation_mult()

Get inflation multiplier from user’s input.

Returns:

Inflation multiplier.

Return type:

float

get_spec()

Get specifications of the building economic loss analysis.

Returns:

A JSON object of specifications of the building economic loss analysis.

Return type:

obj

run()

Executes building economic loss analysis.
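In spirit, the loss computation multiplies appraisal value, mean damage, and the inflation multiplier. The column names and numbers below are made up for illustration; the real analysis reads these from its input datasets:

```python
import pandas as pd

bldg = pd.DataFrame({
    "guid": ["a", "b"],
    "appr_bldg": [100_000.0, 250_000.0],  # appraisal value (hypothetical column name)
    "meandamage": [0.10, 0.40],           # mean damage ratio from a damage analysis
})

inflation_pct = 10.0                      # user-supplied: appraisal year -> year of interest
inflation_mult = 1.0 + inflation_pct / 100.0

# direct economic loss per building
bldg["loss"] = bldg["appr_bldg"] * bldg["meandamage"] * inflation_mult
```

For building "a" this gives roughly 100,000 × 0.10 × 1.1 ≈ 11,000 in losses.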

analyses/buildingfunctionality

class buildingfunctionality.buildingfunctionality.BuildingFunctionality(incore_client)

The building functionality analysis calculates building functionality probabilities, accounting for two situations: a building is in damage state 2 or greater, or a building is undamaged but electric power is unavailable to it. Whether a building can receive electrical power is assumed to depend on its interdependency with nearby substations and poles: a building is considered able to receive electric power only if both its nearest pole and the substation whose service area contains the building are functional.

Parameters:

incore_client (IncoreClient) – Service authentication.

functionality(building_guid, buildings, substations, poles, interdependency)
Parameters:
  • building_guid (str) – A building defined by its guid.

  • buildings (pd.DataFrame) – A list of buildings.

  • substations (pd.DataFrame) – A list of substations.

  • poles (pd.DataFrame) – A list of poles.

  • interdependency (dict) – An interdependency between buildings and substations and poles.

Returns:

A building guid; a functionality sample string such as “0,0,1,…”; and a probability in [0, 1] of the building being functional.

Return type:

str
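The interdependency rule above (functional only if both the nearest pole and the serving substation are functional) amounts to a sample-wise AND over the two 0/1 sample strings. A small sketch under that assumption (the function name is illustrative, not the analysis's API):

```python
def combine_samples(pole_sample, substation_sample):
    """AND two comma-separated 0/1 functionality sample strings."""
    pairs = zip(pole_sample.split(","), substation_sample.split(","))
    combined = ["1" if p == "1" and s == "1" else "0" for p, s in pairs]
    # probability of being functional = share of samples where both survive
    return ",".join(combined), combined.count("1") / len(combined)

sample, prob = combine_samples("1,1,0,1", "1,0,1,1")
```

Only the samples in which both components are functional survive, so the building's functionality probability can never exceed that of either component.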

get_spec()

Get specifications of the building functionality analysis.

Returns:

A JSON object of specifications of the building functionality analysis.

Return type:

obj

run()

Executes building functionality analysis.

analyses/buildingportfolio

analyses/capitalshocks

class pyincore.analyses.capitalshocks.CapitalShocks(incore_client)
The capital stock shock for an individual building equals the building's functionality probability multiplied by its value; this gives the capital stock loss in the immediate aftermath of a natural disaster for a single building. These individual losses are aggregated to their associated economic sectors to calculate the total capital stock lost for each sector. However, the capital stock shocks used as inputs into the CGE model are scalars representing the percent of capital stock remaining, obtained by dividing the total capital stock remaining by the total capital stock before the natural disaster.

Parameters:

incore_client (IncoreClient) – Service authentication.

get_spec()

Get basic specifications.

Note

The get_spec will be called exactly once per instance (during __init__), so children should not assume that they can do weird dynamic magic during this call. See the example spec at the bottom of this file.

Returns:

A JSON object of basic specifications of the analysis.

Return type:

obj
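The description above reduces to a few lines of arithmetic: per-building capital loss, sector aggregation, then the remaining-share scalar. A sketch with illustrative sectors, values, and probabilities (not real data):

```python
# illustrative inventory: (sector, building value, capital loss probability)
buildings = [
    ("retail", 100.0, 0.2),
    ("retail", 300.0, 0.5),
    ("housing", 600.0, 0.1),
]

totals, losses = {}, {}
for sector, value, prob in buildings:
    totals[sector] = totals.get(sector, 0.0) + value
    losses[sector] = losses.get(sector, 0.0) + prob * value

# sector shock = share of capital stock remaining, the scalar fed to the CGE model
shocks = {s: (totals[s] - losses[s]) / totals[s] for s in totals}
```

With these numbers the retail sector keeps 230 of its 400 units of capital, a shock scalar of 0.575.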

analyses/combinedwindwavesurgebuildingdamage

class pyincore.analyses.combinedwindwavesurgebuildingdamage.CombinedWindWaveSurgeBuildingDamage(incore_client)

Determines the overall maximum building damage state from wind, flood, and surge-wave damage, using the maximum of the damage probabilities from the three hazards.

Parameters:

incore_client (IncoreClient) – Service authentication.

get_combined_damage(wind_dmg: DataFrame, sw_dmg: DataFrame, flood_dmg: DataFrame)

Calculates overall building damage by taking the maximum of the damage probabilities from the three hazards.

Parameters:
  • wind_dmg (pd.DataFrame) – Table of wind damage for the building inventory.

  • sw_dmg (pd.DataFrame) – Table of surge-wave damage for the building inventory.

  • flood_dmg (pd.DataFrame) – Table of flood damage for the building inventory.

Returns:

A table of combined damage probabilities for the building inventory.

Return type:

pd.DataFrame
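The "take the maximum across hazards" step can be sketched as a pandas merge followed by a row-wise max. The column names here are placeholders, not the analysis's actual schema:

```python
import pandas as pd

wind  = pd.DataFrame({"guid": ["a", "b"], "ds": [0.30, 0.10]})
sw    = pd.DataFrame({"guid": ["a", "b"], "ds": [0.50, 0.05]})
flood = pd.DataFrame({"guid": ["a", "b"], "ds": [0.20, 0.40]})

combined = (
    wind.rename(columns={"ds": "wind"})
    .merge(sw.rename(columns={"ds": "sw"}), on="guid")
    .merge(flood.rename(columns={"ds": "flood"}), on="guid")
)
# overall damage probability per building = row-wise max over the three hazards
combined["max_ds"] = combined[["wind", "sw", "flood"]].max(axis=1)
```

Building "a" is governed by surge-wave damage (0.5) and building "b" by flood damage (0.4).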

get_spec()

Get specifications of the combined wind, wave, and surge building damage analysis.

Returns:

A JSON object of specifications of the combined wind, wave, and surge building damage analysis.

Return type:

obj

run()

Executes combined wind, wave, surge building damage analysis.

analyses/combinedwindwavesurgebuildingloss

class pyincore.analyses.combinedwindwavesurgebuildingloss.CombinedWindWaveSurgeBuildingLoss(incore_client)

This analysis computes the building structural and content loss from wind, flood, and surge-wave damage.

Contributors
Science: Omar Nofal, John W. van de Lindt, Trung Do, Guirong Yan, Sara Hamideh, Daniel Cox, Joel Dietrich
Implementation: Jiate Li, Chris Navarro and NCSA IN-CORE Dev Team
Related publications

Nofal, Omar & Lindt, John & Do, Trung & Yan, Guirong & Hamideh, Sara & Cox, Daniel & Dietrich, Joel. (2021). Methodology for Regional Multi-Hazard Hurricane Damage and Risk Assessment. Journal of Structural Engineering. 147. 04021185. 10.1061/(ASCE)ST.1943-541X.0003144.

Parameters:

incore_client (IncoreClient) – Service authentication.

get_combined_loss(wind_dmg: DataFrame, sw_dmg: DataFrame, flood_dmg: DataFrame, buildings: DataFrame, content_cost: DataFrame, structure_cost: DataFrame)

Calculates structural and content loss from wind, surge-wave, and flood damage.

Parameters:
  • wind_dmg (pd.DataFrame) – Table of wind damage for the building inventory.

  • sw_dmg (pd.DataFrame) – Table of surge-wave damage for the building inventory.

  • flood_dmg (pd.DataFrame) – Table of flood damage for the building inventory.

  • buildings (pd.DataFrame) – Table of building attributes.

  • content_cost (pd.DataFrame) – Table of content cost ratios for each archetype.

  • structure_cost (pd.DataFrame) – Table of structural cost ratios for each archetype and loss type.

Returns:

A table of structural and content loss for each building.

Return type:

pd.DataFrame

get_spec()

Get specifications of the combined wind, wave, and surge building loss analysis.

Returns:

A JSON object of specifications of the combined wind, wave, and surge building loss analysis.

Return type:

obj

run()

Executes combined wind, wave, surge building loss analysis.

analyses/commercialbuildingrecovery

class commercialbuildingrecovery.commercialbuildingrecovery.CommercialBuildingRecovery(incore_client)

This analysis computes the recovery time needed for each commercial building to reach full restoration from any damage state. Currently, tornado is the only supported hazard.

The methodology incorporates a multi-layer Monte Carlo simulation approach and determines a two-step recovery time that includes delay and repair. The delay model is adapted from the REDi framework and calculates the end-result outcomes of delay-impeding factors such as post-disaster inspection, insurance claims, financing, and government permits. The repair model follows the FEMA P-58 approach and is controlled by fragility functions.

The output of this analysis is a CSV file with time-stepping recovery probabilities at the building level.

Contributors
Science: Wanting Lisa Wang, John W. van de Lindt
Implementation: Wanting Lisa Wang, and NCSA IN-CORE Dev Team
Related publications

Wang, W.L., Watson, M., van de Lindt, J.W. and Xiao, Y., 2023. Commercial Building Recovery Methodology for Use in Community Resilience Modeling. Natural Hazards Review, 24(4), p.04023031.

Parameters:

incore_client (IncoreClient) – Service authentication.

commercial_recovery(buildings, sample_damage_states, mcs_failure, redi_delay_factors, building_dmg, num_samples)

Calculates commercial building recovery for the given buildings.

Parameters:
  • buildings (list) – Buildings dataset.

  • sample_damage_states (pd.DataFrame) – Sample damage states.

  • redi_delay_factors (pd.DataFrame) – Delay factors based on the REDi framework.

  • mcs_failure (pd.DataFrame) – Building inventory failure probabilities.

  • building_dmg (pd.DataFrame) – Building damage states.

  • num_samples (int) – Number of sample scenarios to use.

Returns:

A dictionary with id/guid and commercial recovery for each quarter.

Return type:

dict

get_spec()

Get specifications of the commercial building recovery analysis.

Returns:

A JSON object of specifications of the commercial building recovery analysis.

Return type:

obj

recovery_rate(buildings, sample_damage_states, total_delay)

Gets the total time required for each commercial building to reach full restoration, determined by the combination of delay time and repair time.

Parameters:
  • buildings (list) – List of buildings

  • sample_damage_states (pd.DataFrame) – Samples’ damage states

  • total_delay (pd.DataFrame) – Total delay time of financial delay and other factors from REDi framework.

Returns:

Recovery time of all commercial buildings for each sample

Return type:

pd.DataFrame

run()

Executes the commercial building recovery analysis.

Returns:

True if successful, False otherwise.

Return type:

bool

static time_stepping_recovery(recovery_results)

Converts results to a time frame. Currently gives results for 16 quarters over 4 years.

Parameters:

recovery_results (pd.DataFrame) – Total recovery time of financial delay and other factors from REDi framework.

Returns:

Time formatted recovery results.

Return type:

pd.DataFrame
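One plausible reading of the time-stepping step: for each of the 16 quarters, report the fraction of Monte Carlo samples whose recovery time has already elapsed. This sketch is an assumption about the shape of the computation, not the analysis's exact code:

```python
def time_stepping(recovery_times, quarters=16, quarter_len=0.25):
    """Fraction of samples recovered by the end of each quarter (times in years)."""
    n = len(recovery_times)
    return [sum(t <= (q + 1) * quarter_len for t in recovery_times) / n
            for q in range(quarters)]

# four sample recovery times (years) for one building
probs = time_stepping([0.2, 0.6, 1.5, 3.9])
```

The resulting 16 values form a monotone recovery curve from the first quarter through year 4.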

static total_delay(buildings, sample_damage_states, mcs_failure, redi_delay_factors, damage, num_samples)

Calculates total delay by combining financial delay and other impeding factors from the REDi framework.

Parameters:
  • buildings (list) – List of buildings

  • sample_damage_states (pd.DataFrame) – Building inventory damage states.

  • mcs_failure (pd.DataFrame) – Building inventory failure probabilities

  • redi_delay_factors (pd.DataFrame) – Delay impeding factors such as post-disaster inspection, insurance claim, financing, and government permit based on building’s damage state.

  • damage (pd.DataFrame) – Damage states for building structural damage

  • num_samples (int) – number of sample scenarios to use

Returns:

Total delay time of all impeding factors from REDi framework.

Return type:

pd.DataFrame

analyses/cumulativebuildingdamage

class cumulativebuildingdamage.cumulativebuildingdamage.CumulativeBuildingDamage(incore_client)

This analysis computes the cumulative building damage for a combined earthquake and tsunami event. The process for computing the structural damage is done externally and the results for earthquake and tsunami are passed to this analysis. The damage intervals are then calculated from combined limit state probabilities for the two hazards.

cumulative_building_damage(eq_building_damage, tsunami_building_damage)

Run analysis for building damage results.

Parameters:
  • eq_building_damage (obj) – A JSON description of an earthquake building damage.

  • tsunami_building_damage (obj) – Set of all tsunami building damage results.

Returns:

A dictionary with building damage values and other data/metadata.

Return type:

OrderedDict
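Combining limit state probabilities for two hazards is commonly done with the inclusion-exclusion rule P = P_eq + P_tsu − P_eq·P_tsu, which assumes the two hazards act independently. The sketch below uses that rule; the analysis's exact formula may differ:

```python
eq  = {"ls-slight": 0.5, "ls-complete": 0.1}   # earthquake limit state probabilities
tsu = {"ls-slight": 0.4, "ls-complete": 0.2}   # tsunami limit state probabilities

# P(exceed limit state under either hazard), assuming independence
combined = {ls: eq[ls] + tsu[ls] - eq[ls] * tsu[ls] for ls in eq}
```

Damage intervals then follow from differences between successive combined limit state probabilities, as in the single-hazard damage analyses.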

cumulative_building_damage_bulk_input(eq_building_damage_set, tsunami_building_damage_set)

Run analysis for building damage results.

Parameters:
  • eq_building_damage_set (obj) – A set of earthquake building damage results.

  • tsunami_building_damage_set (obj) – A set of all tsunami building damage results.

Returns:

A list of ordered dictionaries with multiple damage values and other data/metadata.

Return type:

list

cumulative_building_damage_concurrent_future(function_name, num_workers, *args)

Utilizes the concurrent.futures module.

Parameters:
  • function_name (function) – The function to be parallelized.

  • num_workers (int) – Maximum number of workers for parallelization.

  • *args – Positional arguments, in order, passed to function_name.

Returns:

A list of ordered dictionaries with cumulative damage values and other data/metadata.

Return type:

list

get_spec()

Get specifications of the damage analysis.

Returns:

A JSON object of specifications of the cumulative damage analysis.

Return type:

obj

static load_csv_file(file_name)

Load csv file into Pandas DataFrame.

Parameters:

file_name (str) – Input csv file name.

Returns:

A table from the csv with headers and values.

Return type:

pd.DataFrame

run()

Executes cumulative building damage analysis.

analyses/electricpowerfacilityrestoration

analyses/epfdamage

class epfdamage.epfdamage.EpfDamage(incore_client)

Computes electric power facility structural damage for earthquake, tsunami, tornado, and hurricane hazards.

Parameters:

incore_client (IncoreClient) – Service authentication.

epf_damage_analysis_bulk_input(epfs, hazard, hazard_type, hazard_dataset_id)

Run analysis for multiple epfs.

Parameters:
  • epfs (list) – Multiple epfs from input inventory set.

  • hazard (obj) – A hazard object.

  • hazard_type (str) – A type of hazard exposure (earthquake, tsunami, tornado, or hurricane).

  • hazard_dataset_id (str) – An id of the hazard exposure.

Returns:

A list of ordered dictionaries with epf damage values and other data/metadata.

Return type:

list

epf_damage_concurrent_future(function_name, num_workers, *args)

Utilizes the concurrent.futures module.

Parameters:
  • function_name (function) – The function to be parallelized.

  • num_workers (int) – Maximum number of workers for parallelization.

  • *args – Positional arguments, in order, passed to function_name.

Returns:

A list of ordered dictionaries with epf damage values and other data/metadata.

Return type:

list

get_spec()

Get specifications of the epf damage analysis.

Returns:

A JSON object of specifications of the epf damage analysis.

Return type:

obj

run()

Executes electric power facility damage analysis.

class epfdamage.epfutil.EpfUtil

Utility methods for the electric power facility damage analysis.

analyses/epfrepaircost

class epfrepaircost.epfrepaircost.EpfRepairCost(incore_client)

Computes electric power facility repair cost.

Parameters:

incore_client (IncoreClient) – Service authentication.

epf_repair_cost_bulk_input(epfs)

Run analysis for multiple epfs.

Parameters:

epfs (list) – Multiple epfs from input inventory set.

Returns:

A list of ordered dictionaries with epf repair cost values and other data/metadata.

Return type:

list

epf_repair_cost_concurrent_future(function_name, num_workers, *args)

Utilizes the concurrent.futures module.

Parameters:
  • function_name (function) – The function to be parallelized.

  • num_workers (int) – Maximum number of workers for parallelization.

  • *args – Positional arguments, in order, passed to function_name.

Returns:

A list of ordered dictionaries with epf repair cost values and other data/metadata.

Return type:

list

get_spec()

Get specifications of the epf repair cost analysis.

Returns:

A JSON object of specifications of the epf repair cost analysis.

Return type:

obj

run()

Executes electric power facility repair cost analysis.

analyses/epnfunctionality

class epnfunctionality.epnfunctionality.EpnFunctionality(incore_client)

Computes electric power infrastructure functionality.

Parameters:

incore_client (IncoreClient) – Service client with authentication info.

epf_functionality(distribution_sub_nodes, gate_station_node_list, num_samples, sampcols, epf_sample_df1, G_ep)

Run EPN functionality analysis.

Parameters:
  • distribution_sub_nodes (list) – Distribution nodes.

  • gate_station_node_list (list) – Gate station nodes.

  • num_samples (int) – Number of simulations.

  • sampcols (list) – List of sample column names, e.g. “s0”, “s1”, ….

  • epf_sample_df1 (pd.DataFrame) – EPF MCS failure sample dataframe with an added field “weight”.

  • G_ep (networkx object) – Constructed network.

Returns:

fs_results (list) – A list of dictionaries with id/guid and failure state for N samples. fp_results (list) – A list of dictionaries with failure probability and other data/metadata.

Return type:

fs_results (list), fp_results (list)
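Conceptually, a distribution node is functional in a given sample when at least one gate station can still reach it through surviving components. A dependency-free sketch of that connectivity check using BFS over an adjacency dict (the analysis itself builds a networkx graph; all names here are illustrative):

```python
from collections import deque

def reachable(adj, sources, failed):
    """Nodes reachable from any surviving source after removing failed nodes."""
    seen = {s for s in sources if s not in failed}
    queue = deque(seen)
    while queue:
        node = queue.popleft()
        for nbr in adj.get(node, ()):
            if nbr not in failed and nbr not in seen:
                seen.add(nbr)
                queue.append(nbr)
    return seen

# toy network: a gate station feeding two substations through two poles
adj = {"gate": ["p1", "p2"], "p1": ["sub1"], "p2": ["sub2"]}
ok = reachable(adj, ["gate"], failed={"p1"})
```

When pole p1 fails in a sample, sub1 loses service while sub2 keeps it; repeating this over N failure samples yields per-node failure states and probabilities.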

get_spec()

Get specifications of the EPN functionality analysis.

Returns:

A JSON object of specifications of the EPN functionality analysis.

Return type:

obj

run()

Executes electric power network functionality analysis.

class epnfunctionality.epnfunctionality.EpnFunctionalityUtil

analyses/example

class example.exampleanalysis.ExampleAnalysis(incore_client)

Example Analysis demonstrates how to use the base analysis class by loading building data, computing mock damage output, and writing the result dataset.

building_damage_analysis(building)

Calculates building damage results for a single building.

Parameters:

building (obj) – A JSON mapping of a geometric object from the inventory: current building.

Returns:

A dictionary with building damage values and other data/metadata.

Return type:

OrderedDict

get_spec()

Get specifications of the building damage analysis.

Returns:

A JSON object of specifications of the building damage analysis.

Return type:

obj

run()

Executes building damage analysis.

analyses/galvestoncge

class galvestoncge.galvestoncge.GalvestonCGEModel(incore_client)

A computable general equilibrium (CGE) model is based on fundamental economic principles. A CGE model uses multiple data sources to reflect the interactions of households, firms and relevant government entities as they contribute to economic activity. The model is based on (1) utility-maximizing households that supply labor and capital, using the proceeds to pay for goods and services (both locally produced and imported) and taxes; (2) the production sector, with perfectly competitive, profit-maximizing firms using intermediate inputs, capital, land and labor to produce goods and services for both domestic consumption and export; (3) the government sector that collects taxes and uses tax revenues in order to finance the provision of public services; and (4) the rest of the world.

Parameters:

incore_client (IncoreClient) – Service authentication.

galveston_cge(iNum, sam, bb, jobcr, misch, employ, outcr, sector_shocks)
Parameters:
  • iNum (int) –

  • sam (pd.DataFrame) –

  • bb (str) –

  • jobcr

  • misch

  • employ

  • outcr

  • sector_shocks

Returns:

get_spec()

Get basic specifications.

Note

The get_spec will be called exactly once per instance (during __init__), so children should not assume that they can do weird dynamic magic during this call. See the example spec at the bottom of this file.

Returns:

A JSON object of basic specifications of the analysis.

Return type:

obj

run()

Returns:

class galvestoncge.equationlib.VarContainer

All matrix variables (tables) in the GAMS model are flattened to an array to provide a better interface to the solver.

AllVarList stores all initial values of variables used in the GAMS model in an array. It also has an indexing system for lookups.

namelist

A dictionary with all stored GAMS variables and their information.

nvars

The length of the array, i.e. the size of all matrix variables summed up.

initialVals

Stored initial values of all variables. Initialized to an empty list.
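The flattening scheme can be sketched as a small container that records, per variable, its offset into one shared array plus its labels; getIndex-style lookups then become offset arithmetic. The class below is illustrative only, not the equationlib implementation:

```python
import pandas as pd

class FlatVars:
    """Toy flattening container: one shared array plus per-variable offsets."""

    def __init__(self):
        self.namelist = {}       # name -> {"start": offset, "labels": [...]}
        self.initial_vals = []   # every variable flattened into one array

    def init(self, name, series):
        self.namelist[name] = {"start": len(self.initial_vals),
                               "labels": list(series.index)}
        self.initial_vals.extend(series.tolist())

    def get_index(self, name, label):
        info = self.namelist[name]
        return info["start"] + info["labels"].index(label)

flat = FlatVars()
flat.init("KS", pd.Series({"retail": 10.0, "housing": 20.0}))
flat.init("LS", pd.Series({"labor": 5.0}))
```

After both calls, the shared array holds all three values and "LS"/"labor" resolves to flat index 2, which is how the solver-facing array and the labeled GAMS tables stay in sync.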

add(name, rows=None, cols=None)
Parameters:
  • name

  • rows

  • cols

Returns:

get(name, x=None)

Returns a DataFrame, Series, or variable based on the given name and the result array returned from the solver.

Parameters:

name – GAMS variable name

Returns:

If x is not given, returns the initial values; if x is set to the solver result, returns the resulting variable value.

getIndex(name, row=None, col=None)

Look up the index by providing the variable name and label information.

Parameters:
  • name – Name of the GAMS variable to look up.

  • row – Row label of the position to look up the index for (if it has row labels).

  • col – Column label of the position to look up the index for (if it has column labels).

Returns:

The index of the position in the array.

getInfo(name)

Get the information about a GAMS variable.

Parameters:

name (str) – Name of the GAMS variable to look up.

Returns:

A dictionary with all its information.

getLabel(index)

Look up the variable name and label information by providing the index.

Parameters:

index – The index in the array.

Returns:

Its information, including the variable name, row label, and column label if applicable.

inList(name)

Check whether a GAMS variable has been added to the container.

Parameters:

name (str) – Name of the GAMS variable to look up.

Returns:

Boolean, whether the variable is added.

init(name, initialValue)

Flatten the table variable and add to the list. Also set the initial variable values array.

Parameters:
  • name – Name of the variable in GAMS

  • initialValue – a pandas DataFrame or pandas Series with initial values

Returns:

None.

lo(name, value)

Set the lower bounds (LOs) of a GAMS variable, providing them as a DataFrame, Series, int, or float.

Parameters:
  • name – GAMS variable name

  • value – The lower bound to be set

Returns:

None

set_value(name, values, target)

An internal method for setting the initial values or the UPs and LOs of variables.

Parameters:
  • name – Name of the variable in GAMS.

  • values – A pandas DataFrame, pandas Series, int, or float with the values to set.

  • target – Target array to be set.

Returns:

None

up(name, value)

Set the upper bounds (UPs) of a GAMS variable, providing them as a DataFrame, Series, int, or float.

Parameters:
  • name – GAMS variable name

  • value – The upper bound to be set

Returns:

None

write(filename)

Write (append) the variables to a file, in the format used for setting ipopt model variables.

Parameters:

filename – the output filename

Returns:

None

class galvestoncge.equationlib.ExprItem(v, const=1)

You can construct it with a variable, a constant, or a deep copy of another ExprItem.

class galvestoncge.equationlib.Expr(item)
class galvestoncge.equationlib.ExprM(vars, name=None, rows=None, cols=None, m=None, em=None)

Three ways to create an ExprMatrix:

  1. Give it the variable name and selected rows and cols (may be empty); the constructor creates an Expression matrix from the variable matrix.

  2. Give it a pandas Series or DataFrame; it creates the Expression matrix with the content of the Series or DataFrame as constants.

  3. Give it an ExprMatrix; it returns a deep copy of it.

__invert__()

Return the transpose of an Expression matrix.

__xor__(rhs)

Create a 2D list out of two single lists.

loc(rows=None, cols=None)

Get a subset of the matrix by labels.

analyses/housingrecovery

class housingrecovery.housingrecovery.HousingRecovery(incore_client)

The analysis predicts building values and value changes over time following a disaster event. The model is calibrated with respect to demographics, parcel data, and building value trajectories following Hurricane Ike (2008) in Galveston, Texas. The model predicts building value at the parcel level for 8 years of observation. The models rely on Census (Decennial or American Community Survey, ACS) and parcel data immediately prior to the disaster event (year -1) as inputs for prediction.

The Galveston, TX example makes use of 2010 Decennial Census and Galveston County Appraisal District (GCAD) tax assessor data, along with outputs from other analyses (i.e., Building Damage, Housing Unit Allocation, Population Dislocation).

The analysis outputs CSV files of the building values for the 6 years following the disaster event (with year 0 being the impact year).

Contributors
Science: Wayne Day, Sarah Hamideh
Implementation: Michal Ondrejcek, Santiago Núñez-Corrales, and NCSA IN-CORE Dev Team
Related publications

Hamideh, S., Peacock, W. G., & Van Zandt, S. (2018). Housing recovery after disasters: Primary versus seasonal/vacation housing markets in coastal communities. Natural Hazards Review.

Parameters:

incore_client (IncoreClient) – Service authentication.

assemble_phm_coefs(hru, hse_rec)

Assemble Primary Housing Market (PHM) data for the full inventory and all damage-related years.

Parameters:
  • hru (obj) – Housing recovery utility.

  • hse_rec (pd.DataFrame) – Area inventory including losses.

Returns:

Final coefficients for all damage years.

Return type:

np.array

assemble_svhm_coefs(hru, hse_rec)

Assemble Seasonal/Vacation Housing Market (SVHM) data for the full inventory and all damage-related years.

Parameters:
  • hru (obj) – Housing recovery utility.

  • hse_rec (pd.DataFrame) – Area inventory including losses.

Returns:

Final coefficients for all damage years.

Return type:

np.array

get_owneship(popd)

Filter ownership based on the vacancy codes.

Assumption: Where ownershp is “missing”, vacancy codes 0/3/4 are considered owner-occupied and 1/2/5/6/7 renter-occupied. It is uncertain whether vacancy codes 3, 4, 5, 6, 7 will become owner- or renter-occupied, or primarily one or the other.

Parameters:

popd (pd.DataFrame) – Population dislocation results with ownership information.

Returns:

Ownership data.

Return type:

pd.DataFrame
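The stated assumption maps vacancy codes to tenure roughly like this; the 1/2 owner/renter encoding and the helper name are illustrative, not the analysis's actual columns:

```python
OWNER_VACANCY_CODES = {0, 3, 4}   # assumed owner-occupied when tenure is missing

def fill_ownership(ownershp, vacancy):
    """Keep the reported tenure when present; otherwise infer it from the vacancy code."""
    if ownershp is not None:
        return ownershp
    return 1 if vacancy in OWNER_VACANCY_CODES else 2   # 1 = owner, 2 = renter

# (reported tenure or None, vacancy code) for four example records
rows = [(1, 0), (None, 3), (None, 5), (2, 7)]
tenure = [fill_ownership(o, v) for o, v in rows]
```

Records with a reported tenure pass through unchanged; the two missing records are classified by their vacancy codes.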

get_spec()

Get basic specifications.

Note

The get_spec will be called exactly once per instance (during __init__), so children should not assume that they can do weird dynamic magic during this call. See the example spec at the bottom of this file.

Returns:

A JSON object of basic specifications of the analysis.

Return type:

obj

get_vac_season_housing(vac_status)

Calculate the percentage of vacation or seasonal housing among all housing units within a census tract and add a dummy variable flagging the census tract as a seasonal/vacation housing submarket.

Parameters:

vac_status (obj) – Seasonal/vacation housing Census ACS data from the json reader.

Returns:

Seasonal/vacation housing data.

Return type:

pd.DataFrame

merge_add_inv(hse_rec, addl_struct)

Merge study area and additional structure information.

Parameters:
  • hse_rec (pd.DataFrame) – Area inventory.

  • addl_struct (pd.DataFrame) – Additional infrastructure inventory.

Returns:

Final merge of the two inventories.

Return type:

pd.DataFrame

merge_block_data(hse_rec, bg_mhhinc)

Merge block group level median household income.

Parameters:
  • hse_rec (pd.DataFrame) – Area inventory.

  • bg_mhhinc (pd.DataFrame) – Block data.

Returns:

Final merge of the two inventories.

Return type:

pd.DataFrame

merge_seasonal_data(hse_rec, vac_status)

Merge the study area with seasonal/vacation housing Census ACS data.

Parameters:
  • hse_rec (pd.DataFrame) – Area inventory.

  • vac_status (pd.DataFrame) – Seasonal/vacation housing Census ACS data.

Returns:

Final merge of the two inventories.

Return type:

pd.DataFrame

run()

Executes the Housing recovery analysis.

Returns:

True if successful, False otherwise.

Return type:

bool

value_loss(hse_rec)

Estimate value loss for each parcel based on parameters from Bai, Hueste, & Gardoni (2009).

Parameters:

hse_rec (pd.DataFrame) – Area inventory.

Returns:

Inventory with value losses.

Return type:

pd.DataFrame

class housingrecovery.housingrecoveryutil.HousingRecoveryUtil

analyses/housingrecoverysequential

class housingrecoverysequential.housingrecoverysequential.HousingRecoverySequential(incore_client)

This analysis computes the series of household recovery states given a population dislocation dataset, a transition probability matrix (TPM) and an initial state vector.

The computation operates by segregating household units into five zones as a way of assigning social vulnerability. Using this vulnerability in conjunction with the TPM and the initial state vector, a Markov chain computation simulates most probable states to generate a stage history of housing recovery changes for each household.

The output of the computation is the history of housing recovery changes for each household unit in CSV format.

Contributors
Science: Elaina Sutley, Sara Hamideh
Implementation: Nathanael Rosenheim, Santiago Núñez-Corrales, and NCSA IN-CORE Dev Team
Related publications

Sutley, E.J. and Hamideh, S., 2020. Postdisaster housing stages: a Markov chain approach to model sequences and duration based on social vulnerability. Risk Analysis, 40(12), pp.2675-2695.

Parameters:

incore_client (IncoreClient) – Service authentication.
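The Markov chain step described above can be sketched in plain Python. This is a minimal illustration only: the five-stage TPM below and the stage indices 0–4 are made-up placeholders, not the pyincore implementation or the published matrix.

```python
import random

# Hypothetical 5-stage transition probability matrix; each row sums to 1.
TPM = [
    [0.6, 0.3, 0.1, 0.0, 0.0],
    [0.1, 0.6, 0.2, 0.1, 0.0],
    [0.0, 0.1, 0.6, 0.2, 0.1],
    [0.0, 0.0, 0.1, 0.7, 0.2],
    [0.0, 0.0, 0.0, 0.1, 0.9],
]

def simulate_stage_history(initial_stage, t_final, tpm, rng):
    """Draw a recovery-stage history of length t_final for one household
    by repeatedly sampling the next stage from the current stage's row."""
    history = [initial_stage]
    for _ in range(t_final - 1):
        weights = tpm[history[-1]]
        history.append(rng.choices(range(len(weights)), weights=weights)[0])
    return history

history = simulate_stage_history(0, 10, TPM, random.Random(42))
```

Passing a seeded `random.Random` keeps the simulated history reproducible across runs, mirroring how the analysis threads an explicit random state through its computations.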

static compute_regressions(markov_stages, household, lower, upper)

Compute regressions for a given household in the interval near t using the interval [lower, upper], and adjust for stage inversion at the upper boundary.

Parameters:
  • markov_stages (np.Array) – Markov chain stages for all households.

  • household (int) – Index of the household.

  • lower (int) – Lower index to check past history.

  • upper (int) – Upper index to check past history.

Returns:

Number of regressions between a given time window.

Return type:

int
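A regression count over a time window can be sketched as follows. The semantics here are an assumption for illustration (a "regression" is any step inside the window where the stage falls back relative to the previous step), not the exact pyincore definition.

```python
def count_regressions(markov_stages, lower, upper):
    """Count time steps in (lower, upper] where a household's stage
    moves backward relative to the previous step."""
    return sum(
        1
        for t in range(lower + 1, upper + 1)
        if markov_stages[t] < markov_stages[t - 1]
    )

# Stage history 1 -> 2 -> 1 -> 2 -> 3 falls back exactly once.
n = count_regressions([1, 2, 1, 2, 3], 0, 4)
```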

compute_social_vulnerability_values(households_df, num_households, rng)

Compute the social vulnerability score of a household depending on its zone.

Parameters:
  • households_df (pd.DataFrame) – Information about household zones.

  • num_households (int) – Number of households.

  • rng (np.RandomState) – Random state to draw pseudo-random numbers from.

Returns:

social vulnerability scores.

Return type:

pd.Series
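The zone-based draw can be illustrated without pandas. The five zone labels and score ranges below are hypothetical placeholders, not the actual distributions used by the analysis.

```python
import random

# Hypothetical score range per social-vulnerability zone.
ZONE_RANGES = {
    "Z1": (0.0, 0.2), "Z2": (0.2, 0.4), "Z3": (0.4, 0.6),
    "Z4": (0.6, 0.8), "Z5": (0.8, 1.0),
}

def draw_vulnerability_scores(zones, rng):
    """Draw one pseudo-random score per household from its zone's range."""
    return [rng.uniform(*ZONE_RANGES[zone]) for zone in zones]

scores = draw_vulnerability_scores(["Z1", "Z3", "Z5"], random.Random(0))
```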

static compute_social_vulnerability_zones(sv_result, households_df)

Compute the social vulnerability score based on dislocation attributes. Updates the dislocation dataset by adding a new Zone column and removing values with missing Zone.

Parameters:
  • sv_result (pd.DataFrame) – output from social vulnerability analysis

  • households_df (pd.DataFrame) – Vector position of a household.

Returns:

Social vulnerability score.

Return type:

pd.DataFrame

get_spec()

Get specifications of the housing serial recovery model.

Returns:

A JSON object of specifications of the housing serial recovery model.

Return type:

obj

hhrs_concurrent_future(function_name, parallelism, *args)

Utilizes concurrent.future module.

Parameters:
  • function_name (function) – The function to be parallelized.

  • parallelism (int) – Number of workers in parallelization.

  • *args – All the arguments in order to pass into parameter function_name.

Returns:

outcome DataFrame containing the results from the concurrent function.

Return type:

pd.DataFrame
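hhrs_concurrent_future wraps the standard concurrent.futures pattern. A minimal stdlib sketch of that pattern follows, using threads and a toy function for illustration rather than the actual household-recovery worker.

```python
from concurrent.futures import ThreadPoolExecutor

def run_concurrent(function_name, parallelism, values):
    """Apply function_name to each item in values using a pool of
    workers; executor.map preserves the input order in the results."""
    with ThreadPoolExecutor(max_workers=parallelism) as executor:
        return list(executor.map(function_name, values))

results = run_concurrent(lambda x: x * x, 2, [1, 2, 3, 4])
```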

housing_serial_recovery_model(households_df, t_delta, t_final, tpm, initial_prob)

Performs the computation of the model as indicated in Sutley and Hamideh (2020).

Parameters:
  • households_df (pd.DataFrame) – Households with population dislocation data.

  • t_delta (float) – Time step size.

  • t_final (float) – Final time.

  • tpm (np.Array) – Transition probability matrix.

  • initial_prob (pd.DataFrame) – Initial probability Markov vector.

Returns:

outcome of the HHRS model for a given household dataset.

Return type:

pd.DataFrame

run()

Execute the HHRS analysis using parameters and input data.

analyses/housingunitallocation

class housingunitallocation.housingunitallocation.HousingUnitAllocation(incore_client)
compare_columns(table, col1, col2, drop)

Compare two columns. If they are not equal, create a True/False column; if they are equal, rename one of them with the base name and drop the other.

Parameters:
  • table (pd.DataFrame) – Data Frame table

  • col1 (pd.Series) – Column 1

  • col2 (pd.Series) – Column 2

  • drop (bool) – rename and drop column

Returns:

Table with True/False column

Return type:

pd.DataFrame

compare_merges(table1_cols, table2_cols, table_merged)

Compare two lists of columns and run compare_columns on the columns present in both lists. It assumes the merge suffixes are _x and _y.

Parameters:
  • table1_cols (Index) – columns in table 1

  • table2_cols (Index) – columns in table 2

  • table_merged (pd.DataFrame) – merged table

Returns:

Merged table.

Return type:

pd.DataFrame
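The _x/_y convention comes from pandas merges of tables that share column names. Finding the pairs to compare can be sketched without pandas; the column names below are illustrative.

```python
def paired_suffix_columns(table1_cols, table2_cols):
    """Return the (name_x, name_y) column pairs a pandas merge would
    produce for non-key column names shared by both tables."""
    common = sorted(set(table1_cols) & set(table2_cols))
    return [(f"{name}_x", f"{name}_y") for name in common]

pairs = paired_suffix_columns(["guid", "addr", "units"], ["guid", "addr", "bldg"])
```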

get_iteration_probabilistic_allocation(housing_unit_inventory, address_point_inventory, building_inventory, seed)

Merge inventories

Parameters:
  • housing_unit_inventory (pd.DataFrame) – Housing Unit Inventory

  • address_point_inventory (pd.DataFrame) – Address Point inventory

  • building_inventory (pd.DataFrame) – Building inventory

  • seed (int) – random number generator seed for reproducibility

Returns:

Merged table.

Return type:

pd.DataFrame

get_spec()

Get basic specifications.

Note

The get_spec will be called exactly once per instance (during __init__), so children should not assume that they can do weird dynamic magic during this call. See the example spec at the bottom of this file.

Returns:

A JSON object of basic specifications of the analysis.

Return type:

obj

merge_infrastructure_inventory(address_point_inventory, building_inventory)

Merge the Building and Address Point inventories.

Parameters:
  • address_point_inventory (pd.DataFrame) – address point inventory

  • building_inventory (pd.DataFrame) – building inventory

Returns:

merged address and building inventories

Return type:

pd.DataFrame

merge_inventories(sorted_housing_unit: DataFrame, sorted_infrastructure: DataFrame)

Merge (Sorted) Housing Unit Inventory and (Sorted) Infrastructure Inventory.

Parameters:
  • sorted_housing_unit (pd.DataFrame) – Sorted Housing Unit Inventory

  • sorted_infrastructure (pd.DataFrame) – Sorted infrastructure inventory. This includes Building inventory and Address point inventory.

Returns:

Final merge of all four inventories

Return type:

pd.DataFrame

prepare_housing_unit_inventory(housing_unit_inventory, seed)

Assign a random merge order to the housing unit inventory and sort it, using the seed for reproducibility.

Parameters:
  • housing_unit_inventory (pd.DataFrame) – Housing unit inventory.

  • seed (int) – Random number generator seed for reproducibility.

Returns:

Sorted housing unit inventory.

Return type:

pd.DataFrame

prepare_infrastructure_inventory(seed_i: int, critical_bld_inv: DataFrame)

Assign a random merge order to the merged Building and Address inventories, using the main seed value.

Parameters:
  • seed_i (int) – random number generator seed for reproducibility

  • critical_bld_inv (pd.DataFrame) – Merged inventories

Returns:

Sorted merged critical infrastructure

Return type:

pd.DataFrame

run()
Merges Housing Unit Inventory, Address Point Inventory and Building Inventory.

The results of this analysis are aggregated per structure/building. Generates one csv result per iteration.

Returns:

True if successful, False otherwise

Return type:

bool

analyses/indp

class indp.indp.INDP(incore_client)

This class runs INDP or td-INDP for a given number of time steps and input parameters. This analysis takes a decentralized approach to solving the Interdependent Network Design Problem (INDP), a family of centralized Mixed-Integer Programming (MIP) models that find the optimal restoration strategy of disrupted networked systems subject to budget and operational constraints.

Contributors
Science: Hesam Talebiyan
Implementation: Chen Wang and NCSA IN-CORE Dev Team
Parameters:

incore_client (IncoreClient) – Service authentication.

get_spec()

Get basic specifications.

Note

The get_spec will be called exactly once per instance (during __init__), so children should not assume that they can do weird dynamic magic during this call. See the example spec at the bottom of this file.

Returns:

A JSON object of basic specifications of the analysis.

Return type:

obj

indp(N, v_r, T=1, layers=None, controlled_layers=None, functionality=None, fixed_nodes=None, print_cmd=True, co_location=True)

INDP optimization problem in Pyomo. It also solves td-INDP if T > 1.

Parameters:
  • N (InfrastructureNetwork) – An InfrastructureNetwork instance.

  • v_r (dict) – Dictionary of the number of resources of different types in the analysis. If the value is a scalar for a type, it shows the total number of resources of that type for all layers. If the value is a list for a type, it shows the total number of resources of that type given to each layer.

  • T (int, optional) – Number of time steps to optimize over. T=1 shows an iINDP analysis, and T>1 shows a td-INDP. The default is 1.

  • layers (list, optional) – Layer IDs in N included in the optimization.

  • controlled_layers (list, optional) – Layer IDs that can be recovered in this optimization. Used for decentralized optimization. The default is None.

  • functionality (dict, optional) – Dictionary of nodes to functionality values for non-controlled nodes. Used for decentralized optimization. The default is None.

  • fixed_nodes (dict, optional) – It fixes the functionality of given elements to a given value. The default is None.

  • print_cmd (bool, optional) – If true, analysis information is written to the console. The default is True.

  • co_location (bool, optional) – If false, exclude geographical interdependency from the optimization. The default is True.

Returns:

A list of the form [m, results] for a successful optimization, where m is the optimization model and results is an INDPResults object generated using collect_results(). If solution_pool is set to a number, the function returns [m, results, sol_pool_results], where sol_pool_results is a dictionary of solutions that should be retrieved from the optimizer in addition to the optimal one collected using collect_solution_pool().

Return type:

list

run_indp(params, layers=None, controlled_layers=None, functionality=None, T=1, save=True, suffix='', forced_actions=False, save_model=False, print_cmd_line=True, co_location=True)

This function runs iINDP (T=1) or td-INDP for a given number of time steps and input parameters.

Parameters:
  • params (dict) – Parameters that are needed to run the INDP optimization.

  • layers (list) – List of layers in the interdependent network. The default is None, which sets the list to [1, 2, 3].

  • controlled_layers (list) – List of layers that are included in the analysis. The default is None, which sets the list equal to layers.

  • functionality (dict) – Dictionary used to assign functionality values to elements in the network before the analysis starts. The default is None.

  • T (int) – Number of time steps to optimize over. T=1 shows an iINDP analysis, and T>1 shows a td-INDP. The default is 1.

  • save (bool) – If the results should be saved to file. The default is True.

  • suffix (str) – The suffix that should be added to the output files when saved. The default is ''.

  • forced_actions (bool) – If True, the optimizer is forced to repair at least one element in each time step. The default is False.

  • save_model (bool) – If the optimization model should be saved to file. The default is False.

  • print_cmd_line (bool) – If full information about the analysis should be written to the console. The default is True.

  • co_location (bool) – If co-location and geographical interdependency should be considered in the analysis. The default is True.

Returns:

~indputils.INDPResults object containing the optimal restoration decisions.

Return type:

indp_results (INDPResults)

run_method(fail_sce_param, v_r, layers, method, t_steps=10, misc=None, save_model=False)

This function runs restoration analysis based on INDP or td-INDP for different numbers of resources.

Parameters:
  • fail_sce_param (dict) – Information about damage scenarios.

  • v_r (list) – Number of resources. If this is a list of floats, each float is interpreted as a different total number of resources, and INDP is run given each total. If this is a list of lists of floats, each inner list is interpreted as fixed upper bounds on the number of resources each layer can use.

  • layers (list) – List of layers.

  • method (str) – Algorithm type.

  • t_steps (int) – Number of time steps of the analysis.

  • misc (dict) – A dictionary that contains miscellaneous data needed for the analysis.

  • save_model (bool) – Flag that indicates whether the model should be saved.

class indp.dislocationutils.DislocationUtil
static create_dynamic_param(params, pop_dislocation, dt_params, T=1, N=None)

This function computes the change of demand values over time based on population dislocation models.

Parameters:
  • params (dict) – Parameters that are needed to run the INDP optimization.

  • T (int, optional) – Number of time steps to optimize over. T=1 shows an iINDP analysis, and T>1 shows a td-INDP. The default is 1.

  • N (InfrastructureNetwork) – The object containing the network data.

Returns:

dynamic_params – Dictionary of dynamic demand value for nodes

Return type:

dict

class indp.indpcomponents.INDPComponents

This class stores components of a network

components

List of components of the network

Type:

list

num_components

Number of components of the network

Type:

int

gc_size

Size of the largest component of the network

Type:

int

add_component(members, excess_supply)

This function adds a component.

Parameters:
  • members (list) – List of nodes in the component

  • excess_supply (float) – Excess supply within the component

Return type:

None.

classmethod calculate_components(m, net, t=0, layers=None)

Find the components and the corresponding excess supply

Parameters:
  • m (gurobi.Model) – The object containing the solved optimization problem.

  • net (networkx.DiGraph) – The networkx graph object that stores node, arc, and interdependency information

  • t (int) – Time step. The default is zero.

  • layers (list) – List of layers in the analysis

Returns:

indp_components – The object containing the components

Return type:

INDPComponents

classmethod from_csv_string(csv_string)

This function reads the component data from a string as an INDPComponents object.

Parameters:

csv_string (str) – The string containing the component data

Returns:

indp_components – The object containing the components

Return type:

INDPComponents

to_csv_string()

Convert the list of components to a string

Returns:

List of components as a string.

Return type:

str
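The to_csv_string/from_csv_string pair implies a simple serialization round trip. The field layout below is a generic sketch under assumed delimiters, not pyincore's actual format.

```python
def components_to_csv_string(components):
    """Serialize [(member_list, excess_supply), ...] into one string,
    joining members with '/' and components with ';'."""
    return ";".join(
        "/".join(members) + ":" + repr(excess) for members, excess in components
    )

def components_from_csv_string(text):
    """Parse the string produced above back into component tuples."""
    out = []
    for chunk in text.split(";"):
        members, excess = chunk.rsplit(":", 1)
        out.append((members.split("/"), float(excess)))
    return out

row = [(["n1", "n2"], 3.5), (["n7"], -1.0)]
round_trip = components_from_csv_string(components_to_csv_string(row))
```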

class indp.indpresults.INDPResults(layers=None)

This class saves INDP results including actions, costs, run time, and components

results

Dictionary containing INDP results including actions, costs, run time, and components

Type:

dict

layers

List of layers in the analysis

Type:

list

results_layer

Dictionary containing INDP results for each layer including actions, costs, run time, and components

Type:

dict

add_action(t, action, save_layer=True)

This function adds restoration actions to the results.

Parameters:
  • t (int) – The time steps to which the actions should be added

  • action (list) – List of actions that are added

  • save_layer (bool) – If the actions should be added for each layer. The default is True.

Return type:

None.

add_components(t, components)

This function adds the components to the results

Parameters:
  • t (int) – The time steps to which the number of components should be added

  • components (list) – The list of components that is added

Return type:

None.

add_cost(t, cost_type, cost, cost_layer=None)

This function adds cost values to the results.

Parameters:
  • t (int) – The time steps to which the costs should be added

  • cost_type (str) – The cost types that is added. The options are: “Space Prep”, “Arc”, “Node”, “Over Supply”, “Under Supply”, “Flow”, “Total”, “Under Supply Perc”

  • cost (float) – The cost value that is added

  • cost_layer (dict) – The cost value that is added for each layer. The default is None, which adds no value for layers.

Return type:

None.

add_gc_size(t, gc_size)

This function adds the giant component size to the results

Parameters:
  • t (int) – The time steps to which the giant component size should be added

  • gc_size (int) – The giant component size that is added

Return type:

None.

add_num_components(t, num_components)

This function adds the number of components to the results

Parameters:
  • t (int) – The time steps to which the number of components should be added

  • num_components (int) – The number of components that is added

Return type:

None.

add_run_time(t, run_time, save_layer=True)

This function adds run time to the results.

Parameters:
  • t (int) – The time steps to which the run time should be added

  • run_time (float) – The run time that is added

  • save_layer (bool) – If the run time should be added for each layer. The default is True.

Return type:

None.

extend(indp_result, t_offset=0, t_start=0, t_end=0)

This function extends the results to accommodate a new time step.

Parameters:
  • indp_result (INDPResults) – The current INDPResults object before extension

  • t_offset (int) – The number of time steps that the current results should be shifted forward. The default is 0.

  • t_start (int) – The starting time step. The default is 0.

  • t_end (int) – The ending time step. The default is 0.

Return type:

None.

classmethod from_csv(out_dir, sample_num=1, suffix='')

This function reads the results from file.

Parameters:
  • out_dir (str) – Output directory from which the results should be read

  • sample_num (int) – The sample number corresponding to the results. The default is 1.

  • suffix (str) – The suffix of the file that is being read. The default is “”.

Returns:

indp_result – The INDPResults object containing the read results

Return type:

INDPResults

to_csv_layer(out_dir, sample_num=1, suffix='')

This function writes the results to file for each layer. The files for each layer are distinguished by "_L" plus the layer number.

Parameters:
  • out_dir (str) – Output directory to which the results should be written.

  • sample_num (int) – The sample number corresponding to the results. The default is 1.

  • suffix (str) – The suffix of the file that is being written. The default is “”.

Return type:

None.

class indp.indputil.INDPUtil
static apply_recovery(N, indp_results, t)

This function applies the restoration decisions (solution of INDP) to a Gurobi model by changing the state of repaired elements to functional

Parameters:
  • N (InfrastructureNetwork) – The model of the interdependent network.

  • indp_results (INDPResults) – A INDPResults object containing the optimal restoration decisions.

  • t (int) – The time step to which the results should apply.

Return type:

None.

static collect_results(model, controlled_layers, coloc=True)

This function computes the results (actions and costs) of the optimal results and writes them to a INDPResults object.

Parameters:
  • model (pyomo.model) – The object containing the solved optimization problem.

  • controlled_layers (list) – Layer IDs that can be recovered in this optimization.

  • coloc (bool, optional) – If false, exclude geographical interdependency from the results. The default is True.

Returns:

indp_results – An INDPResults object containing the optimal restoration decisions.

Return type:

INDPResults

static collect_solution_pool(m, T, n_hat_prime, a_hat_prime)

This function collects the result (list of repaired nodes and arcs) for all feasible solutions in the solution pool.

Parameters:
  • m (gurobi.Model) – The object containing the solved optimization problem.

  • T (int) – Number of time steps in the optimization (T=1 for iINDP, and T>=1 for td-INDP).

  • n_hat_prime (list) – List of damaged nodes in controlled networks.

  • a_hat_prime (list) – List of damaged arcs in controlled networks.

Returns:

sol_pool_results – A dictionary containing one dictionary per solution, each listing the repaired nodes and arcs in that solution.

Return type:

dict

static get_resource_suffix(params)

This function generates the part of the suffix of result folders that pertains to resource cap(s).

Parameters:

params (dict) – Parameters that are needed to run the INDP optimization.

Returns:

out_dir_suffix_res – The part of the suffix of result folders that pertains to resource cap(s).

Return type:

str

static initialize_network(power_nodes, power_arcs, water_nodes, water_arcs, interdep, cost_scale=1.0, extra_commodity=None)

This function initializes an InfrastructureNetwork object based on network data.

Parameters:
  • cost_scale (float) – Scales the cost to improve efficiency. The default is 1.0.

  • extra_commodity (dict) – Dictionary of commodities other than the default one for each layer of the network. The default is None, which means that there is only one commodity per layer.

Returns:

interdep_net – The InfrastructureNetwork object containing the network data.

Return type:

InfrastructureNetwork

static save_indp_model_to_file(model, out_model_dir, t, layer=0, suffix='')

This function saves pyomo optimization model to file.

Parameters:
  • model (Pyomo.Model) – Pyomo optimization model

  • out_model_dir (str) – Directory to which the models should be written

  • t (int) – The time step corresponding to the model

  • layer (int) – The layer number corresponding to the model. The default is 0, which means the model includes all layers in the analysis

  • suffix (str) – The suffix that should be added to files when saved. The default is ‘’.

Return type:

None.

static time_resource_usage_curves(power_arcs, power_nodes, water_arcs, water_nodes, wf_restoration_time_sample, wf_repair_cost_sample, pipeline_restoration_time, pipeline_repair_cost, epf_restoration_time_sample, epf_repair_cost_sample)

This function calculates the repair time for nodes and arcs for the current scenario based on their damage state, and writes them to the input files of INDP. Currently, it is only compatible with NIST testbeds.

Parameters:
  • power_arcs (dataframe) –

  • power_nodes (dataframe) –

  • water_arcs (dataframe) –

  • water_nodes (dataframe) –

  • wf_restoration_time_sample (dataframe) –

  • wf_repair_cost_sample (dataframe) –

  • pipeline_restoration_time (dataframe) –

  • pipeline_repair_cost (dataframe) –

  • epf_restoration_time_sample (dataframe) –

  • epf_repair_cost_sample (dataframe) –

Returns:

water_nodes, water_arcs, power_nodes, power_arcs – The updated dataframes.

class indp.infrastructurearc.InfrastructureArc(source, dest, layer, is_interdep=False)

This class models an arc in an infrastructure network

source

Start (or head) node id

Type:

int

dest

End (or tail) node id

Type:

int

layer

The id of the layer of the network to which the arc belongs

Type:

int

failure_probability

Failure probability of the arc

Type:

float

functionality

Functionality state of the node

Type:

bool

repaired

If the arc is repaired or not

Type:

bool

flow_cost

Unit cost of sending the main commodity through the arc

Type:

float

reconstruction_cost

Reconstruction cost of the arc

Type:

float

capacity

Maximum volume of the commodities that the arc can carry

Type:

float

space

The id of the geographical space where the arc is

Type:

int

resource_usage

The dictionary that shows how many resources (of each resource type) are employed to repair the arc

Type:

dict

extra_com

The dictionary that shows flow_cost corresponding to commodities other than the main commodity

Type:

dict

is_interdep

Whether the arc is a normal arc (carrying commodity within a single layer) or a physical interdependency between nodes from different layers

Type:

bool

in_space(space_id)

This function checks if the arc is in a given space or not

Parameters:

space_id – The id of the space that is checked

Returns:

Returns 1 if the arc is in the space, and 0 otherwise.

Return type:

bool

set_extra_commodity(extra_commodity)

This function initializes the dictionary for the extra commodities

Parameters:

extra_commodity (list) – List of extra commodities

Return type:

None.

set_resource_usage(resource_names)

This function initializes the dictionary for resource usage for all resource types in the analysis

Parameters:

resource_names (list) – List of resource types

Return type:

None.

class indp.infrastructureinterdeparc.InfrastructureInterdepArc(source, dest, source_layer, dest_layer, gamma)

This class models a physical interdependency between nodes from two different layers. This class inherits from InfrastructureArc, where the source attribute corresponds to the dependee node, and dest corresponds to the depender node. The depender node is non-functional if the corresponding dependee node is non-functional.

source_layer

The id of the layer where the dependee node is

Type:

int

dest_layer

The id of the layer where the depender node is

Type:

int

gamma

The strength of the dependency, which is a number between 0 and 1.

Type:

float

class indp.infrastructurenetwork.InfrastructureNetwork(id)

Stores information of the infrastructure network

G

The networkx graph object that stores node, arc, and interdependency information

Type:

networkx.DiGraph

S

List of geographical spaces on which the network lays

Type:

list

id

Id of the network

Type:

int

copy()

This function copies the current InfrastructureNetwork object

Returns:

new_net – Copy of the current infrastructure network object

Return type:

InfrastructureNetwork

gc_size(layer)

This function finds the size of the largest component in a layer of the network

Parameters:

layer (int) – The id of the desired layer

Returns:

Size of the largest component in the layer.

Return type:

int
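Conceptually, gc_size is the size of the largest connected component within a layer. A stdlib sketch on an undirected adjacency-list graph follows; the real class operates on a networkx graph restricted to the layer's nodes.

```python
from collections import deque

def largest_component_size(adj):
    """Size of the largest connected component of an undirected graph
    given as {node: [neighbors, ...]}, found by breadth-first search."""
    seen, best = set(), 0
    for start in adj:
        if start in seen:
            continue
        seen.add(start)
        queue, size = deque([start]), 0
        while queue:
            node = queue.popleft()
            size += 1
            for nbr in adj[node]:
                if nbr not in seen:
                    seen.add(nbr)
                    queue.append(nbr)
        best = max(best, size)
    return best

gc = largest_component_size({1: [2], 2: [1, 3], 3: [2], 4: []})
```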

get_clusters(layer)

This function finds the clusters in a layer of the network

Parameters:

layer (int) – The id of the desired layer

Returns:

List of layer components.

Return type:

list

to_csv(filename='infrastructure_adj.csv')

This function writes the object to a csv file

Parameters:

filename (str) – Name of the file to which the network should be written

Return type:

None.

to_game_file(layers=None)

This function writes the multi-defender security games.

Parameters:

layers (list) – List of layers in the game.

Return type:

None.

update_with_strategy(player_strategy)

This function modifies the functionality of nodes and arcs according to a given strategy

Parameters:

player_strategy (list) – Given strategy, where the first list item shows the functionality of nodes, and the second one is for arcs

Return type:

None.

class indp.infrastructurenode.InfrastructureNode(id, net_id, local_id='')

This class models a node in an infrastructure network

id

Node id

Type:

int

net_id

The id of the layer of the network to which the node belongs

Type:

int

local_id

Local node id

Type:

int

failure_probability

Failure probability of the node

Type:

float

functionality

Functionality state of the node

Type:

bool

repaired

If the node is repaired or not

Type:

bool

reconstruction_cost

Reconstruction cost of the node

Type:

float

oversupply_penalty

Penalty per supply unit of the main commodity that is not used for the node

Type:

float

undersupply_penalty

Penalty per demand unit of the main commodity that is not satisfied for the node

Type:

float

demand

Demand or supply value of the main commodity assigned to the node

Type:

float

space

The id of the geographical space where the node is

Type:

int

resource_usage

The dictionary that shows how many resources (of each resource type) are employed to repair the node

Type:

dict

extra_com

The dictionary that shows demand, oversupply_penalty, and undersupply_penalty corresponding to commodities other than the main commodity

Type:

dict

in_space(space_id)

This function checks if the node is in a given space or not

Parameters:

space_id – The id of the space that is checked

Returns:

Returns 1 if the node is in the space, and 0 otherwise.

Return type:

bool

set_extra_commodity(extra_commodity)

This function initializes the dictionary for the extra commodities

Parameters:

extra_commodity (list) – List of extra commodities

Return type:

None.

set_failure_probability(failure_probability)

This function sets the failure probability of the node

Parameters:

failure_probability (float) – Assigned failure probability of node

Return type:

None.

set_resource_usage(resource_names)

This function initializes the dictionary for resource usage for all resource types in the analysis

Parameters:

resource_names (list) – List of resource types

Return type:

None.

class indp.infrastructurespace.InfrastructureSpace(id, cost)

This class models a geographical space.

id

The id of the space

Type:

int

cost

The cost of preparing the space for a repair action

Type:

float

class indp.infrastructureutil.InfrastructureUtil
static add_from_csv_failure_scenario(G, sample, initial_node, initial_link)

This function reads initial damage data from file in the from_csv format and applies it to the infrastructure network. This format only considers one magnitude value (0), and there can be any number of samples for that magnitude.

Parameters:
  • G (InfrastructureNetwork) – The object containing the network data.

  • sample (int) – Sample number of the initial damage scenario.

Return type:

None.

static load_infrastructure_array_format_extended(power_nodes, power_arcs, water_nodes, water_arcs, interdep, cost_scale=1.0, extra_commodity=None)

This function reads the infrastructure network from file in the extended format

Parameters:
  • power_nodes

  • power_arcs

  • water_nodes

  • water_arcs

  • interdep

  • cost_scale (float) – The factor by which all cost values have to be multiplied. The default is 1.0.

  • extra_commodity (dict) – (Only for the extended format of input data.) List of extra commodities in the analysis. The default is None, which only considers a main commodity.

Returns:

G – The InfrastructureNetwork object containing the network data.

Return type:

InfrastructureNetwork

analyses/joplincge

class joplincge.joplincge.JoplinCGEModel(incore_client)

A computable general equilibrium (CGE) model is based on fundamental economic principles. A CGE model uses multiple data sources to reflect the interactions of households, firms and relevant government entities as they contribute to economic activity. The model is based on (1) utility-maximizing households that supply labor and capital, using the proceeds to pay for goods and services (both locally produced and imported) and taxes; (2) the production sector, with perfectly competitive, profit-maximizing firms using intermediate inputs, capital, land and labor to produce goods and services for both domestic consumption and export; (3) the government sector that collects taxes and uses tax revenues in order to finance the provision of public services; and (4) the rest of the world.

Parameters:

incore_client (IncoreClient) – Service authentication.

get_spec()

Get basic specifications.

Note

The get_spec will be called exactly once per instance (during __init__), so children should not assume that they can do weird dynamic magic during this call. See the example spec at the bottom of this file.

Returns:

A JSON object of basic specifications of the analysis.

Return type:

obj

class joplincge.equationlib.VarContainer

All matrix variables (tables) in the GAMS model are flattened into an array to provide a better interface to the solver.

AllVarList stores all initial values of variables used in the GAMS model in an array. It also has an indexing system for lookups.

namelist

A dictionary with all stored GAMS variables and its information.

nvars

The length of the array, i.e. the size of all matrix variables summed up.

initialVals

Stored initial values of all variables

Initialized to an empty list.
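The flattening and index lookup that VarContainer performs can be sketched in a few lines. The names and row-major layout here are illustrative, not the actual pyincore internals.

```python
class FlatVars:
    """Flatten named (rows x cols) variables into one array and keep
    an index for (name, row, col) -> flat-position lookups."""

    def __init__(self):
        self.namelist = {}      # name -> (offset, rows, cols)
        self.initial_vals = []  # flat array of initial values

    def add(self, name, rows, cols, initial=0.0):
        # Record where this variable's block starts, then reserve space.
        self.namelist[name] = (len(self.initial_vals), rows, cols)
        self.initial_vals.extend([initial] * (rows * cols))

    def get_index(self, name, row, col):
        # Row-major position of (row, col) inside the variable's block.
        offset, _rows, cols = self.namelist[name]
        return offset + row * cols + col

v = FlatVars()
v.add("A", 2, 3)   # occupies flat indices 0..5
v.add("B", 1, 2)   # occupies flat indices 6..7
```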

add(name, rows=None, cols=None)
Parameters:
  • name (str) – Name of the variable in GAMS.

  • rows (obj) – Rows.

  • cols (obj) – Columns.

get(name, x=None)

Returns a Dataframe, Series, or a variable based on the given name and the result array returned from the solver.

Parameters:

name (str) – Name of the variable in GAMS.

Returns:

If x is not given, returns the initial values; if x is set to the solver result array, returns the resulting variable value.

Return type:

obj

get_index(name, row=None, col=None)

Look up the index by providing the variable name and label information.

Parameters:
  • name (str) – Name of the variable in GAMS.

  • row (obj) – A row label of the position you want to look up index for (if it has row labels).

  • col (obj) – A column label of the position you want to look up the index for (if it has column labels).

Returns:

The index of the position in the array.

Return type:

int
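The core job of VarContainer is mapping each (variable, row, column) position in a GAMS table to a single index in one flattened array. The following standalone sketch illustrates that indexing scheme only; it is not the actual joplincge.equationlib implementation, and the variable and label names are made up.

```python
# Minimal sketch of VarContainer-style flat indexing (illustrative only,
# not the actual joplincge.equationlib implementation).
class FlatVars:
    def __init__(self):
        self.namelist = {}   # name -> (offset into flat array, rows, cols)
        self.nvars = 0       # total length of the flattened array

    def add(self, name, rows=None, cols=None):
        self.namelist[name] = (self.nvars, rows, cols)
        n = (len(rows) if rows else 1) * (len(cols) if cols else 1)
        self.nvars += n

    def get_index(self, name, row=None, col=None):
        offset, rows, cols = self.namelist[name]
        r = rows.index(row) if rows else 0
        c = cols.index(col) if cols else 0
        return offset + r * (len(cols) if cols else 1) + c

vars_ = FlatVars()
vars_.add("KS", rows=["GOODS", "TRADE"], cols=["L1", "L2"])  # 2x2 table -> 4 slots
vars_.add("CPI")                                             # scalar -> 1 slot
print(vars_.get_index("KS", "TRADE", "L2"))  # 3
print(vars_.get_index("CPI"))                # 4
```

With this layout, get_label would simply be the inverse mapping from a flat index back to (name, row, col).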

get_info(name: str)

Get the information about a GAMS variable.

Parameters:

name (str) – Name of the variable in GAMS.

Returns:

A dictionary with all information.

Return type:

dict

get_label(index)

Look up variable name and label information by providing the index.

Parameters:

index (int) – The index in the array.

Returns:

Its information including the variable name, row label and column label if applicable.

Return type:

list

in_list(name: str)

Check if a GAMS variable is added to the container.

Parameters:

name (str) – Name of the variable in GAMS.

Returns:

Whether the variable is added.

Return type:

bool

init(name, initial_value)

Flatten the table variable and add to the list. Also set the initial variable values array.

Parameters:
  • name (str) – Name of the variable in GAMS.

  • initial_value (obj) – A pandas DataFrame or pandas Series with initial values.

lo(name, value)

Set the lower bounds (LOs) of a GAMS variable, providing the bounds as a DataFrame, Series, int, or float.

Parameters:
  • name (str) – Name of the variable in GAMS.

  • value (obj) – The lower bound to be set.

set_value(name, values, target)

An internal method for setting the initial values or UPs and LOs for variables.

Parameters:
  • name (str) – Name of the variable in GAMS.

  • values (obj) – a pandas DataFrame, pandas Series, int or float with initial values.

  • target (obj) – target array to be set.

up(name, value)

Set the upper bounds (UPs) of a GAMS variable, providing the bounds as a DataFrame, Series, int, or float.

Parameters:
  • name (str) – GAMS variable name.

  • value (obj) – The upper bound to be set.

write(filename)

Write (append) the variables to a file, in the format of setting ipopt model variables.

Parameters:

filename (str) – The output filename.

class joplincge.equationlib.Variable(gams_vars, name, row=None, col=None)

A single variable, initialized by giving the GAMS variable and its label.

Initialize it with a variable container, the GAMS name, and the labels.

Parameters:
  • gams_vars (obj) – The variable container that already added the GAMS variable.

  • name (str) – GAMS variable name.

  • row (obj) – GAMS row label, if any.

  • col (obj) – GAMS column label, if any.

__str__()

Returns the variable in the format of “model.x#” if gets printed, with # being the index in the array in the container.

class joplincge.equationlib.ExprItem(v, const=1)

You can construct it with a variable, a constant or a deepcopy of another ExprItem.

class joplincge.equationlib.Expr(item)
class joplincge.equationlib.ExprM(vars, name=None, rows=None, cols=None, m=None, em=None)

Three ways to create an ExprMatrix:

1. Give it the variable name and selected rows and cols (which may be empty); the constructor creates an expression matrix from the variable matrix.

2. Give it a pandas Series or DataFrame; it creates the expression matrix with the contents of the Series or DataFrame as constants.

3. Give it an ExprMatrix; it returns a deep copy of it.

__invert__()

Return the transpose of an expression matrix.

__xor__(rhs)

Create a 2D list out of two single lists.

loc(rows=None, cols=None)

Get a subset of the matrix by labels

analyses/joplinempiricalrestoration

class joplinempiricalrestoration.joplinempiricalrestoration.JoplinEmpiricalRestoration(incore_client)

Joplin Empirical Restoration Model generates a random realization for the restoration time of a building damaged in a tornado event to be restored to a certain functionality level. Functionality levels in this model are defined according to Koliou and van de Lindt (2020) and range from Functionality Level 4 (FL4, the lowest functionality) to Functionality Level 0 (FL0, full functionality).

Parameters:

incore_client (IncoreClient) – Service authentication.

get_restoration_days(seed_i, building_func)

Calculates restoration days.

Parameters:
  • seed_i (int) – Seed for random number generator to ensure replication if run as part of a stochastic analysis, for example in connection with housing unit allocation analysis.

  • building_func (pd.DataFrame) – Building damage dataset with guid, limit states, hazard exposure and a target level column.

Returns:

Initial functionality level based on damage state np.array: Building restoration days.

Return type:

np.array

get_spec()

Get specifications of the Joplin empirical restoration analysis.

Returns:

A JSON object of specifications of the Joplin empirical restoration analysis.

Return type:

obj

run()

Executes Joplin empirical restoration model analysis.

Returns:

True if successful, False otherwise.

Return type:

bool

class joplinempiricalrestoration.joplinempirrestor_util.JoplinEmpirRestorUtil

Utility methods for the Joplin restoration analysis.

analyses/meandamage

class meandamage.meandamage.MeanDamage(incore_client)
Parameters:

incore_client (IncoreClient) – Service authentication.

get_spec()

Get specifications of the mean damage calculation.

Returns:

A JSON object of specifications of the mean damage calculation.

Return type:

obj

mean_damage(dmg, dmg_ratio_tbl, damage_interval_keys, is_bridge)

Calculates mean damage based on damage probabilities and damage ratios.

Parameters:
  • dmg (obj) – Damage analysis output for a single entity in the built environment.

  • dmg_ratio_tbl (list) – Damage ratio table.

  • damage_interval_keys (list) – Damage interval keys.

  • is_bridge (bool) – A boolean to indicate if the inventory type is bridge. Bridges have their own way of calculating mean damage.

Returns:

A dictionary with mean damage, deviation, and other data/metadata.

Return type:

OrderedDict
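Conceptually, mean damage is the probability-weighted sum of the damage ratios over the damage intervals, and the deviation is the spread of that discrete distribution. A hedged standalone sketch (the interval keys and ratio values below are made up, not from a real damage ratio table):

```python
import math

# Illustrative probability-weighted mean damage calculation; interval keys
# and ratio values are placeholders, not real dmg-ratio-table entries.
def mean_damage(dmg_probs, dmg_ratios):
    """dmg_probs: {interval: probability}; dmg_ratios: {interval: mean damage ratio}."""
    mean = sum(dmg_probs[k] * dmg_ratios[k] for k in dmg_probs)
    # Standard deviation, treating the intervals as a discrete distribution.
    variance = sum(dmg_probs[k] * (dmg_ratios[k] - mean) ** 2 for k in dmg_probs)
    return mean, math.sqrt(variance)

probs = {"insignific": 0.5, "moderate": 0.3, "heavy": 0.15, "complete": 0.05}
ratios = {"insignific": 0.0, "moderate": 0.155, "heavy": 0.55, "complete": 0.90}
mean, std = mean_damage(probs, ratios)
print(round(mean, 4))  # 0.174
```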

mean_damage_bulk_input(damage, dmg_ratio_tbl)

Run analysis for mean damage calculation

Parameters:
  • damage (obj) – output of building/bridge/waterfacility/epn damage that has damage interval

  • dmg_ratio_tbl (list) – damage ratio table

Returns:

A list of ordered dictionaries with mean damage, deviation, and other data/metadata.

Return type:

list

mean_damage_concurrent_future(function_name, parallelism, *args)

Utilizes concurrent.future module.

Parameters:
  • function_name (function) – The function to be parallelized.

  • parallelism (int) – Number of workers in parallelization.

  • *args – All the arguments in order to pass into parameter function_name.

Returns:

A list of ordered dictionaries with building damage values and other data/metadata.

Return type:

list

run()

Executes mean damage calculation.

analyses/montecarlofailureprobability

class montecarlofailureprobability.montecarlofailureprobability.MonteCarloFailureProbability(incore_client)
Parameters:

incore_client (IncoreClient) – Service authentication.

calc_probability_failure_value(ds_sample, failure_state_keys)

Lisa Wang’s approach to calculate a single value of failure probability.

Parameters:
  • ds_sample (dict) – A dictionary of damage states.

  • failure_state_keys (list) – Damage state keys that are considered failures.

Returns:

Failure state of each sample, 0 (failed) or 1 (not failed). float: Failure probability (0 - 1).

Return type:

float
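The failure probability here is simply the fraction of Monte Carlo samples whose sampled damage state falls in the failure set. A minimal sketch of that counting step (damage-state keys are placeholders):

```python
# Sketch of the failure-counting step: flag each sampled damage state as
# 0 (failed) or 1 (not failed) per the docs' convention, then average.
def failure_probability(ds_samples, failure_state_keys):
    failure_set = set(failure_state_keys)
    flags = [0 if ds in failure_set else 1 for ds in ds_samples]
    prob = flags.count(0) / len(flags)
    return flags, prob

samples = ["DS_0", "DS_3", "DS_2", "DS_3", "DS_1", "DS_3", "DS_0", "DS_2"]
flags, prob = failure_probability(samples, ["DS_2", "DS_3"])
print(prob)  # 0.625
```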

get_spec()

Get specifications of the monte carlo failure probability analysis.

Returns:

A JSON object of specifications of the monte carlo failure probability analysis.

Return type:

obj

monte_carlo_failure_probability(dmg, damage_interval_keys, failure_state_keys, num_samples, seed)

Calculates building damage results for a single building.

Parameters:
  • dmg (obj) – Damage analysis output for a single entry.

  • damage_interval_keys (list) – A list of the name of the damage intervals.

  • failure_state_keys (list) – A list of the name of the damage state that is considered as failed.

  • num_samples (int) – Number of samples for mc simulation.

  • seed (int) – Random number generator seed for reproducibility.

Returns:

A dictionary with id/guid and failure state for N samples. dict: A dictionary with failure probability and other data/metadata. dict: A dictionary with id/guid and damage states for N samples.

Return type:

dict

monte_carlo_failure_probability_bulk_input(damage, seed_list)

Run analysis for monte carlo failure probability calculation

Parameters:
  • damage (obj) – An output of building/bridge/waterfacility/epn damage that has damage interval.

  • seed_list (list) – Random number generator seed per building for reproducibility.

Returns:

A list of dictionaries with id/guid and failure state for N samples. fp_results (list): A list of dictionaries with failure probability and other data/metadata.

Return type:

fs_results (list)

monte_carlo_failure_probability_concurrent_future(function_name, parallelism, *args)

Utilizes concurrent.future module.

Parameters:
  • function_name (function) – The function to be parallelized.

  • parallelism (int) – Number of workers in parallelization.

  • *args – All the arguments in order to pass into parameter function_name.

Returns:

A list of dictionaries with id/guid and failure state for N samples. list: A list of dictionaries with failure probability and other data/metadata.

Return type:

list

run()

Executes mc failure probability analysis.

sample_damage_interval(dmg, damage_interval_keys, num_samples, seed)

Dylan Sanderson's code to calculate the Monte Carlo simulations of damage states.

Parameters:
  • dmg (dict) – Damage results that contains dmg interval values.

  • damage_interval_keys (list) – Keys of the damage states.

  • num_samples (int) – Number of simulations.

  • seed (int) – Random number generator seed for reproducibility.

Returns:

A dictionary of damage states.

Return type:

dict
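Each Monte Carlo sample draws a uniform random number and picks the damage state whose cumulative probability interval contains it. A seeded, standalone sketch of that sampling loop (the interval keys are placeholders, not the service's actual keys):

```python
import random

# Illustrative sampling of damage states from interval probabilities.
def sample_damage_states(dmg_probs, num_samples, seed):
    rng = random.Random(seed)
    keys = list(dmg_probs)
    samples = []
    for _ in range(num_samples):
        u, cum = rng.random(), 0.0
        chosen = keys[-1]  # fall through to the last state on rounding
        for k in keys:
            cum += dmg_probs[k]
            if u < cum:
                chosen = k
                break
        samples.append(chosen)
    return samples

probs = {"DS_0": 0.6, "DS_1": 0.25, "DS_2": 0.1, "DS_3": 0.05}
samples = sample_damage_states(probs, 10000, seed=42)
print(samples[:5])
```

Fixing the seed makes the draw reproducible, which is what allows the per-building seed lists used elsewhere in this analysis.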

analyses/multiobjectiveretrofitoptimization

class multiobjectiveretrofitoptimization.multiobjectiveretrofitoptimization.MultiObjectiveRetrofitOptimization(incore_client)

This analysis computes a series of linear programming models for single- and multi-objective optimization related to the effect of extreme weather on a community in terms of three objective functions. The three objectives used in this program are to minimize economic loss, minimize population dislocation, and maximize building functionality. The computation proceeds by iteratively solving constrained linear models using epsilon steps.

The output of the computation is a collection of optimal resource allocations.

Contributors
Science: Charles Nicholson, Yunjie Wen
Implementation: Dale Cochran, Tarun Adluri, Jorge Duarte, Santiago Núñez-Corrales, Diego Calderon, and NCSA IN-CORE Dev Team

Related publications

Parameters:

incore_client (IncoreClient) – Service authentication.

configure_model(budget_available, scaling_factor, building_related_data, strategy_costs)

Configure the base model to perform the multiobjective optimization.

Parameters:
  • budget_available (float) – available budget

  • scaling_factor (float) – value to scale monetary input data

  • building_related_data (DataFrame) – table containing building functionality data

  • strategy_costs (DataFrame) – table containing retrofit strategy costs data

Returns:

a base, parameterized cost/functionality model

Return type:

ConcreteModel

configure_model_objectives(model)

Configure the model by adding objectives

Parameters:

model (ConcreteModel) – a base cost/functionality model

Returns:

a model extended with objective functions

Return type:

ConcreteModel

get_spec()

Get specifications of the multiobjective retrofit optimization model.

Returns:

A JSON object of specifications of the multiobjective retrofit optimization model.

Return type:

obj

multiobjective_retrofit_optimization_model(model_solver, num_epsilon_steps, budget_available, scaling_factor, inactive_submodels, building_related_data, strategy_costs)

Performs the computation of the model.

Parameters:
  • model_solver (str) – model solver to use for analysis

  • num_epsilon_steps (int) – number of epsilon values for the multistep optimization algorithm

  • budget_available (float) – budget constraint of the optimization analysis

  • scaling_factor (float) – scaling factor for Q and Sc matrices

  • inactive_submodels (list) – submodels to avoid during the computation

  • building_related_data (pd.DataFrame) – building repairs after a disaster event

  • strategy_costs (pd.DataFrame) – strategy cost data per building

run()

Execute the multiobjective retrofit optimization analysis using parameters and input data.

analyses/ncifunctionality

class ncifunctionality.ncifunctionality.NciFunctionality(incore_client)

This analysis computes the output of the Leontief equation for functional dependencies between two interdependent networks having functionality information per node. These dependencies capture cascading dependencies on infrastructure functionality, expressed in terms of discrete points.

The output of the computation consists of two datasets, one per each labeled network, with new cascading functionalities accompanying the original discrete ones.

Contributors
Science: Milad Roohi, John van de Lindt
Implementation: Milad Roohi, Santiago Núñez-Corrales and NCSA IN-CORE Dev Team

Related publications

Roohi M, van de Lindt JW, Rosenheim N, Hu Y, Cutler H. (2021) Implication of building inventory accuracy on physical and socio-economic resilience metrics for informed decision-making in natural hazards. Structure and Infrastructure Engineering. 2020 Nov 20;17(4):534-54.

Milad Roohi, Jiate Li, John van de Lindt. (2022) Seismic Functionality Analysis of Interdependent Buildings and Lifeline Systems 12th National Conference on Earthquake Engineering (12NCEE), Salt Lake City, UT (June 27-July 1, 2022).

Parameters:

incore_client (IncoreClient) – Service authentication.

get_spec()

Get specifications of the network cascading interdependency functionality analysis.

Returns:

A JSON object of specifications of the NCI functionality analysis.

Return type:

obj

nci_functionality(discretized_days, epf_network_nodes, epf_network_links, wds_network_nodes, wds_network_links, epf_wds_intdp_table, wds_epf_intdp_table, epf_subst_failure_results, epf_inventory_rest_map, epf_time_results, wds_dmg_results, wds_inventory_rest_map, wds_time_results, epf_damage)

Compute EPF and WDS cascading functionality outcomes

Parameters:
  • discretized_days (List[int]) – a list of discretized days

  • epf_network_nodes (pd.DataFrame) – network nodes for EPF network

  • epf_network_links (pd.DataFrame) – network links for EPF network

  • wds_network_nodes (pd.DataFrame) – network nodes for WDS network

  • wds_network_links (pd.DataFrame) – network links for WDS network

  • epf_wds_intdp_table (pd.DataFrame) – mapping from EPF to WDS networks

  • wds_epf_intdp_table (pd.DataFrame) – mapping from WDS to EPF networks

  • epf_subst_failure_results (pd.DataFrame) – substation failure results for EPF network

  • epf_inventory_rest_map (pd.DataFrame) – inventory restoration map for EPF network

  • epf_time_results (pd.DataFrame) – time results for EPF network

  • wds_dmg_results (pd.DataFrame) – damage results for WDS network

  • wds_inventory_rest_map (pd.DataFrame) – inventory restoration map for WDS network

  • wds_time_results (pd.DataFrame) – time results for WDS network

  • epf_damage (pd.DataFrame) – limit state probabilities and damage states for each guid

Returns:

results for EPF and WDS networks

Return type:

(pd.DataFrame, pd.DataFrame)

static solve_leontief_equation(graph, functionality_nodes, discretized_days)

Computes the solution to the Leontief equation for network interdependency, given an integrated network and per-node restoration results.

Parameters:
  • graph (networkx object) – graph containing the integrated EPN-WDS network

  • functionality_nodes (pd.DataFrame) – dataframe containing discretized EPF/WDS restoration results per node

  • discretized_days (list) – days used for discretization of restoration analyses

Returns:

pd.DataFrame
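The Leontief equation x = d + A·x, i.e. x = (I - A)⁻¹·d, can be solved by fixed-point iteration when the dependency matrix is well behaved. A dependency-free sketch on a made-up two-node example (the matrix values and their interpretation are illustrative, not this analysis' actual formulation):

```python
# Illustrative fixed-point solution of the Leontief equation x = d + A x,
# i.e. x = (I - A)^(-1) d, for a tiny two-node interdependency.
def solve_leontief(A, d, iters=200):
    x = list(d)
    for _ in range(iters):
        x = [d[i] + sum(A[i][j] * x[j] for j in range(len(d)))
             for i in range(len(d))]
    return x

# Node 0 (say, a water facility) depends 40% on node 1 (say, a substation).
A = [[0.0, 0.4],
     [0.0, 0.0]]
d = [0.5, 0.9]  # standalone functionality values (made up)
x = solve_leontief(A, d)
print([round(v, 4) for v in x])  # [0.86, 0.9]
```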

Update network links with functionality attributes

Parameters:

wds_network_links (pd.DataFrame) – WDS network links

Returns:

pd.DataFrame

analyses/nonstructbuildingdamage

class nonstructbuildingdamage.nonstructbuildingdamage.NonStructBuildingDamage(incore_client)

Computes non-structural building damage for an earthquake hazard.

Parameters:

incore_client (IncoreClient) – Service authentication.

building_damage_analysis_bulk_input(buildings, hazard, hazard_type, hazard_dataset_id)

Run analysis for multiple buildings.

Parameters:
  • buildings (list) – Multiple buildings from input inventory set.

  • hazard (obj) – Hazard object.

  • hazard_type (str) – Hazard type.

  • hazard_dataset_id (str) – Hazard dataset id.

Returns:

An ordered dictionary with building damage values. dict: An ordered dictionary with building data/metadata.

Return type:

dict

building_damage_concurrent_future(function_name, num_workers, *args)

Utilizes concurrent.future module.

Parameters:
  • function_name (function) – The function to be parallelized.

  • num_workers (int) – Maximum number workers in parallelization.

  • *args – All the arguments in order to pass into parameter function_name.

Returns:

An ordered dictionary with building damage values. dict: An ordered dictionary with building data/metadata.

Return type:

dict

get_spec()

Get specifications of the building damage analysis.

Returns:

A JSON object of specifications of the building damage analysis.

Return type:

obj

run()

Executes building damage analysis.

class nonstructbuildingdamage.nonstructbuildingutil.NonStructBuildingUtil

Utility methods for the non-structural building damage analysis.

static adjust_damage_for_liquefaction(limit_state_probabilities, ground_failure_probabilities)

Adjusts building damage probability based on liquefaction ground failure probability. The ground failure input consists of three values; the first two are identical and the third may differ. The first two are applied to all damage states except the highest.

Parameters:
  • limit_state_probabilities (obj) – Limit state probabilities.

  • ground_failure_probabilities (list) – Ground failure probabilities.

Returns:

Adjusted limit state probability.

Return type:

OrderedDict
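One common way to combine a limit-state probability with a ground-failure probability is the union of independent events, P_adj = P_ls + P_gf - P_ls·P_gf. The sketch below applies that formula together with the mapping rule described above (first two ground-failure values for all states except the highest); both the combination formula and the mapping are assumptions for illustration, not a confirmed transcription of the pyincore code.

```python
from collections import OrderedDict

# Hedged sketch: combine each limit-state probability with a ground-failure
# probability as the union of independent events. Which gf value maps to
# which damage state is an assumption.
def adjust_for_liquefaction(limit_states, ground_failure_probs):
    keys = list(limit_states)
    adjusted = OrderedDict()
    for k in keys:
        # First two gf values (identical) for all states except the highest.
        gf = ground_failure_probs[1] if k != keys[-1] else ground_failure_probs[2]
        p = limit_states[k]
        adjusted[k] = p + gf - p * gf
    return adjusted

ls = OrderedDict([("LS_0", 0.8), ("LS_1", 0.4), ("LS_2", 0.1)])
gf = [0.2, 0.2, 0.05]
adj = adjust_for_liquefaction(ls, gf)
print({k: round(v, 4) for k, v in adj.items()})  # {'LS_0': 0.84, 'LS_1': 0.52, 'LS_2': 0.145}
```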

static determine_haz_exposure(hazard_exposure_as, hazard_exposure_ds)

Determine the hazard exposure of the building based on the AS and DS hazard exposures.

Parameters:
  • hazard_exposure_as – AS hazard exposure.

  • hazard_exposure_ds – DS hazard exposure.

Returns:

analyses/pipelinedamage

class pipelinedamage.pipelinedamage.PipelineDamage(incore_client)

Computes pipeline damage for an earthquake or a tsunami.

Parameters:

incore_client – Service client with authentication info.

get_spec()

Get specifications of the pipeline damage analysis.

Returns:

A JSON object of specifications of the pipeline damage analysis.

Return type:

obj

pipeline_damage_analysis_bulk_input(pipelines, hazard, hazard_type, hazard_dataset_id)

Run pipeline damage analysis for multiple pipelines.

Parameters:
  • pipelines (list) – Multiple pipelines from pipeline dataset.

  • hazard (obj) – Hazard object.

  • hazard_type (str) – Hazard type (earthquake or tsunami).

  • hazard_dataset_id (str) – An id of the hazard exposure.

Returns:

An ordered dictionary with pipeline damage values. dict: An ordered dictionary with other pipeline data/metadata.

Return type:

dict

pipeline_damage_concurrent_future(function_name, num_workers, *args)

Utilizes concurrent.future module.

Parameters:
  • function_name (function) – The function to be parallelized.

  • num_workers (int) – Maximum number workers in parallelization.

  • *args – All the arguments in order to pass into parameter function_name.

Returns:

An ordered dictionary with pipeline damage values. dict: An ordered dictionary with other pipeline data/metadata.

Return type:

dict

run()

Execute pipeline damage analysis

analyses/pipelinedamagerepairrate

class pipelinedamagerepairrate.pipelinedamagerepairrate.PipelineDamageRepairRate(incore_client)

Computes pipeline damage for a hazard.

Parameters:

incore_client – Service client with authentication info

get_spec()

Get specifications of the pipeline damage analysis.

Returns:

A JSON object of specifications of the pipeline damage analysis.

Return type:

obj

pipeline_damage_analysis_bulk_input(pipelines, hazard, hazard_type, hazard_dataset_id)

Run pipeline damage analysis for multiple pipelines.

Parameters:
  • pipelines (list) – Multiple pipelines from the pipeline dataset.

  • hazard (obj) – Hazard object

  • hazard_type (str) – Hazard type

  • hazard_dataset_id (str) – An id of the hazard exposure.

Returns:

A list of ordered dictionaries with pipeline damage values and other data/metadata. damage_results (list): A list of ordered dictionaries with pipeline damage metadata.

Return type:

ds_results (list)

pipeline_damage_concurrent_future(function_name, num_workers, *args)

Utilizes concurrent.future module.

Parameters:
  • function_name (function) – The function to be parallelized.

  • num_workers (int) – Maximum number workers in parallelization.

  • *args – All the arguments in order to pass into parameter function_name.

Returns:

A list of ordered dictionaries with building damage values and other data/metadata.

Return type:

list

run()

Execute pipeline damage analysis

class pipelinedamagerepairrate.pipelineutil.PipelineUtil

Utility methods for pipeline analysis

static convert_result_unit(result_unit: str, result: float)

Convert values between different units.

Parameters:
  • result_unit (str) – Resulting unit.

  • result (float) – Input value.

Returns:

Converted value.

Return type:

float
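Pipeline repair rates are commonly expressed per kilometer or per 1000 feet of pipe, so converting between result units is a single multiplicative factor (1000 ft = 304.8 m = 0.3048 km). A generic sketch; the unit strings and the canonical unit below are assumptions, not necessarily those used by PipelineUtil:

```python
# Illustrative unit conversion for pipeline repair rates; unit strings and
# the choice of repairs/km as the canonical unit are assumptions.
KM_PER_1000FT = 0.3048  # 1000 ft = 304.8 m

def convert_result_unit(result_unit, result):
    if result_unit == "repairs/km":
        return result                  # already in the canonical unit
    if result_unit == "repairs/1000ft":
        return result / KM_PER_1000FT  # repairs per 1000 ft -> repairs per km
    raise ValueError(f"unknown unit: {result_unit}")

print(round(convert_result_unit("repairs/1000ft", 0.3048), 4))  # 1.0
```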

static get_pipe_diameter(pipeline)

Get pipe diameter.

Parameters:

pipeline (obj) – A JSON-like description of pipeline properties.

Returns:

Pipe diameter.

Return type:

float

static get_pipe_length(pipeline)

Get pipe length.

Parameters:

pipeline (obj) – A JSON-like description of pipeline properties.

Returns:

Pipe length.

Return type:

float

analyses/pipelinefunctionality

class pipelinefunctionality.pipelinefunctionality.PipelineFunctionality(incore_client)

This analysis computes pipeline functionality using repair rate calculations from pipeline damage analysis (earthquake). The computation operates by computing Monte Carlo samples derived from Poisson sample deviates from the damage analysis as input to Bernoulli experiments, later used to determine average functionality. The output of the computation is the average pipeline functionality.

Contributors
Science: Neetesh Sharma, Armin Tabandeh, Paolo Gardoni
Implementation: Neetesh Sharma, Chen Wang, and NCSA IN-CORE Dev Team
Related publications

Sharma, N., Tabandeh, A., & Gardoni, P. (2019). Regional resilience analysis: A multi-scale approach to model the recovery of interdependent infrastructure. In P. Gardoni (Ed.), Handbook of sustainable and resilient infrastructure (pp. 521–544). New York, NY: Routledge.

Sharma, N., Tabandeh, A., & Gardoni, P. (2020). Regional resilience analysis: A multi-scale approach to optimize the resilience of interdependent infrastructure. Computer‐Aided Civil and Infrastructure Engineering, 35(12), 1315-1330.

Sharma, N., & Gardoni, P. (2022). Mathematical modeling of interdependent infrastructure: An object-oriented approach for generalized network-system analysis. Reliability Engineering & System Safety, 217, 108042.

Parameters:

incore_client (IncoreClient) – Service authentication.

get_spec()

Get specifications of the pipeline functionality analysis.

Returns:

A JSON object of specifications of the pipeline functionality analysis.

Return type:

obj

pipeline_functionality(pipeline_dmg_df, num_samples)

Run pipeline functionality analysis for multiple pipelines.

Parameters:
  • pipeline_dmg_df (dataframe) – dataframe of pipeline damage values and other data/metadata

  • num_samples (int) – number of samples

Returns:

A list of dictionaries with id/guid and failure state for N samples. fp_results (list): A list of dictionaries with failure probability and other data/metadata.

Return type:

fs_results (list)
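The scheme described in the class summary (Poisson deviates of break counts feeding Bernoulli functionality experiments) can be sketched standalone; the rate and length values below are illustrative, not real pipeline data:

```python
import math
import random

# Sketch of the Monte Carlo scheme described above: draw a Poisson deviate
# for the number of breaks from the repair rate, treat the pipeline as
# functional when no break occurs (a Bernoulli experiment), then average.
def pipeline_functionality(repair_rate_per_km, length_km, num_samples, seed=0):
    rng = random.Random(seed)
    lam = repair_rate_per_km * length_km  # expected number of repairs
    functional = 0
    for _ in range(num_samples):
        # Poisson sampling by inversion (adequate for small lam).
        u, k = rng.random(), 0
        p = cum = math.exp(-lam)
        while u > cum:
            k += 1
            p *= lam / k
            cum += p
        if k == 0:
            functional += 1
    return functional / num_samples

avg = pipeline_functionality(0.05, 2.0, 20000)
print(round(avg, 3))  # close to exp(-0.1) ≈ 0.905
```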

analyses/pipelinerepaircost

class pipelinerepaircost.pipelinerepaircost.PipelineRepairCost(incore_client)

Computes pipeline repair cost.

Parameters:

incore_client (IncoreClient) – Service authentication.

get_spec()

Get specifications of the pipeline repair cost analysis.

Returns:

A JSON object of specifications of the pipeline repair cost analysis.

Return type:

obj

pipeline_repair_cost_bulk_input(pipelines)

Run analysis for multiple pipelines.

Parameters:

pipelines (list) – Multiple pipelines from input inventory set.

Returns:

A list of ordered dictionaries with pipeline repair cost values and other data/metadata.

Return type:

list

pipeline_repair_cost_concurrent_future(function_name, num_workers, *args)

Utilizes concurrent.future module.

Parameters:
  • function_name (function) – The function to be parallelized.

  • num_workers (int) – Maximum number workers in parallelization.

  • *args – All the arguments in order to pass into parameter function_name.

Returns:

A list of ordered dictionaries with pipeline repair cost values and other data/metadata.

Return type:

list

run()

Executes pipeline repair cost analysis.

analyses/pipelinerestoration

class pipelinerestoration.pipelinerestoration.PipelineRestoration(incore_client)
Parameters:

incore_client (IncoreClient) – Service authentication.

get_spec()

Get specifications of the Pipeline Restoration analysis.

Returns:

A JSON object of specifications of the pipeline restoration analysis.

Return type:

obj

pipeline_restoration_bulk_input(damage)

Run analysis for pipeline restoration calculation

Parameters:

damage (obj) – An output of pipeline damage with repair rate

Returns:

A list of dictionaries with restoration times and inventory details.

Return type:

restoration_results (list)

pipeline_restoration_concurrent_future(function_name, parallelism, *args)

Utilizes concurrent.future module.

Parameters:
  • function_name (function) – The function to be parallelized.

  • parallelism (int) – Number of workers in parallelization.

  • *args – All the arguments in order to pass into parameter function_name.

Returns:

A list of dictionary with restoration details

Return type:

list

static restoration_time(dmg, num_available_workers, restoration_set)

Calculates restoration time for a single pipeline.

Parameters:
  • dmg (obj) – Pipeline damage analysis output for a single entry.

  • num_available_workers (int) – Number of available workers working on the repairs.

  • restoration_set (obj) – Restoration curve(s) to be used.

Returns:

A dictionary with id/guid and restoration time, along with some inventory metadata

Return type:

dict

run()

Executes pipeline restoration analysis.

analyses/populationdislocation

class populationdislocation.populationdislocation.PopulationDislocation(incore_client)

Population Dislocation Analysis computes dislocation for each residential structure based on the direct economic damage. The dislocation is calculated from four probabilities of dislocation based on a random normal distribution of the four damage factors presented by Bai, Hueste, Gardoni 2009.

These four damage factors correspond to value loss. The sum of the four probabilities multiplied by the four probabilities of damage states was used as the probability for dislocation.

This is different from Lin 2008 http://hrrc.arch.tamu.edu/publications/research%20reports/08-05R%20Dislocation%20Algorithm%203.pdf which calculates a value loss which is the sum of the four damage factors times the four probabilities of damage. The two approaches produce different results.

Parameters:

incore_client (IncoreClient) – Service authentication.

get_dislocation(seed_i: int, inventory: DataFrame, value_loss: DataFrame)

Calculates dislocation probability.

Probability of dislocation, a binary variable based on the logistic probability of dislocation. A random number between 0 and 1 was assigned to each household. If the random number was less than the probability of dislocation, then the household was determined to dislocate. This follows the logic that households with a greater chance of dislocation were more likely to have a random number less than the predicted probability.

Parameters:
  • seed_i (int) – Seed for random number generator to ensure replication if run as part of a stochastic analysis, for example in connection with housing unit allocation analysis.

  • inventory (pd.DataFrame) – Merged building, housing unit allocation and block group inventories.

  • value_loss (pd.DataFrame) – Table used for value loss estimates, beta distribution.

Returns:

An inventory with probabilities of dislocation in a separate column

Return type:

pd.DataFrame
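The dislocation draw described above reduces to comparing a seeded uniform random number against each household's dislocation probability. A minimal sketch (the probability values are made-up inputs):

```python
import random

# Sketch of the dislocation draw: a household dislocates when a uniform
# random number falls below its dislocation probability.
def draw_dislocation(prob_dislocation, seed_i):
    rng = random.Random(seed_i)
    return [rng.random() < p for p in prob_dislocation]

probs = [0.9, 0.1, 0.5, 0.0]
flags = draw_dislocation(probs, seed_i=1111)
print(flags)  # the p=0.0 household can never dislocate
```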

get_spec()

Get basic specifications.

Note

The get_spec will be called exactly once per instance (during __init__), so children should not assume that they can do weird dynamic magic during this call. See the example spec at the bottom of this file.

Returns:

A JSON object of basic specifications of the analysis.

Return type:

obj

run()

Executes the Population dislocation analysis.

Returns:

True if successful, False otherwise.

Return type:

bool

class populationdislocation.populationdislocationutil.PopulationDislocationUtil
static compare_columns(table, col1, col2, drop)

Compare two columns. If they are not equal, create a True/False column; if they are equal, rename one of them with the base name and drop the other.

Parameters:
  • table (pd.DataFrame) – Data Frame table

  • col1 (str) – name of column 1

  • col2 (str) – name of column 2

  • drop (bool) – rename and drop column

Returns:

Table with True/False column

Return type:

pd.DataFrame

static compare_merges(table1_cols, table2_cols, table_merged)

Compare two lists of columns and run compare_columns on the columns present in both lists. It assumes that the merge suffixes are _x and _y.

Parameters:
  • table1_cols (list) – columns in table 1

  • table2_cols (list) – columns in table 2

  • table_merged (pd.DataFrame) – merged table

Returns:

Merged table.

Return type:

pd.DataFrame

static get_disl_probability(value_loss: array, d_sf: array, percent_black_bg: array, percent_hisp_bg: array)

Calculate the probability of dislocation for the household and population. The dislocation damage factor is based on the current IN-COREv1 algorithm and the Bai et al. (2009) damage factors.

The following variables are needed to predict dislocation using the logistic model; see the detailed explanation at https://opensource.ncsa.illinois.edu/confluence/display/INCORE1/Household+and+Population+Dislocation?preview=%2F66224473%2F68289561%2FAlgorithm+3+Logistic.pdf

Parameters:
  • value_loss (np.array) – Value loss.

  • d_sf (np.array) – ‘Dummy’ parameter.

  • percent_black_bg (np.array) – Block group data, percentage of black minority.

  • percent_hisp_bg (np.array) – Block group data, percentage of hispanic minority.

Returns:

Dislocation probability for the household and population.

Return type:

np.array
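The logistic model above maps a linear combination of the predictors through a sigmoid. A hedged sketch; the coefficients below are illustrative placeholders, not the calibrated IN-CORE or Bai et al. (2009) values:

```python
import math

# Hedged sketch of a logistic dislocation model; coefficients are
# placeholders, NOT the calibrated values used by IN-CORE.
def disl_probability(value_loss, d_sf, pct_black_bg, pct_hisp_bg,
                     coefs=(-0.42, 1.0, 0.3, 0.02, 0.01)):
    b0, b1, b2, b3, b4 = coefs
    z = b0 + b1 * value_loss + b2 * d_sf + b3 * pct_black_bg + b4 * pct_hisp_bg
    return 1.0 / (1.0 + math.exp(-z))  # logistic (sigmoid) link

p = disl_probability(value_loss=0.4, d_sf=1, pct_black_bg=10.0, pct_hisp_bg=5.0)
print(round(p, 4))
```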

static get_random_loss(seed_i: int, df: DataFrame, damage_state: str, size: int)

Calculates value loss for each structure based on a random beta distribution. Value loss based on damage state is an input to the population dislocation model.

Parameters:
  • seed_i (int) – Seed for random normal to ensure replication if run as part of a stochastic analysis, for example in connection with housing unit allocation analysis.

  • df (pd.DataFrame) – Data frame that includes the alpha, beta, lower bound, and upper bound for each required damage state

  • damage_state (str) – Damage state to calculate value loss for.

  • size (int) – Size of array to be generated.

Returns:

random distribution of value loss for each structure

Return type:

np.array
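The seeded beta sampling can be sketched with NumPy. The parameter names below are illustrative stand-ins for the per-damage-state alpha, beta, and bounds that the real method reads from `df`.

```python
import numpy as np

def random_loss(seed_i, alpha, beta, lower, upper, size):
    # Sketch: beta-distributed value loss rescaled to [lower, upper].
    # Seeding the generator makes the draw replicable across runs.
    rng = np.random.default_rng(seed_i)
    return lower + (upper - lower) * rng.beta(alpha, beta, size=size)

loss = random_loss(1111, alpha=2.0, beta=5.0, lower=0.1, upper=0.5, size=1000)
```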

static merge_damage_housing_block(building_dmg: DataFrame, hua_inventory: DataFrame, block_data: DataFrame)

Load CSV files to pandas Dataframes, merge them and drop unused columns.

Parameters:
  • building_dmg (pd.DataFrame) – A building damage file in csv format.

  • hua_inventory (pd.DataFrame) – A housing unit allocation inventory file in csv format.

  • block_data (pd.DataFrame) – A block data file in csv format.

Returns:

A merged table of all three inputs.

Return type:

pd.DataFrame
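A minimal sketch of the three-way merge, using hypothetical key names ("guid", "huid"); the real inputs merge on the building and housing-unit identifiers present in the CSV files.

```python
import pandas as pd

# Toy stand-ins for the three input tables.
building_dmg = pd.DataFrame({"guid": ["b1", "b2"], "ds": [0.2, 0.7]})
hua_inventory = pd.DataFrame({"guid": ["b1", "b2"], "huid": ["h1", "h2"]})
block_data = pd.DataFrame({"huid": ["h1", "h2"], "blockid": ["010", "020"]})

# Chain the merges: damage -> housing units -> block data.
merged = (building_dmg
          .merge(hua_inventory, how="inner", on="guid")
          .merge(block_data, how="inner", on="huid"))
```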

analyses/residentialbuildingrecovery

class residentialbuildingrecovery.residentialbuildingrecovery.ResidentialBuildingRecovery(incore_client)

This analysis computes the recovery time needed for each residential building to receive full restoration from any damage state. Currently, tornadoes are the only supported hazard.

The methodology incorporates a multi-layer Monte Carlo simulation approach and determines the two-step recovery time that includes delay and repair. The delay model was modified from the REDi framework and calculates the outcomes resulting from delay-impeding factors such as post-disaster inspection, insurance claims, and government permits. The repair model follows the FEMA P-58 approach and is controlled by fragility functions.

The output of this analysis is a CSV file with time-stepping recovery probabilities at the building level.

Contributors
Science: Wanting Lisa Wang, John W. van de Lindt
Implementation: Wanting Lisa Wang, Gowtham Naraharisetty, and NCSA IN-CORE Dev Team
Related publications

Wang, Wanting Lisa, and John W. van de Lindt. “Quantitative Modeling of Residential Building Disaster Recovery and Effects of Pre-and Post-event Policies.” International Journal of Disaster Risk Reduction (2021): 102259.

Parameters:

incore_client (IncoreClient) – Service authentication.

static financing_delay(household_aggregated_income_groups, financial_resources)

Gets financing delay; the percentages calculated are the probabilities of housing units being financed by different resources.

Parameters:
  • household_aggregated_income_groups (pd.DataFrame) – Household aggregation of income groups at the building level.

  • financial_resources (pd.DataFrame) – Financial resources by household income groups.

Returns:

Results of financial delay

Return type:

pd.DataFrame

get_spec()

Get specifications of the residential building recovery analysis.

Returns:

A JSON object of specifications of the residential building recovery analysis.

Return type:

obj

static household_aggregation(household_income_predictions)

Gets household aggregation of income groups at the building level.

Parameters:

household_income_predictions (pd.DataFrame) – Income group prediction for each household

Returns:

Results of household aggregation of income groups at the building level.

Return type:

pd.DataFrame

static household_income_prediction(income_groups, num_samples)

Get income group prediction for each household.

Parameters:
  • income_groups (pd.DataFrame) – Socio-demographic data with household income group prediction.

  • num_samples (int) – Number of sample scenarios.

Returns:

Income group prediction for each household

Return type:

pd.DataFrame

recovery_rate(buildings, sample_damage_states, total_delay)

Gets the total time required for each building to receive full restoration, determined by the combination of delay time and repair time.

Parameters:
  • buildings (list) – List of buildings

  • sample_damage_states (pd.DataFrame) – Samples’ damage states

  • total_delay (pd.DataFrame) – Total delay time of financial delay and other factors from REDi framework.

Returns:

Recovery rates of all buildings for each sample

Return type:

pd.DataFrame

residential_recovery(buildings, sample_damage_states, socio_demographic_data, financial_resources, redi_delay_factors, num_samples)

Calculates residential building recovery for buildings

Parameters:
  • buildings (list) – Buildings dataset

  • sample_damage_states (pd.DataFrame) – Sample damage states

  • socio_demographic_data (pd.DataFrame) – Socio-demographic data for household income groups

  • financial_resources (pd.DataFrame) – Financial resources by household income groups

  • redi_delay_factors (pd.DataFrame) – Delay factors based on REDi framework

  • num_samples (int) – number of sample scenarios to use

Returns:

dictionary with id/guid and residential recovery for each quarter

Return type:

dict

run()

Executes the residential building recovery analysis.

Returns:

True if successful, False otherwise.

Return type:

bool

static time_stepping_recovery(recovery_results)

Converts results to a time frame. Currently gives results for 16 quarters over 4 years.

Parameters:

recovery_results (pd.DataFrame) – Total recovery time of financial delay and other factors from REDi framework.

Returns:

Time formatted recovery results.

Return type:

pd.DataFrame
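The time-stepping idea can be sketched as follows: the fraction of Monte Carlo samples recovered by the end of each quarter approximates the recovery probability at that quarter. The sample values here are illustrative, not real analysis output.

```python
import numpy as np

# Per-sample recovery times in years (toy data).
recovery_years = np.array([0.3, 1.2, 2.5, 0.8])
# 16 quarters over 4 years.
quarters = np.arange(1, 17) * 0.25
# Fraction of samples recovered by the end of each quarter.
prob = (recovery_years[:, None] <= quarters[None, :]).mean(axis=0)
```

By construction the curve is non-decreasing and reaches 1.0 once every sample has recovered.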

static total_delay(sample_damage_states, redi_delay_factors, financing_delay)

Calculates total delay by combining financial delay and other factors from REDi framework

Parameters:
  • sample_damage_states (pd.DataFrame) – Building inventory damage states.

  • redi_delay_factors (pd.DataFrame) – Delay impeding factors such as post-disaster inspection, insurance claim, and government permit based on building’s damage state.

  • financing_delay (pd.DataFrame) – Financing delay; the percentages calculated are the probabilities of housing units being financed by different resources.

Returns:

Total delay time of financial delay and other factors from REDi framework.

Return type:

pd.DataFrame

analyses/roaddamage

class roaddamage.roaddamage.RoadDamage(incore_client)

Road Damage Analysis calculates the probability of road damage based on an earthquake or tsunami hazard.

Parameters:

incore_client (IncoreClient) – Service authentication.

get_spec()

Get specifications of the road damage analysis.

Returns:

A JSON object of specifications of the road damage analysis.

Return type:

obj

road_damage_analysis_bulk_input(roads, hazard, hazard_type, hazard_dataset_id, use_hazard_uncertainty, geology_dataset_id, fragility_key, use_liquefaction)

Run analysis for multiple roads.

Parameters:
  • roads (list) – Multiple roads from input inventory set.

  • hazard (obj) – A hazard object.

  • hazard_type (str) – A hazard type of the hazard exposure (earthquake or tsunami).

  • hazard_dataset_id (str) – An id of the hazard exposure.

  • use_hazard_uncertainty (bool) – Flag to indicate use uncertainty or not

  • geology_dataset_id (str) – An id of the geology for use in liquefaction.

  • fragility_key (str) – Fragility key describing the type of fragility.

  • use_liquefaction (bool) – Liquefaction. True for using liquefaction information to modify the damage, False otherwise.

Returns:

A list of ordered dictionaries with road damage values, and a list of ordered dictionaries with other road data/metadata.

Return type:

list

road_damage_concurrent_future(function_name, num_workers, *args)

Utilizes concurrent.future module.

Parameters:
  • function_name (function) – The function to be parallelized.

  • num_workers (int) – Number of workers in parallelization.

  • *args – All the arguments in order to pass into parameter function_name.

Returns:

output_ds (list): A list of ordered dictionaries with road damage values.
output_dmg (list): A list of ordered dictionaries with other road data/metadata.
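The pattern these `*_concurrent_future` helpers follow can be sketched with the `concurrent.futures` module. The chunked function below is a hypothetical stand-in for `road_damage_analysis_bulk_input`, and a thread pool is used here only to keep the sketch self-contained; the library may use process-based workers.

```python
from concurrent.futures import ThreadPoolExecutor

def damage_for_chunk(roads):
    # Stand-in for the bulk-input analysis: one result dict per road.
    return [{"id": r, "damage": 0.0} for r in roads]

def concurrent_map(function_name, num_workers, *args):
    # Map the function over argument chunks and flatten the results.
    output = []
    with ThreadPoolExecutor(max_workers=num_workers) as executor:
        for result in executor.map(function_name, *args):
            output.extend(result)
    return output

results = concurrent_map(damage_for_chunk, 2, [["r1", "r2"], ["r3"]])
```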

run()

Executes road damage analysis.

analyses/seasidecge

class seasidecge.seasidecge.SeasideCGEModel(incore_client)
A computable general equilibrium (CGE) model is based on fundamental economic principles.

A CGE model uses multiple data sources to reflect the interactions of households, firms and relevant government entities as they contribute to economic activity. The model is based on (1) utility-maximizing households that supply labor and capital, using the proceeds to pay for goods and services (both locally produced and imported) and taxes; (2) the production sector, with perfectly competitive, profit-maximizing firms using intermediate inputs, capital, land and labor to produce goods and services for both domestic consumption and export; (3) the government sector that collects taxes and uses tax revenues in order to finance the provision of public services; and (4) the rest of the world.

Parameters:

incore_client (IncoreClient) – Service authentication.

get_spec()

Get basic specifications.

Note

The get_spec will be called exactly once per instance (during __init__), so children should not assume that they can do weird dynamic magic during this call. See the example spec at the bottom of this file.

Returns:

A JSON object of basic specifications of the analysis.

Return type:

obj

analyses/saltlakecge

class saltlakecge.saltlakecge.SaltLakeCGEModel(incore_client)

A computable general equilibrium (CGE) model is based on fundamental economic principles. A CGE model uses multiple data sources to reflect the interactions of households, firms and relevant government entities as they contribute to economic activity. The model is based on (1) utility-maximizing households that supply labor and capital, using the proceeds to pay for goods and services (both locally produced and imported) and taxes; (2) the production sector, with perfectly competitive, profit-maximizing firms using intermediate inputs, capital, land and labor to produce goods and services for both domestic consumption and export; (3) the government sector that collects taxes and uses tax revenues in order to finance the provision of public services; and (4) the rest of the world.

Parameters:

incore_client (IncoreClient) – Service authentication.

get_spec()

Get basic specifications.

Note

The get_spec will be called exactly once per instance (during __init__), so children should not assume that they can do weird dynamic magic during this call. See the example spec at the bottom of this file.

Returns:

A JSON object of basic specifications of the analysis.

Return type:

obj

run()

Returns:

salt_lake_city_cge(iNum, SAM, BB, JOBCR, MISCH, EMPLOY, OUTCR, sector_shocks)
Parameters:
  • iNum (int) –

  • SAM (pd.DataFrame) –

  • BB (str) –

  • JOBCR

  • MISCH

  • EMPLOY

  • OUTCR

  • sector_shocks

Returns:

class saltlakecge.equationlib.VarContainer

All matrix variables (tables) in the GAMS model are flattened to an array to provide a better interface to the solver.

AllVarList stores the initial values of all variables used in the GAMS model in an array. It also has an indexing system for looking them up.

namelist

A dictionary with all stored GAMS variables and its information.

nvars

The length of the array, i.e. the size of all matrix variables summed up.

initialVals

Stored initial values of all variables. Initialized to an empty list.
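The flattening and index lookup can be sketched minimally: each table variable is unrolled row-major into one long vector, and `(name, row, col)` maps to an offset. This is an illustrative re-implementation, not the library's actual class.

```python
import pandas as pd

class VarContainerSketch:
    def __init__(self):
        self.namelist = {}      # variable name -> layout info
        self.initial_vals = []  # flattened initial values

    def init(self, name, table):
        # Record the layout, then append the table's values row-major.
        rows, cols = list(table.index), list(table.columns)
        self.namelist[name] = {"start": len(self.initial_vals),
                               "rows": rows, "cols": cols}
        self.initial_vals.extend(table.to_numpy().ravel())

    def get_index(self, name, row, col):
        # Offset = start + row position * row width + column position.
        info = self.namelist[name]
        return (info["start"]
                + info["rows"].index(row) * len(info["cols"])
                + info["cols"].index(col))

vc = VarContainerSketch()
vc.init("Q", pd.DataFrame([[1.0, 2.0], [3.0, 4.0]],
                          index=["a", "b"], columns=["x", "y"]))
```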

add(name, rows=None, cols=None)
Parameters:
  • name

  • rows

  • cols

Returns:

get(name, x=None)

Returns a DataFrame, Series, or scalar variable based on the given name and the result array returned from the solver

Parameters:

name – GAMS variable name

Returns:

If x is not given, returns the initial values; if x is set to the solver result array, returns the resulting variable value

getIndex(name, row=None, col=None)

Look up the index by providing the variable name and label information

Parameters:
  • name – name of GAMS variable you want to look up

  • row – row label of the position you want to look up the index for (if it has row labels)

  • col – column label of the position you want to look up the index for (if it has column labels)

Returns:

the index of the position in the array

getInfo(name)

Get the information about a GAMS variable

Parameters:

name(str) – name of GAMS variable you want to look up

Returns:

a dictionary with all information

getLabel(index)

Look up variable name and label information by providing the index

Parameters:

index – the index in the array

Returns:

its information including the variable name, row label and column label if applicable

inList(name)

Check if a GAMS variable has been added to the container

Parameters:

name(str) – name of GAMS variable you want to look up

Returns:

Boolean, whether the variable is added.

init(name, initialValue)

Flatten the table variable and add it to the list. Also set the initial variable values array.

Parameters:
  • name – Name of the variable in GAMS

  • initialValue – a pandas DataFrame or pandas Series with initial values

Returns:

None.

lo(name, value)

Set the lower bounds (LOs) of a GAMS variable, providing the bounds as a DataFrame, Series, int, or float

Parameters:
  • name – GAMS variable name

  • value – The lower bound to be set

Returns:

None

set_value(name, values, target)

An internal method for setting the initial values or the UPs and LOs of variables

Parameters:
  • name – Name of the variable in GAMS

  • values – a pandas DataFrame, pandas Series, int, or float with the values to assign

  • target – target array to be set

Returns:

None

up(name, value)

Set the upper bounds (UPs) of a GAMS variable, providing the bounds as a DataFrame, Series, int, or float

Parameters:
  • name – GAMS variable name

  • value – The upper bound to be set

Returns:

None

write(filename)

Write (append) the variables to a file, in the format for setting ipopt model variables

Parameters:

filename – the output filename

Returns:

None

class saltlakecge.equationlib.ExprItem(v, const=1)

You can construct it with a variable, a constant, or a deep copy of another ExprItem

class saltlakecge.equationlib.Expr(item)
class saltlakecge.equationlib.ExprM(vars, name=None, rows=None, cols=None, m=None, em=None)

Three ways to create an ExprMatrix:

  1. Give it the variable name and selected rows and cols (may be empty); the constructor creates an expression matrix from the variable matrix.

  2. Give it a pandas Series or DataFrame; it creates the expression matrix with the content of the Series or DataFrame as constants.

  3. Give it an ExprMatrix; it returns a deep copy of it.

__invert__()

Return the transpose of an Expression matrix

__xor__(rhs)

Create a 2D list out of two single lists

loc(rows=None, cols=None)

get a subset of the matrix by labels

analyses/socialvulnerability

class socialvulnerability.socialvulnerability.SocialVulnerability(incore_client)

This analysis computes a social vulnerability score for each zone in the census data.

The computation extracts zoning and derives a social vulnerability score by comparing demographic features of interest against national average values.

The output of the computation is a dataset in CSV format.

Contributors
Science: Elaina Sutley, Amin Enderami
Implementation: Amin Enderami, Santiago Núñez-Corrales, and NCSA IN-CORE Dev Team

Related publications

Parameters:

incore_client (IncoreClient) – Service authentication.

static compute_svs(df, df_navs)

Computation of the social vulnerability score and corresponding zoning

Parameters:
  • df (pd.DataFrame) – dataframe for the census geographic unit of interest

  • df_navs (pd.DataFrame) – dataframe containing national average values

Returns:

Social vulnerability score and corresponding zoning data

Return type:

pd.DataFrame
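A toy sketch of comparing demographic factors against national averages. The factor names and the count-above-average scoring rule are illustrative assumptions; the actual scoring formula used by the analysis may differ.

```python
import pandas as pd

# Hypothetical demographic factors per census geographic unit.
df = pd.DataFrame({"pct_no_vehicle": [0.05, 0.20],
                   "pct_below_poverty": [0.10, 0.30]},
                  index=["tract_1", "tract_2"])
# Hypothetical national average values for the same factors.
df_navs = pd.Series({"pct_no_vehicle": 0.09, "pct_below_poverty": 0.13})

# Count, per zone, how many factors exceed the national average.
svs = (df > df_navs).sum(axis=1)
```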

get_spec()

Get specifications of the social vulnerability analysis.

Returns:

A JSON object of specifications of the social vulnerability model.

Return type:

obj

run()

Execute the social vulnerability analysis using known parameters.

social_vulnerability_model(df_navs, df_dem)
Parameters:
  • df_navs (pd.DataFrame) – dataframe containing the national average values for vulnerability factors

  • df_dem (pd.DataFrame) – dataframe containing demographic factors required for the vulnerability score

Returns:

analyses/tornadoepndamage

class tornadoepndamage.tornadoepndamage.TornadoEpnDamage(incore_client)

Computes electric power network (EPN) probability of damage based on a tornado hazard. The process for computing the structural damage is similar to other parts of the built environment. First, fragilities are obtained based on the hazard type and the attributes of the network towers and poles. Based on the fragility, the hazard intensity at the location of the infrastructure is obtained. Using this information, the probability of exceeding each limit state is computed, along with the probability of damage.

get_damage(network_dataset, tornado_dataset, tornado_id)
Parameters:
  • network_dataset (obj) – Network dataset.

  • tornado_dataset (obj) – Tornado dataset.

  • tornado_id (str) – Tornado id.

get_spec()

Get basic specifications.

Note

The get_spec will be called exactly once per instance (during __init__), so children should not assume that they can do weird dynamic magic during this call. See the example spec at the bottom of this file.

Returns:

A JSON object of basic specifications of the analysis.

Return type:

obj

analyses/transportationrecovery

class transportationrecovery.transportationrecovery.TransportationRecovery(incore_client)
get_spec()

Get specifications of the transportation recovery model.

Returns:

A JSON object of specifications of the transportation recovery model.

Return type:

obj

run()

Executes transportation recovery analysis

class transportationrecovery.transportationrecoveryutil.TransportationRecoveryUtil
static NBI_coordinate_mapping(NBI_file)

Coordinates in NBI are in the format xx(degrees)xx(minutes)xx.xx(seconds); map them to the traditional xx.xxxx decimal format in order to create a shapefile.

Parameters:

NBI_file (str) – Filename of a NBI file.

Returns:

NBI.

Return type:

dict
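The degrees-minutes-seconds conversion can be sketched directly from the documented format. The fixed field widths below are assumed from the docstring's xx(degrees)xx(minutes)xx.xx(seconds) pattern.

```python
def dms_to_decimal(dms):
    # Parse fixed-width fields: 2 digits degrees, 2 digits minutes,
    # the remainder seconds (widths assumed from the documented format).
    degrees = int(dms[0:2])
    minutes = int(dms[2:4])
    seconds = float(dms[4:])
    return degrees + minutes / 60.0 + seconds / 3600.0

lat = dms_to_decimal("403030.00")  # 40 deg 30 min 30.00 sec
```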

static convert_dmg_prob2state(dmg_results_filename)

The upstream bridge damage analysis generates a damage result file with the probability of each damage state; here, the damage state is determined using the maximum probability.

Parameters:

dmg_results_filename (str) – Filename of a damage results file.

Returns:

A dict of bridge damage values and a list of unrepaired bridges.

Return type:

dict
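The max-probability rule maps naturally onto `DataFrame.idxmax`. The damage-state column names below are illustrative.

```python
import pandas as pd

# Per-bridge damage state probabilities (toy values).
dmg = pd.DataFrame({"none": [0.6, 0.1], "slight": [0.3, 0.2],
                    "moderate": [0.1, 0.7]},
                   index=["bridge_1", "bridge_2"])

# Assign each bridge the damage state with the maximum probability.
state = dmg.idxmax(axis=1)
```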

static nw_reconstruct(node_df, arc_df, adt_data)
Parameters:
  • node_df (pd.DataFrame) – A node in _node_.csv.

  • arc_df (pd.DataFrame) – An edge in _edge_.csv.

  • adt_data (pd.DataFrame) – Average daily traffic flow.

Returns:

Network

Return type:

obj

static traveltime_freeflow(temp_network)

A travel time calculation.

Parameters:

temp_network (obj) – The investigated network.

Returns:

Travel efficiency.

Return type:

float

class transportationrecovery.post_disaster_long_term_solution.PostDisasterLongTermSolution(candidates, node_df, arc_df, bridge_df, bridge_damage_value, network, pm, all_ipw, path_adt)

Solution for the post-disaster long-term recovery function.

Initializes the chromosomes.

evaluate_solution(final)

Implementation of evaluation for all solutions

mutate()

Mutation operator

class transportationrecovery.nsga2.Solution(num_objectives)

Abstract solution. To be implemented.

Constructor. Parameters: number of objectives.

__lshift__(other)

True if this solution is dominated by the other (“<<” operator).

__rshift__(other)

True if this solution dominates the other (“>>” operator).
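The dominance relation behind these operators can be sketched for minimization problems: one solution dominates another if it is no worse in every objective and strictly better in at least one. The tuple-of-objectives representation is an illustrative assumption.

```python
def dominates(a, b):
    # Pareto dominance (minimization): a dominates b iff a is no worse
    # than b in every objective and strictly better in at least one.
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))
```

Note that two identical solutions do not dominate each other, and two solutions can be mutually non-dominated, which is what gives rise to the Pareto fronts discovered by fast_nondominated_sort.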

class transportationrecovery.nsga2.NSGAII(num_objectives, mutation_rate=0.1, crossover_rate=1.0)

Implementation of NSGA-II algorithm.

Constructor.

Parameters:
  • num_objectives (obj) – Number of objectives.

  • mutation_rate (float) – Mutation rate (default value 10%).

  • crossover_rate (float) – Crossover rate (default value 100%).

crowding_distance_assignment(front)

Assign a crowding distance for each solution in the front.

Parameters:

front (dict) – A set of chromosomes in the front level.

static fast_nondominated_sort(p)

Discover Pareto fronts in P, based on non-domination criterion.

Parameters:

p (obj) – A set of chromosomes (population).

Returns:

Fronts.

Return type:

dict

make_new_pop(p)

Make new population Q, offspring of P.

Parameters:

p (obj) – A set of chromosomes (population).

Returns:

Offspring.

Return type:

list

run(p, population_size, num_generations)

Run NSGA-II.

Parameters:
  • p (obj) – A set of chromosomes (population).

  • population_size (obj) – A population size.

  • num_generations (obj) – A number of generations.

Returns:

First front of Pareto front.

Return type:

list

static sort_crowding(p)

Calculate the crowding distance of two adjacent chromosomes in a front level.

Parameters:

p (obj) – A set of chromosomes (population).

static sort_objective(p, obj_idx)

Sort the chromosomes based on their objective values.

Parameters:
  • p (obj) – A set of chromosomes (population).

  • obj_idx (int) – The index of objective function.

static sort_ranking(p)

Sort the chromosomes according to their ranks.

Parameters:

p (obj) – A set of chromosomes (population).

analyses/waterfacilitydamage

class waterfacilitydamage.waterfacilitydamage.WaterFacilityDamage(incore_client)

Computes water facility damage for an earthquake, tsunami, tornado, or hurricane exposure.

get_spec()

Get basic specifications.

Note

The get_spec will be called exactly once per instance (during __init__), so children should not assume that they can do weird dynamic magic during this call. See the example spec at the bottom of this file.

Returns:

A JSON object of basic specifications of the analysis.

Return type:

obj

run()

Performs water facility damage analysis using the parameters from the spec and creates an output dataset in CSV format

Returns:

True if successful, False otherwise

Return type:

bool

waterfacility_damage_concurrent_futures(function_name, parallel_processes, *args)

Utilizes concurrent.future module.

Parameters:
  • function_name (function) – The function to be parallelized.

  • parallel_processes (int) – Number of workers in parallelization.

  • *args – All the arguments in order to pass into parameter function_name.

Returns:

A list of ordered dictionaries with water facility damage values, and a list of ordered dictionaries with other water facility data/metadata

Return type:

list

waterfacilityset_damage_analysis_bulk_input(facilities, hazard, hazard_type, hazard_dataset_id)

Gets applicable fragilities and calculates damage

Parameters:
  • facilities (list) – Multiple water facilities from input inventory set.

  • hazard (object) – A hazard object.

  • hazard_type (str) – A hazard type of the hazard exposure (earthquake, tsunami, tornado, or hurricane).

  • hazard_dataset_id (str) – An id of the hazard exposure.

Returns:

A list of ordered dictionaries with water facility damage values, and a list of ordered dictionaries with other water facility data/metadata

Return type:

list

analyses/waterfacilityrepaircost

class waterfacilityrepaircost.waterfacilityrepaircost.WaterFacilityRepairCost(incore_client)

Computes water facility repair cost.

Parameters:

incore_client (IncoreClient) – Service authentication.

get_spec()

Get specifications of the water facility repair cost analysis.

Returns:

A JSON object of specifications of the water facility repair cost analysis.

Return type:

obj

run()

Executes water facility repair cost analysis.

wf_repair_cost_bulk_input(water_facilities)

Run analysis for multiple water facilities.

Parameters:

water_facilities (list) – Multiple water facilities from input inventory set.

Returns:

A list of ordered dictionaries with water facility repair cost values and other data/metadata.

Return type:

list

wf_repair_cost_concurrent_future(function_name, num_workers, *args)

Utilizes concurrent.future module.

Parameters:
  • function_name (function) – The function to be parallelized.

  • num_workers (int) – Maximum number workers in parallelization.

  • *args – All the arguments in order to pass into parameter function_name.

Returns:

A list of ordered dictionaries with water facility repair cost values and other data/metadata.

Return type:

list

analyses/waterfacilityrestoration

class waterfacilityrestoration.waterfacilityrestoration.WaterFacilityRestoration(incore_client)

Computes water facility restoration for an earthquake, tsunami, tornado, or hurricane exposure.

get_spec()

Get basic specifications.

Note

The get_spec will be called exactly once per instance (during __init__), so children should not assume that they can do weird dynamic magic during this call. See the example spec at the bottom of this file.

Returns:

A JSON object of basic specifications of the analysis.

Return type:

obj

run()

Performs water facility restoration analysis using the parameters from the spec and creates an output dataset in CSV format

Returns:

True if successful, False otherwise

Return type:

bool

waterfacility_restoration(inventory_list, damage_result, mapping_set, restoration_key, end_time, time_interval, pf_interval, discretized_days)

Gets applicable restoration curve set and calculates restoration time and functionality

Parameters:
  • inventory_list (list) – Multiple water facilities from input inventory set.

  • damage_result (list) – Water facility damage

  • mapping_set (class) – Restoration Mapping Set

  • restoration_key (str) – Restoration Key to determine which curve to use. E.g. Restoration ID Code

  • end_time (float) – User specified end repair time

  • time_interval (float) – Increment interval of repair time. Default to 1 (1 day)

  • pf_interval (float) – Increment interval of percentage of functionality. Default 0.1 (10%)

  • discretized_days (list) – Days to compute discretized restoration (e.g. 1, 3, 7, 30, 90)

Returns:

inventory_restoration_map (list): A map between inventory and the restoration being applied.
time_results (list): Given a percentage of functionality, the change of repair time.
pf_results (list): Given a repair time, the change of the percentage of functionality.

analyses/wfnfunctionality

class wfnfunctionality.wfnfunctionality.WfnFunctionality(incore_client)

Computes water facility network functionality.

Parameters:

incore_client – Service client with authentication info

get_spec()

Get specifications of the water facility network functionality analysis.

Returns:

A JSON object of specifications of the WFN functionality analysis.

Return type:

obj

run()

Execute water facility network functionality analysis

wfn_functionality(distribution_nodes, pumpstation_nodes, num_samples, sampcols, wf_sample_df1, pp_sample_df1, G_wfn)

Run Water facility network functionality analysis.

Parameters:
  • distribution_nodes (list) – distribution nodes

  • pumpstation_nodes (list) – pump station nodes

  • num_samples (int) – number of simulations

  • sampcols (list) – list of number samples. e.g. “s0, s1,…”

  • wf_sample_df1 (dataframe) – water facility mcs failure sample dataframe

  • pp_sample_df1 (dataframe) – pipeline mcs failure sample dataframe

  • G_wfn (networkx object) – constructed network

Returns:

fs_results (list): A list of dictionaries with id/guid and failure state for N samples.
fp_results (list): A list of dictionaries with failure probability and other data/metadata.

models

models/hazard/earthquake

models/hazard/flood

models/hazard/hazard

class models.hazard.Hazard(metadata)

Hazard.

Parameters:

metadata (dict) – Hazard metadata.

classmethod from_json_file(file_path)

Get hazard from the file.

Parameters:

file_path (str) – json file path that holds the definition of a hazard.

Returns:

Hazard

Return type:

obj

classmethod from_json_str(json_str)

Create hazard object from json string.

Parameters:

json_str (str) – JSON of the Dataset.

Returns:

Hazard

Return type:

obj

read_local_raster_hazard_values(payload: list)

Read local hazard values from raster dataset

Parameters:

payload (list) –

Returns:

Hazard values.

Return type:

obj

models/hazard/hazarddataset

models/hazard/hurricane

models/hazard/tornado

models/hazard/tsunami

models/dfr3curve.py

class models.dfr3curve.DFR3Curve(curve_parameters)

A class to represent a DFR3 curve.

get_building_period(curve_parameters, **kwargs)

Get building period from the fragility curve.

Parameters:
  • curve_parameters (dict) – Fragility curve parameters.

  • **kwargs – Keyword arguments.

Returns:

Building period.

Return type:

float

solve_curve_expression(hazard_values: dict, curve_parameters: dict, **kwargs)

Evaluates expression of the curve.

Parameters:
  • hazard_values (dict) – Hazard values. Only applicable to fragilities

  • curve_parameters (dict) – Curve parameters.

  • **kwargs – Keyword arguments.

Returns:

Result of the evaluated expression. Can be float, numpy.ndarray etc.

Return type:

any

solve_curve_for_inverse(hazard_values: dict, curve_parameters: dict, **kwargs)
Evaluates the expression of the curve by calculating its inverse, e.g. ppf for cdf. Only supports cdf() for now; more inverse methods may be added in the future.

Parameters:
  • hazard_values (dict) – Hazard values. Only applicable to fragilities

  • curve_parameters (dict) – Curve parameters.

  • **kwargs – Keyword arguments.

Returns:

Result of the evaluated inverse expression. Can be float, numpy.ndarray etc.

Return type:

any
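The cdf/ppf relationship the inverse solver relies on can be shown with the standard library's `statistics.NormalDist` (used here only as a stand-in distribution): the ppf (`inv_cdf`) is the inverse of the cdf, so `inv_cdf(cdf(x))` recovers x.

```python
from statistics import NormalDist

# Standard normal: cdf maps a value to a probability, inv_cdf (ppf)
# maps the probability back to the value.
nd = NormalDist(mu=0.0, sigma=1.0)
p = nd.cdf(1.5)
x = nd.inv_cdf(p)
```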

models/fragilitycurveset

class models.fragilitycurveset.FragilityCurveSet(metadata)

A class for fragility curves.

Parameters:

metadata (dict) – fragility curve metadata.

Raises:
  • ValueError – Raised if there is an unsupported number of fragility curves or if a key curve field is missing.

static adjust_for_small_overlap(small_overlap, limit_states, damage_states)
Parameters:
  • small_overlap (obj) – Overlap.

  • limit_states (dict) – Limit states.

  • damage_states (dict) – Damage states.

Returns:

Damage states overlap.

Return type:

list

calculate_damage_interval(damage, hazard_type='earthquake', inventory_type: str = 'building')
Parameters:
  • damage (list) – A list of limit states.

  • hazard_type (str) – A string describing the hazard being evaluated.

  • inventory_type (str) – A string describing the type of element being evaluated.

Returns:

LS-to-DS mapping

Return type:

list
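A common LS-to-DS convention, assumed here for illustration (the mapping actually applied can vary with hazard and inventory type): interval damage-state probabilities are successive differences of the ordered limit-state exceedance probabilities.

```python
# Ordered limit-state exceedance probabilities (toy values).
ls = {"LS_0": 0.9, "LS_1": 0.6, "LS_2": 0.2}

# Interval damage-state probabilities as successive differences.
ds = {"DS_0": 1.0 - ls["LS_0"],
      "DS_1": ls["LS_0"] - ls["LS_1"],
      "DS_2": ls["LS_1"] - ls["LS_2"],
      "DS_3": ls["LS_2"]}
```

By construction the damage-state probabilities are non-negative (given monotone limit states) and sum to 1.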

calculate_limit_state(hazard_values: dict = {}, inventory_type: str = 'building', **kwargs)

WIP computation of limit state probabilities accounting for custom expressions.

Parameters:
  • hazard_values (dict) – A dictionary with hazard values to compute probability.

  • inventory_type (str) – An inventory type.

  • **kwargs – Keyword arguments.

Returns:

Limit state probabilities.

Return type:

OrderedDict

construct_expression_args_from_inventory(inventory_unit: dict)
Parameters:

inventory_unit (dict) – An inventory set.

Returns:

Function parameters.

Return type:

dict

classmethod from_json_file(file_path)

Get dfr3set from the file.

Parameters:

file_path (str) – json file path that holds the definition of a dfr3 curve.

Returns:

dfr3set from file.

Return type:

obj

classmethod from_json_str(json_str)

Get dfr3set from json string.

Parameters:

json_str (str) – JSON of the Dataset.

Returns:

dfr3set from JSON.

Return type:

obj

models/mapping

class models.mapping.Mapping(entry: dict, rules: list)

A mapping class that contains the rules and keys of dfr3 curves.

Parameters:
  • entry (dict) – mapping entry.

  • rules (list) – mapping match rules

models/mappingset

class models.mappingset.MappingSet(metadata)

A class for dfr3 mapping.

Parameters:

metadata (dict) – mapping metadata.

classmethod from_json_file(file_path, data_type='incore:dfr3MappingSet')

Get dfr3 mapping from the file.

Parameters:
  • file_path (str) – json file path that holds the definition of a dfr3 curve.

  • data_type (str) – mapping dataset type

Returns:

dfr3 mapping from file.

Return type:

obj

classmethod from_json_str(json_str)

Get dfr3 mapping from json string.

Parameters:

json_str (str) – JSON of the Dataset.

Returns:

dfr3 mapping from JSON.

Return type:

obj

models/networkdataset

class models.networkdataset.NetworkDataset(dataset: Dataset)

This class wraps around the Dataset class.

Parameters:

dataset (obj) – The dataset object we want to extract the network data from.

classmethod from_data_service(id: str, data_service: DataService)

Get Dataset from Data service, get metadata as well.

Parameters:
  • id (str) – ID of the Dataset.

  • data_service (obj) – Data service.

Returns:

network dataset

Return type:

obj

classmethod from_dataset(dataset: Dataset)

Turn a Dataset into a network component.

Parameters:

dataset (obj) – Dataset Object.

Returns:

network dataset

Return type:

obj

classmethod from_files(node_file_path, link_file_path, graph_file_path, network_data_type, link_data_type, node_data_type, graph_data_type)

Create Dataset from the file.

Parameters:
  • node_file_path (str) – File path.

  • link_file_path (str) – File path.

  • graph_file_path (str) – File path.

  • network_data_type (str) – Network data type.

  • link_data_type (str) – Link data type.

  • node_data_type (str) – Node data type.

  • graph_data_type (str) – Graph data type.

Returns:

Dataset from file.

Return type:

obj

classmethod from_json_str(json_str, data_service: DataService = None, folder_path=None)

Get Dataset from json string.

Parameters:
  • json_str (str) – JSON of the Dataset.

  • data_service (obj) – Data Service class.

  • folder_path (str) – File path.

Returns:

network dataset

Return type:

obj

models/repaircurveset

class models.repaircurveset.RepairCurveSet(metadata)

Class for repair curves.

Parameters:

metadata (dict) – repair curve metadata.

Raises:

ValueError – Raised if there is an unsupported number of repair curves or if a key curve field is missing.

calculate_inverse_repair_rates(**kwargs)

Computation of inverse repair rates: the inverse of the CDF, that is, the PPF.

Parameters:

**kwargs – Keyword arguments.

Returns:

Limit state specific repair rates.

Return type:

OrderedDict

calculate_repair_rates(**kwargs)

Computation of repair rates.

Parameters:

**kwargs – Keyword arguments.

Returns:

Limit state specific repair rates.

Return type:

OrderedDict
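Repair curves are commonly parameterized as lognormal CDFs of time. The sketch below evaluates one such curve; the lognormal form and the `median`/`beta` parameter names are assumptions for illustration, not pyincore's API — the actual curve form comes from the curve-set metadata.

```python
import math

def lognormal_repair_rate(t, median, beta):
    # Sketch: lognormal CDF with median `median` and log standard deviation
    # `beta` (assumed functional form), evaluated at time t.
    return 0.5 * (1.0 + math.erf(math.log(t / median) / (beta * math.sqrt(2.0))))

rate_at_median = lognormal_repair_rate(t=30.0, median=30.0, beta=0.5)  # 0.5 at the median
rate_later = lognormal_repair_rate(t=90.0, median=30.0, beta=0.5)
```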

classmethod from_json_file(file_path)

Get dfr3set from the file.

Parameters:

file_path (str) – json file path that holds the definition of a dfr3 curve.

Returns:

dfr3set from file.

Return type:

obj

classmethod from_json_str(json_str)

Get dfr3set from json string.

Parameters:

json_str (str) – JSON of the Dataset.

Returns:

dfr3set from JSON.

Return type:

obj

models/restorationcurveset

class models.restorationcurveset.RestorationCurveSet(metadata)

Class for restoration curves.

Parameters:

metadata (dict) – restoration curve metadata.

Raises:

ValueError – Raised if there is an unsupported number of restoration curves or if a key curve field is missing.

calculate_inverse_restoration_rates(**kwargs)

Computation of inverse restoration rates: the inverse of the CDF, that is, the PPF.

Parameters:

**kwargs – Keyword arguments.

Returns:

Limit state specific restoration rates.

Return type:

OrderedDict

calculate_restoration_rates(**kwargs)

Computation of restoration rates.

Parameters:

**kwargs – Keyword arguments.

Returns:

Limit state specific restoration rates.

Return type:

OrderedDict

classmethod from_json_file(file_path)

Get dfr3set from the file.

Parameters:

file_path (str) – json file path that holds the definition of a dfr3 curve.

Returns:

dfr3set from file.

Return type:

obj

classmethod from_json_str(json_str)

Get dfr3set from json string.

Parameters:

json_str (str) – JSON of the Dataset.

Returns:

dfr3set from JSON.

Return type:

obj

models/units

class models.units.Units

utilities

utils/analysisutil

class utils.analysisutil.AnalysisUtil

Utility methods for analysis.

static adjust_damage_for_liquefaction(limit_state_probabilities, ground_failure_probabilities)

Adjusts building damage probability based on liquefaction ground failure probability. The liquefaction damage input has three values: the first two are the same, and the third may differ. The first two are applied to all damage states except the highest.

Parameters:
  • limit_state_probabilities (obj) – Limit state probabilities.

  • ground_failure_probabilities (list) – Ground failure probabilities.

Returns:

Adjusted limit state probability.

Return type:

OrderedDict
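A hypothetical sketch of the adjustment described above. Combining each limit-state probability with the ground-failure probability as a union of independent events is an assumption for illustration; the real method's combination rule may differ.

```python
from collections import OrderedDict

def adjust_for_liquefaction(limit_states, ground_failure):
    # Hypothetical sketch: combine each limit-state probability with a
    # ground-failure probability as a union of independent events
    # (P(A or B) = P(A) + P(B) - P(A)P(B), assumed rule).
    # Per the description above, the first two ground-failure values apply
    # to all damage states except the highest, which uses the third.
    keys = list(limit_states.keys())
    adjusted = OrderedDict()
    for i, key in enumerate(keys):
        gf = ground_failure[2] if i == len(keys) - 1 else ground_failure[0]
        p = limit_states[key]
        adjusted[key] = p + gf - p * gf
    return adjusted

out = adjust_for_liquefaction(
    OrderedDict(LS_0=0.5, LS_1=0.3, LS_2=0.1), [0.2, 0.2, 0.4])
```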

static chunks(lst, n)

Yield successive n-sized chunks from lst.
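A minimal equivalent of this helper, for reference:

```python
def chunks(lst, n):
    # Yield successive n-sized chunks from lst (minimal equivalent sketch).
    for i in range(0, len(lst), n):
        yield lst[i:i + n]

parts = list(chunks([1, 2, 3, 4, 5], 2))  # → [[1, 2], [3, 4], [5]]
```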

static create_gdocstr_from_spec(specs)
Parameters:

specs (dict) – JSON of the specs for each analysis.

Returns:

Google-format docstrings to copy for the run() method of any analysis.

Return type:

str

static determine_parallelism_locally(self, number_of_loops, user_defined_parallelism=0)

Determine the parallelism on the current compute node.

Parameters:
  • number_of_loops – Total number of loops to be executed on the current compute node.

  • user_defined_parallelism – A customized parallelism specified by the user.

Returns:

parallelism on current compute node

Return type:

int
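The capping logic described above can be sketched as follows. This is an assumed implementation: never exceed the local CPU count or the number of loops, and treat 0 as "no user-defined limit".

```python
import os

def determine_parallelism(number_of_loops, user_defined_parallelism=0):
    # Sketch (assumed logic): parallelism is bounded by the CPU count,
    # by the user's limit when positive, and by the number of loops.
    cores = os.cpu_count() or 1
    if user_defined_parallelism > 0:
        cores = min(cores, user_defined_parallelism)
    return max(1, min(number_of_loops, cores))
```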

static do_hazard_values_have_errors(hazard_vals)

Checks if any of the hazard values have errors.

Parameters:

hazard_vals (list) – List of hazard values returned by the service for a particular point

Returns: True if any of the values are error codes such as -9999.1, -9999.2, etc.
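A self-contained sketch of this check; treating sentinel values in the -9999.x range as error codes is an assumption based on the examples given above.

```python
def hazard_values_have_errors(hazard_vals):
    # Sketch: service error codes are assumed to be sentinel values in the
    # -9999.x range (e.g. -9999.1, -9999.2), per the description above.
    return any(v is not None and -10000.0 < v <= -9999.0 for v in hazard_vals)

has_err = hazard_values_have_errors([0.5, -9999.2])  # → True
```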

static get_custom_types_str(types)
Parameters:

types (str, list) – Can be string or List of strings

Returns:

Formatted string with applicable datatypes, used to generate docstrings from specs.

Return type:

str

static get_discretized_restoration(restoration_curve_set, discretized_days)

Converts a continuous restoration curve into a dictionary of discretized restoration values for the given days.

Parameters:
  • restoration_curve_set (obj) –

  • discretized_days (list) –

Returns:

Discretized restoration for each day, e.g. {day1: [100, 50, 9, 4, 3], day3: [100, 100, 50, 13, 4], ...}

Return type:

dict
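Conceptually this is sampling a continuous restoration function at the requested days, as in the sketch below. The linear toy curve and the "dayN" keying are assumptions for illustration only.

```python
def discretize_restoration(curve, days):
    # Sketch: sample a continuous restoration function (percent functional
    # vs. time in days) at the requested days, keyed "day1", "day3", ...
    return {"day{}".format(d): curve(d) for d in days}

# Toy linear curve (assumption): fully restored after 90 days.
sampled = discretize_restoration(lambda t: min(100.0, 100.0 * t / 90.0), [1, 3, 90])
```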

static get_expected_damage(mean_damage, dmg_ratios)

Calculates mean damage.

Parameters:
  • mean_damage (float) – Mean damage value.

  • dmg_ratios (obj) – Damage ratios, descriptions and states.

Returns:

A value of the damage state.

Return type:

float

static get_exposure_from_hazard_values(hazard_vals, hazard_type)

Finds if a point is exposed to hazard based on all the demand values present. Returns “n/a” for earthquake, tsunami, hurricane and hurricane windfields.

Parameters:
  • hazard_vals (list) – List of hazard values returned by the service for a particular point

  • hazard_type (str) – Type of the hazard

Returns:

If hazard is exposed or not. Can be one of ‘yes’, ‘no’, ‘partial’, ‘error’ or ‘n/a’

Return type:

str
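The decision table above might be sketched as follows. The error-code threshold, the exact hazard-type strings, and the zero-means-unexposed rule are all assumptions for illustration; the real per-hazard rules may differ.

```python
ERROR_FLOOR = -9999.0  # assumed sentinel range for service error codes

def exposure_from_hazard_values(hazard_vals, hazard_type):
    # Hypothetical sketch of the exposure decision described above.
    if hazard_type in ("earthquake", "tsunami", "hurricane", "hurricaneWindfield"):
        return "n/a"
    if any(v is not None and v <= ERROR_FLOOR for v in hazard_vals):
        return "error"
    exposed = [v for v in hazard_vals if v is not None and v > 0]
    if not exposed:
        return "no"
    return "yes" if len(exposed) == len(hazard_vals) else "partial"
```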

static get_hazard_demand_type(building, fragility_set, hazard_type)

Get hazard demand type. This method replaces an earlier JSON-based version; fragility_set is now a FragilityCurveSet object rather than JSON.

Parameters:
  • building (obj) – A JSON mapping of a geometric object from the inventory: current building.

  • fragility_set (obj) – FragilityCurveSet object

  • hazard_type (str) – A hazard type such as earthquake, tsunami etc.

Returns:

A hazard demand type.

Return type:

str

static get_hazard_demand_types_units(building, fragility_set, hazard_type, allowed_demand_types)

Get hazard demand types and units. This method replaces an earlier JSON-based version; fragility_set is now a FragilityCurveSet object rather than JSON.

Parameters:
  • building (obj) – A JSON mapping of a geometric object from the inventory: current building.

  • fragility_set (obj) – FragilityCurveSet object

  • hazard_type (str) – A hazard type such as earthquake, tsunami etc.

  • allowed_demand_types (list) – A list of allowed demand types in lowercase

Returns:

A hazard demand type.

Return type:

str

static get_type_str(class_type)
Parameters:

class_type (str) – Example: <class ‘int’>

Returns:

Text inside first single quotes of a string

Return type:

str

static group_by_demand_type(inventories, fragility_sets, hazard_type='earthquake', is_building=False)

This method should replace group_by_demand_type in the future. fragility_sets is no longer a list of dictionaries (JSON) but a list of FragilityCurveSet objects.

Parameters:
  • inventories (dict) – Dictionary of {id: inventory}.

  • fragility_sets (dict) – fragility_sets

  • hazard_type (str) – default to earthquake

  • is_building (bool) – if the inventory is building or not

Returns:

Grouped inventory with { (demandunit, demandtype): [inventory ids] }

Return type:

dict
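The grouping itself reduces to bucketing ids by their (demand unit, demand type) pair, as in this sketch. The `demand_of` lookup is a hypothetical stand-in for resolving each inventory item's demand through its fragility set.

```python
def group_by_demand(inventory_ids, demand_of):
    # Sketch: bucket inventory ids by their (demand unit, demand type) pair.
    # `demand_of` is a hypothetical lookup standing in for the fragility sets.
    grouped = {}
    for inv_id in inventory_ids:
        grouped.setdefault(demand_of(inv_id), []).append(inv_id)
    return grouped

demands = {"b1": ("g", "PGA"), "b2": ("g", "PGA"), "b3": ("m", "inundationDepth")}
groups = group_by_demand(demands.keys(), demands.get)
```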

utils/cgeoutputprocess

class utils.cgeoutputprocess.CGEOutputProcess

This class converts CSV result outputs of the Joplin CGE analysis to JSON format.

static get_cge_domestic_supply(domestic_supply, domestic_supply_path=None, filename_json=None, supply_categories=('Goods', 'Trade', 'Other', 'HS1', 'HS2', 'HS3'))

Calculate domestic supply results from the output files of the Joplin CGE analysis and convert the results to JSON format:

{
  "afterEvent": {"Goods": 662.3, "Trade": 209.0, "Other": 254.1,
                 "HS1": 22.0, "HS2": 1337.1, "HS3": 466.2},
  "beforeEvent": {"Goods": 662.3, "Trade": 209.0, "Other": 254.1,
                  "HS1": 22.0, "HS2": 1337.1, "HS3": 466.2},
  "%_change": {"Goods": -1.1, "Trade": -1.1, "Other": -1.1,
               "HS1": -1.1, "HS2": -1.1, "HS3": -1.1}
}

Parameters:
  • domestic_supply (obj) – IN-CORE dataset for CGE domestic supply result.

  • supply_categories (list) – Supply categories to partition data with.

  • domestic_supply_path (obj) – A fallback for the case that the domestic supply object of CGE is not provided; for example, a user wants to directly pass in csv files: a path to the CGE domestic supply result.

  • filename_json (str) – Path and name to save the json output file in. E.g. “cge_domestic_supply.json”

Returns:

CGE total domestic supply. A JSON of the total domestic supply results ordered by category.

Return type:

obj

static get_cge_employment(pre_demand, post_demand, pre_demand_path=None, post_demand_path=None, filename_json=None, demand_categories=('GOODS', 'TRADE', 'OTHER'))

Calculate employment results from the output files of the Joplin CGE analysis and convert the results to JSON format. The value is a sum of the L1, L2 and L3 labor group numbers:

{
  "afterEvent": {"Goods": 6680, "Trade": 8876, "Other": 23767},
  "beforeEvent": {"Goods": 6744, "Trade": 8940, "Other": 24147},
  "%_change": {"Goods": -0, "Trade": -x.x, "Other": -x.x}
}

Parameters:
  • pre_demand (obj) – IN-CORE dataset for CGE household Pre disaster factor demand result.

  • post_demand (obj) – IN-CORE dataset for CGE household Post disaster factor demand result.

  • pre_demand_path (obj) – A fallback for the case that pre_disaster_demand_factor_path object of CGE is not provided. For example a user wants to directly pass in csv files, a path to CGE household count result.

  • post_demand_path (obj) – A fallback for the case that post_disaster_demand_factor_path object of CGE is not provided. For example a user wants to directly pass in csv files, a path to CGE household count result.

  • filename_json (str) – Path and name to save json output file in. E.g “cge_employment.json”

  • demand_categories (list) – demand categories to partition data with.

Returns:

CGE total employment. A JSON of the employment results ordered by category.

Return type:

obj

static get_cge_gross_income(gross_income, gross_income_path=None, filename_json=None, income_categories=('HH1', 'HH2', 'HH3', 'HH4', 'HH5'))

Calculate household gross income results from the output files of the Joplin CGE analysis and convert the results to JSON format:

{
  "beforeEvent": {"HH1": 13, "HH2": 153.5, "HH3": 453.1, "HH4": 438.9, "HH5": 125.0},
  "afterEvent": {"HH1": 13, "HH2": 152.5, "HH3": 445.6, "HH4": 432.9, "HH5": 124.5},
  "%_change": {"HH1": -0, "HH2": -x.x, "HH3": -x.x, "HH4": -x.x, "HH5": -x.x}
}

Parameters:
  • gross_income (obj) – IN-CORE dataset for CGE household gross income result.

  • gross_income_path (obj) – A fallback for the case that gross_income object of CGE is not provided. For example a user wants to directly pass in csv files, a path to CGE gross income result.

  • filename_json (str) – Path and name to save json output file in. E.g “cge_total_house_income.json”

  • income_categories (list) – A list of income categories to partition the data

Returns:

CGE total house income. A JSON of the total household income results ordered by category.

Return type:

obj

static get_cge_household_count(household_count, household_count_path=None, filename_json=None, income_categories=('HH1', 'HH2', 'HH3', 'HH4', 'HH5'))

Calculate household count results from the output files of the Joplin CGE analysis and convert the results to JSON format:

{
  "beforeEvent": {"HH1": 3611, "HH2": 5997.0, "HH3": 7544.1, "HH4": 2394.1, "HH5": 793.0},
  "afterEvent": {"HH1": 3588, "HH2": 5929.8, "HH3": 7324.1, "HH4": 2207.5, "HH5": 766.4},
  "%_change": {"HH1": -0.6369, "HH2": -1.1, "HH3": -2.92, "HH4": -7.8, "HH5": -3.35}
}

Parameters:
  • household_count (obj) – IN-CORE dataset for CGE household count result.

  • household_count_path (obj) – A fallback for the case that household count object of CGE is not provided. For example a user wants to directly pass in csv files, a path to CGE household count result.

  • filename_json (str) – Path and name to save json output file in. E.g “cge_total_household_count.json”

  • income_categories (list) – A list of income categories to partition the data

Returns:

CGE total household count. A JSON of the total household count results ordered by category.

Return type:

obj

utils/dataprocessutil

class utils.dataprocessutil.DataProcessUtil
static create_mapped_dmg(inventory, dmg_result, arch_mapping, groupby_col_name='max_state', arch_col='archetype')

This is a helper function, as the operations performed in create_mapped_dmg_result and create_mapped_dmg_result_gal are the same.

Returns:

returns dataframes of the results ordered by cluster and category.

Return type:

Tuple of two dataframes

static create_mapped_dmg_result(inventory, dmg_result, arch_mapping, groupby_col_name='max_state', arch_col='archetype')
Parameters:
  • inventory – Dataframe representing the inventory.

  • dmg_result – Damage result that needs to be merged with the inventory and grouped.

  • arch_mapping – Path to the archetype mappings.

Returns:

JSON of the results ordered by cluster and category.

Return type:

ret_json

static create_mapped_dmg_result_gal(inventory, max_dmg_result, arch_mapping, groupby_col_name='max_state', arch_col='archetype')

This function performs a similar operation to create_mapped_dmg_result but is used for Galveston, which has a different mapping.

Parameters:
  • inventory – Dataframe representing the inventory.

  • max_dmg_result – Damage result that needs to be merged with the inventory and grouped.

  • arch_mapping – Path to the archetype mappings.

Returns:

JSON of the results ordered by cluster and category.

Return type:

ret_json

static create_mapped_func_result(inventory, bldg_func, arch_mapping, arch_col='archetype')
Parameters:
  • inventory – Dataframe representing the inventory.

  • bldg_func – Building functionality state dataset.

  • arch_mapping – Path to the archetype mappings.

  • arch_col – Archetype column to use for the clustering.

Returns:

JSON of the results ordered by cluster and category.

Return type:

ret_json

static get_mapped_result_from_analysis(client, inventory_id: str, dmg_result_dataset, bldg_func_dataset, archetype_mapping_id: str, groupby_col_name: str = 'max_state', arch_col='archetype')

Use this if you want to load results directly from the output files of the analysis, rather than storing the results to the data service and loading them from there using ids. It takes the static inputs, inventory and archetypes, as dataset ids. The result inputs are taken as Dataset class objects, which are created by reading the output result files.

Parameters:
  • client – Service client with authentication info

  • inventory_id – Inventory dataset id

  • dmg_result_dataset – Incore dataset for damage result

  • bldg_func_dataset – Incore dataset for building func dataset

  • archetype_mapping_id – Mapping id dataset for archetype

Returns:

JSON of the damage state results ordered by cluster and category. func_ret_json: JSON of the building functionality results ordered by cluster and category. mapped_df: Dataframe of max damage state.

Return type:

dmg_ret_json

static get_mapped_result_from_dataset_id(client, inventory_id: str, dmg_result_id: str, bldg_func_id, archetype_mapping_id: str, groupby_col_name: str = 'max_state', arch_col='archetype')

Use this if your damage results are already stored in the data service and you have their dataset ids. All the inputs (except groupby_col_name) are dataset ids.

Parameters:
  • client – Service client with authentication info

  • inventory_id – Inventory dataset id

  • dmg_result_id – Damage result dataset id

  • bldg_func_id – Incore dataset for building func id

  • archetype_mapping_id – Mapping id dataset for archetype

  • groupby_col_name – column name to group by, default to max_state

  • arch_col – column name for the archetype to perform the merge

Returns:

JSON of the damage state results ordered by cluster and category. func_ret_json: JSON of the building functionality results ordered by cluster and category. max_state_df: Dataframe of max damage state

Return type:

dmg_ret_json

static get_mapped_result_from_path(inventory_path: str, dmg_result_path: str, func_result_path: str, archetype_mapping_path: str, groupby_col_name: str, arch_col='archetype')
Parameters:
  • inventory_path – Path to the zip file containing the inventory example: /Users/myuser/5f9091df3e86721ed82f701d.zip

  • dmg_result_path – Path to the damage result output file

  • func_result_path – Path to the bldg functionality result output file

  • archetype_mapping_path – Path to the archetype mappings

  • groupby_col_name – column name to group by, default to max_state

  • arch_col – column name for the archetype to perform the merge

Returns:

JSON of the damage state results ordered by cluster and category. func_ret_json: JSON of the building functionality results ordered by cluster and category. mapped_df: Dataframe of max damage state

Return type:

dmg_ret_json

static get_max_damage_state(dmg_result)

Given damage result output decide the maximum damage state for each guid.

Parameters:

dmg_result (pd.DataFrame) – damage result output, such as building damage, EPF damage and etc.

Returns:

Pandas dataframe that has column GUID and column max_state.

Return type:

pd.DataFrame
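Picking the maximum damage state per guid is an argmax over the damage-state columns. The sketch below operates on plain dict rows rather than a pd.DataFrame, and the `DS_*` column names are assumptions for illustration.

```python
def max_damage_state(dmg_rows, state_cols=("DS_0", "DS_1", "DS_2", "DS_3")):
    # Sketch over plain dict rows (the real method works on a pd.DataFrame):
    # for each guid, pick the damage-state column with the highest probability.
    return [
        {"guid": row["guid"],
         "max_state": max(state_cols, key=lambda col: row[col])}
        for row in dmg_rows
    ]

rows = [{"guid": "a", "DS_0": 0.1, "DS_1": 0.2, "DS_2": 0.6, "DS_3": 0.1}]
result = max_damage_state(rows)  # → [{'guid': 'a', 'max_state': 'DS_2'}]
```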

utils/datasetutil

class utils.datasetutil.DatasetUtil
static construct_updated_inventories(inventory_dataset: Dataset, add_info_dataset: Dataset, mapping: MappingSet)

This method updates the given inventory with retrofit information based on the mapping and additional information.

Parameters:
  • inventory_dataset (gpd.GeoDataFrame) – Geopandas DataFrame object

  • add_info_dataset (pd.DataFrame) – Pandas DataFrame object

  • mapping (MappingSet) – MappingSet object

Returns:

Updated inventory dataset. gpd.GeoDataFrame: Updated inventory geodataframe.

Return type:

Dataset

static join_datasets(geodataset, tabledataset)

Join a Geopandas geodataframe and a non-geospatial Dataset using the GUID field.

Parameters:
  • geodataset (gpd.Dataset) – pyincore Dataset object with geospatial data

  • tabledataset (gpd.Dataset) – pyincore Dataset object without geospatial data

Returns:

Geopandas DataFrame object

Return type:

gpd.GeoDataFrame

static join_table_dataset_with_source_dataset(dataset, client)

Creates a geopandas geodataframe by joining a table dataset and its source dataset.

Parameters:
  • dataset (Dataset) – pyincore dataset object

  • client (Client) – pyincore service client object

Returns:

Geopandas geodataframe object.

Return type:

gpd.Dataset

utils/evaluateexpression

utils/geoutil

class utils.geoutil.GeoUtil

Utility methods for georeferenced data.

static add_guid(infile, outfile)

Add a UUID to a shapefile or geopackage.

Parameters:
  • infile (str) – Full path and filename of the input file.

  • outfile (str) – Full path and filename of the output file.

Returns:

A success or fail to add guid.

Return type:

bool
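At its core this is attaching a fresh UUID to each record, as sketched below over plain feature dicts. The feature-dict shape and the success-flag return are assumptions mirroring the description above, not the library's file-based API.

```python
import uuid

def add_guid_to_features(features):
    # Sketch: attach a fresh GUID to each feature's properties, analogous to
    # what add_guid does per record in a shapefile/geopackage.
    for feature in features:
        feature.setdefault("properties", {})["guid"] = str(uuid.uuid4())
    return True  # the real method reports success/failure as a bool

feats = [{"properties": {"name": "bridge-1"}}]
ok = add_guid_to_features(feats)
```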

static calc_geog_distance_between_points(point1, point2, unit=1)

Calculate the geographic distance between two points; this only works for the WGS84 projection.

Parameters:
  • point1 (Point) – Point 1 coordinates.

  • point2 (Point) – Point 2 coordinates.

  • unit (int, optional (Defaults to 1)) – Unit selector, 1: meter, 2: km, 3: mile.

Returns:

Distance between points.

Return type:

str

static calc_geog_distance_from_linestring(line_segment, unit=1)

Calculate the geographic distance of a line string segment.

Parameters:
  • line_segment (Shapely.geometry) – A multi line string with coordinates of segments.

  • unit (int, optional (Defaults to 1)) – Unit selector, 1: meter, 2: km, 3: mile.

Returns:

Distance of a line.

Return type:

float

static create_output(filename, source, results, types)

Create Fiona output.

Parameters:
  • filename (str) – A name of a geo dataset resource recognized by Fiona package.

  • source (obj) – Resource with format driver and coordinate reference system.

  • results (obj) – Output with key/column names and values.

  • types (dict) – Schema key names.

Returns:

Output with metadata names and values.

Return type:

obj

static create_rtree_index(inshp)

Create rtree bounding index for an input shape.

Parameters:

inshp (obj) – Shapefile with features.

Returns:

rtree bounding box index.

Return type:

obj

static decimal_to_degree(decimal: float)

Convert decimal latitude and longitude to degree format to look up in the National Bridge Inventory.

Parameters:

decimal (float) – Decimal value.

Returns:

An 8-digit int; the first 2 digits are degrees, the next 2 digits are minutes, and the last 4 digits are xx.xx seconds.

Return type:

int
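The DDMMSSss packing described above can be sketched as follows; this is an illustrative implementation, not the library's, and it ignores sign handling.

```python
def decimal_to_dms_int(decimal):
    # Sketch of the DDMMSSss packing described above: 2-digit degrees,
    # 2-digit minutes, and 4 digits for xx.xx seconds (hundredths).
    decimal = abs(decimal)
    degrees = int(decimal)
    minutes_full = (decimal - degrees) * 60.0
    minutes = int(minutes_full)
    seconds_hundredths = int(round((minutes_full - minutes) * 60.0 * 100.0))
    return degrees * 1000000 + minutes * 10000 + seconds_hundredths

packed = decimal_to_dms_int(40.446195)  # 40 degrees, 26 minutes, ~46.30 seconds
```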

static degree_to_decimal(degree: int)

Convert degree-format latitude and longitude to decimal to look up in the National Bridge Inventory.

Parameters:

degree (int) – An 8-digit int; the first 2 digits are degrees, the next 2 digits are minutes, and the last 4 digits are xx.xx seconds.

Returns:

A decimal value.

Return type:

str

static find_nearest_feature(features, query_point)

Finds the first nearest feature/point in the feature set, given a set of features from a shapefile and a query point.

Parameters:
  • features (obj) – A JSON mapping of geometric objects from the inventory.

  • query_point (obj) – A query point.

Returns:

obj: The nearest feature. obj: The nearest distance.

static get_location(feature)

Location of the object.

Parameters:

feature (obj) – A JSON mapping of a geometric object from the inventory.

Note

From the Shapely documentation: The centroid of an object might be one of its points, but this is not guaranteed.

Returns:

A representation of the object’s geometric centroid.

Return type:

point

utils/networkutil

class utils.networkutil.NetworkUtil

Create a line dataset based on a node shapefile and a graph file; the graph should be in csv format.

Parameters:
  • node_filename (string) – A node shapefile file name full path with *.shp file extension.

  • graph_filename (string) – A graph csv file name full path.

  • id_field (string) – A field name for the node shapefile's unique id that matches the information in the graph.

  • out_filename (string) – A line file name full path that will be newly created by the process.

Returns:

Indicates whether the line was created or not.

Return type:

bool

Create a node dataset based on a line shapefile and a graph file; the graph should be in csv format.

Parameters:
  • link_filename (string) – Line shapefile file name full path with *.shp file extension.

  • link_id_field (string) – Line shapefile unique id field.

  • fromnode_field (string) – Field name for fromnode in the line shapefile.

  • tonode_field (string) – Field name for tonode in the line shapefile.

  • out_node_filename (string) – Output node shapefile name with *.shp extension.

  • out_graph_filename (string) – Output graph csv file name with *.csv extension.

Returns:

Indicates whether the shapefile and graph were created or not.

Return type:

bool

static create_network_graph_from_dataframes(df_nodes, df_links, sort='unsorted')

Given a dataframe of nodes and a dataframe of links, assemble a network object.

Parameters:
  • df_nodes (pd.DataFrame) –

  • df_links (pd.DataFrame) –

  • sort

Returns:

A network object assembled from the given nodes and links.

Create network graph from field.

Parameters:
  • link_file (str) – A name of a geo dataset resource recognized by Fiona package.

  • fromnode_fldname (str) – Line feature, from node field name.

  • tonode_fldname (str) – Line feature, to node field name.

  • is_directed (bool, optional (Defaults to False)) – Graph type. True for directed Graph, False for Graph.

Returns:

A graph from field. dict: Coordinates.

Return type:

obj

static extract_network_by_label(labeled_graph, prefix)

Given a network resulting from a labeled merging, extract only one of the networks based on its prefix

Parameters:
  • labeled_graph (obj) – a graph obtained by labeling and merging two networks

  • prefix (str) – label of the network to extract

Returns:

A new graph representing the network extracted using the label.

Return type:

obj
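Extraction by label prefix amounts to filtering edges whose endpoints carry the target prefix, as in this sketch over a plain edge list (the real method operates on graph objects; the `"A_"` labeling scheme is an assumption for illustration).

```python
def extract_edges_by_prefix(edges, prefix):
    # Sketch: after a labeled merge, keep only edges whose endpoints both
    # carry the target network's label prefix.
    return [(u, v) for (u, v) in edges
            if u.startswith(prefix) and v.startswith(prefix)]

merged = [("A_1", "A_2"), ("A_1", "B_1"), ("B_1", "B_2")]
only_a = extract_edges_by_prefix(merged, "A_")  # → [('A_1', 'A_2')]
```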

static merge_labeled_networks(graph_a, graph_b, edges_ab, directed=False)

Merges two networks, each distinguished by a label.

Parameters:
  • graph_a (obj) – labeled network a

  • graph_b (obj) – labeled network b

  • edges_ab (pd.DataFrame) – mapping containing links between network a and network b, column labels should correspond to the labels in each graph

  • directed (bool) – if the network is directed, use an additional column to determine edge direction

Returns:

A new graph that integrates the two networks.

Return type:

obj

static plot_network_graph(graph, coords)

Plot graph.

Parameters:
  • graph (obj) – A nx graph to be drawn.

  • coords (dict) – Position coordinates.

static read_network_graph_from_file(filename, is_directed=False)

Get network graph from filename.

Parameters:
  • filename (str) – A name of a geo dataset resource recognized by Fiona package.

  • is_directed (bool, optional (Defaults to False)) – Graph type. True for directed Graph, False for Graph.

Returns:

A graph from field. dict: Coordinates.

Return type:

obj

static validate_network_node_ids(network_dataset, fromnode_fldname, tonode_fldname, nodeid_fldname)

Check if the node ids in the from and to node fields exist among the real node ids.

Parameters:
  • network_dataset (obj) – Network dataset

  • fromnode_fldname (str) – Line feature, from node field name.

  • tonode_fldname (str) – Line feature, to node field name.

  • nodeid_fldname (str) – Node field id name.

Returns:

Validation of node existence.

Return type:

bool
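The validation reduces to a set-membership check over the link endpoints, sketched here over plain tuples (the real method reads the fields from a network dataset):

```python
def validate_node_ids(links, node_ids):
    # Sketch: every from/to id referenced by a link must exist in the node set.
    known = set(node_ids)
    return all(f in known and t in known for (f, t) in links)

valid = validate_node_ids([(1, 2), (2, 3)], [1, 2, 3])   # → True
broken = validate_node_ids([(1, 9)], [1, 2, 3])          # → False
```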

utils/popdisloutputprocess

class utils.popdisloutputprocess.PopDislOutputProcess(pop_disl_result, pop_disl_result_path=None, filter_name=None, filter_guid=True, vacant_disl=True)

This class converts CSV result outputs of the Population Dislocation analysis to JSON format and shapefiles.

Parameters:
  • pop_disl_result (obj) – IN-CORE dataset for Joplin Population Dislocation (PD) results.

  • pop_disl_result_path (obj) – A fallback for the case that Joplin PD object is not provided. For example a user wants to directly pass in csv files, a path to PD results.

  • filter_name (str) – A string to filter data by name, default empty. Example: filter_name=”Joplin” for Joplin inventory, other is Duquesne etc. Name must be valid.

  • filter_guid (bool) – A flag to filter the data; the default True counts only Joplin buildings.

  • vacant_disl (bool) – A flag to include vacant (Vacant for tenure) dislocation.

get_heatmap_shp(filename='pop-disl-numprec.shp')

Convert and filter population dislocation output to a shapefile that contains only the guid and numprec columns.

Parameters:

filename (str) – Path and name to save shapefile output file in. E.g “heatmap.shp”

Returns:

Full path and filename of the shapefile.

Return type:

str

pd_by_housing(filename_json=None)

Calculate housing results from the output files of the Joplin Population Dislocation analysis using the huestimate column (huestimate = 1 is single family, huestimate > 1 means multi-family house) and convert the results to JSON format:

[
  {"household_characteristics": "Single Family",
   "household_dislocated": 1162, "total_households": 837,
   "%_household_dislocated": 7.3, "population_dislocated": 1162,
   "total_population": 837, "%_population_dislocated"},
  {},
  {"Total", .., .., .., ..}
]

Parameters:

filename_json (str) – Path and name to save json output file in. E.g “pd_housing_count.json”

Returns:

PD total count by housing. A JSON of the hua and population dislocation housing results by category.

Return type:

obj

pd_by_income(filename_json=None)

Calculate income results from the output files of the Joplin Population Dislocation analysis and convert the results to JSON format:

[
  {"household_characteristics": "HH1 (less than $15,000)",
   "household_dislocated": 311, "total_households": 3252,
   "%_household_dislocated": 7.3, "population_dislocated": 311,
   "total_population": 3252, "%_population_dislocated"},
  {"HH2 ($15,000 to $35,000)", .., .., .., ..}, {}, {}, {}, {},
  {"Unknown", .., .., .., ..}
]

Parameters:

filename_json (str) – Path and name to save json output file in. E.g “pd_income_count.json”

Returns:

PD total count by income. A JSON of the hua and population dislocation income results by category.

Return type:

obj

pd_by_race(filename_json=None)

Calculate race results from the output files of the Joplin Population Dislocation analysis and convert the results to JSON format:

[
  {"household_characteristics": "Not Hispanic/White",
   "household_dislocated": 1521, "total_households": 18507,
   "%_household_dislocated": 7.3, "population_dislocated",
   "total_population", "%_population_dislocated"},
  {"household_characteristics": "Not Hispanic/Black", .., ..}, {}, {},
  {"No race Ethnicity Data"}, {"Total"}
]

Parameters:

filename_json (str) – Path and name to save json output file in. E.g “pd_race_count.json”

Returns:

PD total count by race. A JSON of the hua and population dislocation race results by category.

Return type:

obj

pd_by_tenure(filename_json=None)

Calculate tenure results from the output files of the Joplin Population Dislocation analysis and convert the results to JSON format:

[
  {"household_characteristics": "Owner occupied",
   "household_dislocated": 1018, "total_households": 11344,
   "%_household_dislocated": 7.3, "population_dislocated": 1018,
   "total_population": 11344, "%_population_dislocated"},
  {"household_characteristics": "Renter occupied", .., .., .., ..}, {}, {}, {}, {}, {},
  {"total", .., .., .., ..}
]

Parameters:

filename_json (str) – Path and name to save json output file in. E.g “pd_tenure_count.json”

Returns:

PD total count by tenure. A JSON of the hua and population dislocation tenure results by category.

Return type:

obj

pd_total(filename_json=None)

Calculate total results from the output files of the Joplin Population Dislocation analysis and convert the results to JSON format:

{
  "household_dislocated": {
    "dislocated": {"number": 1999, "percentage": 0.085},
    "not_dislocated": {},
    "total": {}
  },
  "population_dislocated": {"dislocated": {}, "not_dislocated": {}, "total": {}}
}

Parameters:

filename_json (str) – Path and name to save json output file in. E.g “pd_total_count.json”

Returns:

PD total count. A JSON of the hua and population dislocation total results by category.

Return type:

obj

services

baseanalysis

class baseanalysis.BaseAnalysis(incore_client)

Superclass that defines the specification for an IN-CORE analysis. Implementations of BaseAnalysis should implement get_spec and return their own specifications.

Parameters:

incore_client (IncoreClient) – Service authentication.

create_hazard_object_from_input_params()

Create hazard object from input parameters.

get_description()

Get the description of an analysis.

get_input_dataset(ds_id)

Get the input dataset for the given id.

get_input_datasets()

Get the dictionary of the input datasets of an analysis.

get_input_hazard(hz_id)

Get the input hazard for the given id.

get_input_hazards()

Get the dictionary of the input hazards of an analysis.

get_name()

Get the analysis name.

get_output_dataset(ds_id)

Get the output dataset for the given id.

get_output_datasets()

Get the dictionary of the output datasets of the analysis.

get_parameter(par_id)

Get the analysis parameter value for the given id.

get_parameters()

Get the dictionary of analysis’ parameters.

get_spec()

Get basic specifications.

Note

get_spec will be called exactly once per instance (during __init__), so subclasses should not rely on dynamic state when building the spec. See the example spec at the bottom of this file.

Returns:

A JSON object of basic specifications of the analysis.

Return type:

obj
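As a sketch of the shape such a spec takes: the key names below mirror common IN-CORE analyses but are illustrative assumptions, not an authoritative schema.

```python
# Illustrative spec of the shape a get_spec implementation returns.
# Key names and types are assumptions modeled on typical IN-CORE analyses.
def get_spec():
    return {
        "name": "example-analysis",
        "description": "Minimal example specification",
        "input_parameters": [
            {"id": "result_name", "required": True,
             "description": "Name of the result dataset", "type": str},
        ],
        "input_datasets": [
            {"id": "buildings", "required": True,
             "description": "Building inventory",
             "type": ["ergo:buildingInventoryVer7"]},
        ],
        "output_datasets": [
            {"id": "result", "parent_type": "buildings",
             "description": "Damage result",
             "type": "ergo:buildingDamageVer6"},
        ],
    }
```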

load_remote_input_dataset(analysis_param_id, remote_id)

Convenience function for loading a remote dataset by id.

Parameters:
  • analysis_param_id (str) – ID of the input Dataset in the specifications.

  • remote_id (str) – ID of the Dataset in the Data service.

run_analysis()

Validates and runs the analysis.

static validate_input_dataset(dataset_spec, dataset)

Match input dataset by type.

Parameters:
  • dataset_spec (obj) – Specifications of datasets.

  • dataset (obj) – Dataset description.

Returns:

Dataset validity, True if valid, False otherwise. Error message.

Return type:

bool, str

static validate_input_hazard(hazard_spec, hazard)

Validate input hazard.

Parameters:
  • hazard_spec (obj) – Specifications of hazard.

  • hazard (obj) – Hazard description.

Returns:

Hazard validity, True if valid, False otherwise. Error message.

Return type:

bool, str

static validate_output_dataset(dataset_spec, dataset)

Match output dataset by type.

Parameters:
  • dataset_spec (obj) – Specifications of datasets.

  • dataset (obj) – Dataset description.

Returns:

Dataset validity, True if valid, False otherwise. Error message.

Return type:

bool, str

validate_parameter(parameter_spec, parameter)

Match parameter by type.

Parameters:
  • parameter_spec (obj) – Specifications of parameters.

  • parameter (obj) – Parameter description.

Returns:

Parameter validity, True if valid, False otherwise. Error message.

Return type:

bool, str

client

class client.Client

Client class for IN-CORE services. It handles the connection to the IN-CORE services server and user authentication.

delete(url: str, timeout=(30, 600), **kwargs)

Delete data on the server.

Parameters:
  • url (str) – Service url.

  • timeout (tuple) – Session timeout.

  • **kwargs – A dictionary of external parameters.

Returns:

HTTP response.

Return type:

obj

get(url: str, params=None, timeout=(30, 600), **kwargs)

Get server connection response.

Parameters:
  • url (str) – Service url.

  • params (obj) – Session parameters.

  • timeout (tuple) – Session timeout.

  • **kwargs – A dictionary of external parameters.

Returns:

HTTP response.

Return type:

obj

post(url: str, data=None, json=None, timeout=(30, 600), **kwargs)

Post data on the server.

Parameters:
  • url (str) – Service url.

  • data (obj) – Data to be posted on the server.

  • json (obj) – Description of the data, metadata json.

  • timeout (tuple) – Session timeout.

  • **kwargs – A dictionary of external parameters.

Returns:

HTTP response.

Return type:

obj

put(url: str, data=None, timeout=(30, 600), **kwargs)

Put data on the server.

Parameters:
  • url (str) – Service url.

  • data (obj) – Data to be put on the server.

  • timeout (tuple) – Session timeout.

  • **kwargs – A dictionary of external parameters.

Returns:

HTTP response.

Return type:

obj
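The timeout=(30, 600) defaults above can be read, assuming the common requests-style convention, as a (connect timeout, read timeout) pair in seconds. A tiny sketch under that assumption (the helper name is illustrative):

```python
# Sketch, assuming the requests-style (connect, read) timeout convention
# matching the default timeout=(30, 600) used by the Client methods above.
def make_timeout(connect: int = 30, read: int = 600) -> tuple:
    """Build the (connect_timeout, read_timeout) tuple, in seconds."""
    return (connect, read)

# e.g. client.get(url, timeout=make_timeout(read=120)) to allow a shorter read
```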

class client.IncoreClient(service_url: str = None, token_file_name: str = None, offline: bool = False)

IN-CORE service client class. It contains token and service root url.

Parameters:
  • service_url (str) – Service url.

  • token_file_name (str) – Path to file containing the authorization token.

  • offline (bool) – Flag to indicate offline mode or not.

clear_cache()

Clear the data cache for a specific repository or the entire cache.

Returns: None

delete(url: str, timeout=(30, 600), **kwargs)

Delete data on the server.

Parameters:
  • url (str) – Service url.

  • timeout (tuple) – Session timeout.

  • **kwargs – A dictionary of external parameters.

Returns:

HTTP response.

Return type:

obj

get(url: str, params=None, timeout=(30, 600), **kwargs)

Get server connection response.

Parameters:
  • url (str) – Service url.

  • params (obj) – Session parameters.

  • timeout (tuple[int,int]) – Session timeout.

  • **kwargs – A dictionary of external parameters.

Returns:

HTTP response.

Return type:

obj

is_token_expired(token)

Check if the token has expired.

Returns:

True if the token has expired, False otherwise

post(url: str, data=None, json=None, timeout=(30, 600), **kwargs)

Post data on the server.

Parameters:
  • url (str) – Service url.

  • data (obj) – Data to be posted on the server.

  • json (obj) – Description of the data, metadata json.

  • timeout (tuple[int,int]) – Session timeout.

  • **kwargs – A dictionary of external parameters.

Returns:

HTTP response.

Return type:

obj

put(url: str, data=None, timeout=(30, 600), **kwargs)

Put data on the server.

Parameters:
  • url (str) – Service url.

  • data (obj) – Data to be put on the server.

  • timeout (tuple) – Session timeout.

  • **kwargs – A dictionary of external parameters.

Returns:

HTTP response.

Return type:

obj

retrieve_token_from_file()

Attempt to retrieve the authorization token from a local file, if it exists.

Returns:

A dictionary containing the authorization in the format “bearer access_token” if the token file exists, None otherwise.

store_authorization_in_file(authorization: str)

Store the access token in local file. If the file does not exist, this function creates it.

Parameters:

authorization (str) – An authorization in the format “bearer access_token”.
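The round trip described by store_authorization_in_file and retrieve_token_from_file can be sketched as below. The file path and the returned dict shape are illustrative assumptions, not the client's actual cache layout.

```python
# Hedged sketch of the token round trip: the token is stored as a
# "bearer <access_token>" string in a local file. Path and returned dict
# shape are assumptions for illustration.
import tempfile
from pathlib import Path

token_file = Path(tempfile.gettempdir()) / "incore_token_example.txt"  # illustrative path

def store_authorization_in_file(authorization: str) -> None:
    # Creates the file if it does not exist, overwrites it otherwise.
    token_file.write_text(authorization)

def retrieve_token_from_file():
    # None when no token has been cached yet.
    if not token_file.exists():
        return None
    return {"Authorization": token_file.read_text().strip()}
```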

class client.InsecureIncoreClient(service_url: str = None, username: str = None)

IN-CORE service client class that bypasses Ambassador auth. It contains token and service root url.

Parameters:
  • service_url (str) – Service url.

  • username (str) – Username string.

dataservice

class dataservice.DataService(client: IncoreClient)

Data service client.

Parameters:

client (IncoreClient) – Service authentication.

unzip_dataset(local_filename: str)

Unzip the dataset zip file.

Parameters:

local_filename (str) – Path to the dataset zip file.

Returns:

Folder name with unzipped files.

Return type:

str
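The unzip step above amounts to extracting the archive next to itself and returning the folder name. A self-contained sketch under that assumption (the service's exact layout may differ):

```python
# Hedged sketch of unzip_dataset: extract next to the zip file and return the
# folder name, mirroring the documented return value. The actual service
# layout is an assumption.
import os
import tempfile
import zipfile

def unzip_dataset(local_filename: str) -> str:
    foldername, _ = os.path.splitext(local_filename)
    with zipfile.ZipFile(local_filename, "r") as zf:
        zf.extractall(foldername)
    return foldername

# Round-trip demo with a throwaway zip in a temp directory.
workdir = tempfile.mkdtemp()
zip_path = os.path.join(workdir, "dataset.zip")
with zipfile.ZipFile(zip_path, "w") as zf:
    zf.writestr("inventory.csv", "guid\n1\n")
extracted = unzip_dataset(zip_path)
```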

dataset

class dataset.Dataset(metadata)

Dataset.

Parameters:

metadata (dict) – Dataset metadata.

cache_files(data_service: DataService)

Download the dataset file from the Data service and cache it locally.

Parameters:

data_service (obj) – Data service.

Returns:

A path to the local file.

Return type:

str

delete_temp_file()

Delete temporary file.

delete_temp_folder()

Delete temporary folder.

classmethod from_csv_data(result_data, name, data_type)

Get Dataset from CSV data.

Parameters:
  • result_data (obj) – Result data and metadata.

  • name (str) – A CSV filename.

  • data_type (str) – Incore data type, e.g. incore:xxxx or ergo:xxxx

Returns:

Dataset from file.

Return type:

obj

classmethod from_data_service(id: str, data_service: DataService)

Get Dataset from Data service, get metadata as well.

Parameters:
  • id (str) – ID of the Dataset.

  • data_service (obj) – Data service.

Returns:

Dataset from Data service.

Return type:

obj

classmethod from_dataframe(dataframe, name, data_type, index=False)

Get Dataset from a Pandas DataFrame.

Parameters:
  • dataframe (obj) – Pandas DataFrame.

  • name (str) – filename.

  • data_type (str) – Incore data type, e.g. incore:xxxx or ergo:xxxx

  • index (bool) – Store the index column.

Returns:

Dataset from file.

Return type:

obj

classmethod from_file(file_path, data_type)

Get Dataset from the file.

Parameters:
  • file_path (str) – File path.

  • data_type (str) – Data type.

Returns:

Dataset from file.

Return type:

obj

classmethod from_json_data(result_data, name, data_type)

Get Dataset from JSON data.

Parameters:
  • result_data (obj) – Result data and metadata.

  • name (str) – A JSON filename.

  • data_type (str) – Incore data type, e.g. incore:xxxx or ergo:xxxx

Returns:

Dataset from file.

Return type:

obj

classmethod from_json_str(json_str, data_service: DataService = None, file_path=None)

Get Dataset from json string.

Parameters:
  • json_str (str) – JSON of the Dataset.

  • data_service (obj) – Data Service class.

  • file_path (str) – File path.

Returns:

Dataset from JSON.

Return type:

obj

get_csv_reader()

Utility method for reading different standard file formats: csv reader.

Returns:

CSV reader.

Return type:

obj

get_csv_reader_std()

Utility method for reading different standard file formats: csv reader.

Returns:

CSV reader.

Return type:

obj

get_dataframe_from_csv(low_memory=True, delimiter=None)

Utility method for reading different standard file formats: Pandas DataFrame from csv.

Parameters:
  • low_memory (bool) – A flag to suppress the dtype warning. Pandas guesses the dtype for each column if it is not specified, which is very memory demanding.

  • delimiter (str) – Field delimiter for the CSV file.

Returns:

Pandas DataFrame.

Return type:

obj

get_dataframe_from_shapefile()

Utility method for reading different standard file formats: GeoDataFrame from shapefile.

Returns:

GeoPandas GeoDataFrame.

Return type:

obj

get_file_path(type='csv')

Utility method for reading different standard file formats: file path.

Parameters:

type (str) – A file type.

Returns:

File name and path.

Return type:

str

get_inventory_reader()

Utility method for reading different standard file formats: Set of inventory.

Returns:

A Fiona object.

Return type:

obj

get_json_reader()

Utility method for reading different standard file formats: json reader.

Returns:

A json model data.

Return type:

obj

get_raster_value(x, y)

Utility method for reading different standard file formats: raster value.

Parameters:
  • x (float) – X coordinate.

  • y (float) – Y coordinate.

Returns:

Hazard values.

Return type:

numpy.array

class dataset.DamageRatioDataset(filename)

For backwards compatibility until analyses are updated.

Parameters:

filename (str) – CSV file with damage ratios.

class dataset.InventoryDataset(filename)

For backwards compatibility until analyses are updated.

Parameters:

filename (str) – file with GIS layers.

dfr3service

class dfr3service.MappingSubject
class dfr3service.MappingRequest
class dfr3service.MappingResponse(sets: Dict[str, any] = {}, mapping: Dict[str, str] = {})
class dfr3service.Dfr3Service(client: IncoreClient)

DFR3 service client.

Parameters:

client (IncoreClient) – Service authentication.

batch_get_dfr3_set(dfr3_id_lists: list)

Retrieve dfr3 sets from the services by their ids and instantiate DFR3CurveSet objects in bulk.

Parameters:

dfr3_id_lists (list) – A list of ids.

Returns:

A list of dfr3curve objects.

Return type:

list

static extract_inventory_class(rules)

This method will extract the inventory class name from a mapping rule. E.g. PWT2/PPP1

Parameters:
rules (dict) – e.g. { “AND”: [“java.lang.String utilfcltyc EQUALS ‘PWT2’”, “java.lang.String utilfcltyc EQUALS ‘PPP1’”] }

Returns:

Extracted inventory class name; “/” stands for “or” and “+” stands for “and”.

Return type:

inventory_class (str)
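The extraction described above can be illustrated with a small reimplementation. This is a sketch, not the library's actual code: it pulls the quoted class names out of each rule string and joins them with “+” for AND and “/” for OR.

```python
# Hedged sketch of the rule extraction; names and parsing strategy are
# illustrative, not pyincore's implementation.
import re

def extract_inventory_class_sketch(rules: dict) -> str:
    joiner = {"AND": "+", "OR": "/"}
    for op, clauses in rules.items():
        names = []
        for clause in clauses:
            if isinstance(clause, dict):
                # nested rule groups recurse
                names.append(extract_inventory_class_sketch(clause))
            else:
                # grab the quoted class name, e.g. 'PWT2'
                m = re.search(r"'([^']+)'", clause)
                if m:
                    names.append(m.group(1))
        return joiner.get(op, "+").join(names)
    return ""
```

For example, an AND rule over 'PWT2' and 'PPP1' yields "PWT2+PPP1", while the same clauses under OR yield "PWT2/PPP1".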

static extract_inventory_class_legacy(rules)

This method will extract the inventory class name from a mapping rule. E.g. PWT2/PPP1

Parameters:
rules (list) – Nested list of rules; the outer list applies an “OR” rule and each inner list applies an “AND” rule.

Returns:

Extracted inventory class name; “/” stands for “or” and “+” stands for “and”.

Return type:

inventory_class (str)

match_inventory(mapping: MappingSet, inventories: list, entry_key: str | None = None)

Match inventories to DFR3 sets. Instead of returning dfr3_sets in plain JSON, the dfr3 curves are represented as FragilityCurveSet objects.

Parameters:
  • mapping (obj) – MappingSet Object that has the rules and entries.

  • inventories (list) – A list of inventories. Each item is a fiona object

  • entry_key (None, str) – Mapping Entry Key e.g. Non-retrofit Fragility ID Code, retrofit_method_1, etc.

Returns:

A dictionary of {“inventory id”: FragilityCurveSet object}.

Return type:

dict

match_list_of_dicts(mapping: MappingSet, inventories: list, entry_key: str | None = None)

This method is the same as match_inventory, except that it takes a simple list of dictionaries containing the items to be mapped by the rules; match_inventory takes a list of fiona objects.

Parameters:
  • mapping (obj) – MappingSet Object that has the rules and entries.

  • inventories (list) – A list of inventories. Each item of the list is a simple dictionary

  • entry_key (None, str) – Mapping Entry Key e.g. Non-retrofit Fragility ID Code, retrofit_method_1, etc.

Returns:

A dictionary of {“inventory id”: FragilityCurveSet object}.

Return type:

dict

fragilityservice

class fragilityservice.FragilityService(client: IncoreClient)

Fragility service client.

Parameters:

client (IncoreClient) – Service authentication.

hazardservice

class hazardservice.HazardService(client: IncoreClient)

Hazard service client.

Parameters:

client (IncoreClient) – Service authentication.

networkdata

class networkdata.NetworkData(network_type: str, file_path: str)

Network data from Fiona package. Fiona can read and write data using GIS formats.

Parameters:
  • network_type (str) – Network type.

  • file_path (str) – Path to a file with GIS layers.

get_inventory_reader()

Get the inventory reader (a Fiona object).

networkdataset

repairservice

class repairservice.RepairService(client: IncoreClient)

Repair service client.

Parameters:

client (IncoreClient) – Service authentication.

restorationservice

class restorationservice.RestorationService(client: IncoreClient)

Restoration service client.

Parameters:

client (IncoreClient) – Service authentication.

spaceservice

class spaceservice.SpaceService(client: IncoreClient)

Space service client.

Parameters:

client (IncoreClient) – Service authentication.