*MERGER:*
*"Multi-hazard risk assessment in ExploRation/exploitation of GEoResources"*
*User Guide*
Version 1.0 (May 2019)
<ac:structured-macro ac:name="anchor" ac:schema-version="1" ac:macro-id="1d73d1e1-62ae-4b91-b2b5-f691df31f2d5"><ac:parameter ac:name="">BMintro</ac:parameter></ac:structured-macro> *1  Introduction*
The implemented method for multi-hazard risk (MHR) assessment relies on the quantification of the likelihood and related consequences of identified risk pathway scenarios (e.g. Fig. 1a) structured using a _bow-tie_ (BT, Fig. 1b) approach (e.g. Bedford and Cooke, 2001; Rausand and Høyland, 2004). The BT is widely used in reliability analysis and has been proposed for assessing risks in a number of geo-resource development applications, as for example in offshore oil and gas development (e.g. Khakzad et al, 2013, 2014; Yang et al, 2013) and for the mineral industry in general (e.g. Iannacchione, 2008).
!worddav8600dbb206c862188d49113301c2a180.png|height=261,width=558!  
*Figure<ac:structured-macro ac:name="anchor" ac:schema-version="1" ac:macro-id="fdc441ac-a5cf-48fe-96c7-91839a483425"><ac:parameter ac:name="">BMfig_CausalBT</ac:parameter></ac:structured-macro> 1:* (a) generic causal diagram used for qualitative structuring of a set of scenarios; (b) bow-tie structure for a determined scenario of interest (modified from Garcia-Aristizabal et al, 2017); (c) fault tree component of the bow-tie structure; (d) critical event linking the fault tree and the event tree; (e) event tree component of the bow-tie structure; (f) Bayesian inference of the parameters of the probabilistic models used to define basic events in fault trees and nodes in event trees.
The BT analysis, in particular, provides an adequate structure to perform detailed assessments of the probability of occurrence of events or chains of events in a given accident scenario. It is targeted to assess the causes and effects of specific critical events; it is composed of a _fault tree_ (FT, Fig. 1c), which is set by identifying the possible events causing the critical or _top event_ (TE, Fig. 1d), and an _event tree_ (ET, Fig. 1e), which is set by identifying possible consequences associated with the occurrence of the defined TE (e.g. Rausand and Høyland, 2004). Therefore, in the BT structure, the top event of the FT constitutes the initiating event for an ET analysis. 
The FT is a graphical representation of various combinations of basic events that lead to the occurrence of the undesirable critical situation defined as the TE (e.g. Bedford and Cooke, 2001). Starting with the TE, all possible ways for this event to occur are systematically deduced until the required level of detail is reached. Events whose causes have been further developed are _intermediate events{_}, and events that terminate branches are _basic events_ (BE). The FT implementation is based on three assumptions: (1) events are binary events (do occur/ don't occur); (2) basic events are statistically independent; and (3) relationships between events are represented by means of logical Boolean gates (mainly AND, OR). The probability of occurrence of the TE is calculated from the occurrence probabilities of the BEs. 
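To illustrate assumption (3), the following minimal sketch (in Python; not part of MERGER itself) shows how AND and OR gates combine the occurrence probabilities of independent binary events; the BE probabilities and the gate layout are hypothetical placeholders.
{code:language=python}
# Sketch: combining basic-event (BE) probabilities through Boolean gates,
# under the three FT assumptions: binary events, statistical independence,
# and AND/OR gate relationships.
from functools import reduce

def p_and(*probs):
    """AND gate: all input events must occur (independent events)."""
    return reduce(lambda a, b: a * b, probs, 1.0)

def p_or(*probs):
    """OR gate: at least one input event occurs (independent events)."""
    return 1.0 - reduce(lambda a, b: a * (1.0 - b), probs, 1.0)

# Hypothetical example: TE = BE1 AND (BE2 OR BE3), placeholder probabilities
p_te = p_and(0.01, p_or(0.002, 0.03))
print(f"P(TE) = {p_te:.6f}")
{code}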
The ET is an inductive analytic diagram in which an event is analysed using a logical series of subsequent events or consequences. The overall goal of the ET analysis in this context is to determine the probability of the possible consequences resulting from the occurrence of a determined initiating event. Moreover, most industrial systems include various barriers and safety functions that have been installed to stop the development of accidental events or to reduce their consequences; these elements should be considered in the consequence analysis. 
The quantitative assessment of the scenarios implemented in a BT structure is based on the probabilities assigned to the basic events of the FT and to the nodes of the ET. In this work, the BT logic structure is coupled with a wide range of probabilistic tools that are flexible enough to make it possible to consider in the analyses different typologies of phenomena. Furthermore, since the risk scenarios associated with geo-resource development activities are likely to include events closely related to geological, hydrogeological, and geomechanical processes in underground rock formations and with limited access to direct measurements, alternative modelling mechanisms for retrieving reliable data need to be considered.
This document is a step-by-step guide for using the first release of the MERGER tool as implemented in the IS-EPOS platform. At the current stage, the released version of MERGER includes only the fault tree solver (MERGER-FT); for this reason, this guide focuses only on this part of the system. The full MERGER system (which includes the MERGER-ET solver and therefore the tool for solving the full bow-tie structure) will be released soon. A detailed technical description of the model implemented in MERGER can be found in Garcia-Aristizabal et al (2019).
<ac:structured-macro ac:name="anchor" ac:schema-version="1" ac:macro-id="85bc94da-6970-4fab-8e80-0ad4459fd179"><ac:parameter ac:name="">BMApx_EPOS</ac:parameter></ac:structured-macro>{*}2  Step-by-step guide*
In this section, we briefly describe how to use the application according to its current state of development. This guide will be updated as new releases and updates of the system become available. We highlight in <span style="color: #ff0000"><strong>red</strong></span> the descriptions related to functionalities not yet available in the IS-EPOS platform. 
\\
In the IS-EPOS Platform, MERGER is available to be used from the _Applications_ menu (Fig. 2). To use any application from the platform it has to be first added to the user's workspace (using the _Add to workspace_ button; see the quick start guide IS-EPOS, 2018). Once the MERGER application is available in the _Workspace{_}, the user can open the application, which at this point is ready to be used. The loaded application looks as shown in Fig. 3. The first step of an assessment is to provide the input data required for quantitative analyses, which in the most general definition can be structured as: (a) the definition of the TE of interest; (b) _input/analysis of the Fault Tree{_}, and  <span style="color: #ff0000">(c) <em>input/analysis of the Event Tree</em></span>. In the current release, only the FT component of the BT structure of MERGER has been integrated in the IS-EPOS platform. For this reason here we focus on the description of the interface for the construction and analysis of the FT component. 
\\
!worddav9eadc67ab501abf666d8681cc2c84f6b.png|height=353,width=487! 
*Figure<ac:structured-macro ac:name="anchor" ac:schema-version="1" ac:macro-id="e7a98a1f-725c-4746-974f-173a85cb0aac"><ac:parameter ac:name="">BMfig_AppMenu</ac:parameter></ac:structured-macro> 2{*}: View of the Applications list within the IS-EPOS Platform
\\
!worddavb5f6e27b9fe80b85fe54ee276e76288f.png|height=210,width=536! 
*Figure<ac:structured-macro ac:name="anchor" ac:schema-version="1" ac:macro-id="77f9dab6-33ad-4196-bb54-45c220ca20eb"><ac:parameter ac:name="">BMfig_FTinput</ac:parameter></ac:structured-macro> 3{*}: View of the Fault tree input/analysis of MERGER (MERGER-FT) within IS-EPOS Platform after loading the tool in the workspace.
 <span style="color: #0000ff"><strong>Note:</strong> At the bottom of the main page of MERGER-FT, there are two buttons: <em>Save</em> and <em>Run</em>. Use the <em>Save</em> button while loading your data to avoid losing your data if you are accidentally logged out of the platform (e.g., after a long period of inactivity). On the other hand, use the <em>Run</em> button to execute the software and evaluate the fault tree once you have finished entering and checking the input data.</span>
To illustrate the data input process in MERGER-FT, we will work out a very simple example related to a scenario of possible groundwater pollution caused by a surface spill resulting from the failure of a storage unit containing flowback fluids (in a site with hydraulic fracturing operations). This example is part of an analysis presented by Garcia-Aristizabal et al (2019). Fig. 4 shows a simple FT for this example, which considers three BEs defined as shown in Table 1. It is assumed that an isolation (impermeable) membrane has been installed in the site to protect groundwater from on-site surface spills. 
!worddav708ae6861138f2f329a108544f5a0c13.png|height=334,width=472!  
*Figure<ac:structured-macro ac:name="anchor" ac:schema-version="1" ac:macro-id="23e0a7ab-2f82-4144-867f-70e05f8af678"><ac:parameter ac:name="">BMfig_FT_StorageFailure</ac:parameter></ac:structured-macro> 4{*}: Fault tree for assessing the probability of HazMat fluids reaching a drinking groundwater layer associated with the failure of a storage unit containing flowback fluids 
\\
*Table<ac:structured-macro ac:name="anchor" ac:schema-version="1" ac:macro-id="a1f1a89f-c1f8-4253-9399-8a0f56622d1c"><ac:parameter ac:name="">BMtab_StorageFailScnModelPars</ac:parameter></ac:structured-macro> 1{*}: Data used to set the prior and the likelihood distributions of the BEs defined for the storage failure scenario (Poisson and binomial models)
!worddavbd33fbe70e9a9bebd0bf375443eab22a.png|height=160,width=567!
*2.1  Preparing the data for input*
For reference, here we briefly describe how to prepare the required input data for this simple example. More details regarding the procedure, models, and required model parameters can be found in Section 3 and in Garcia-Aristizabal et al (2019).
\\
*2.1.1  HazMat leakage caused by storage failure (B04)*
The HazMat storage failure basic event (B04) is set by assuming that flowback fluids are stored in tanks whose failure rates are modelled using a homogeneous Poisson process. It is worth noting that in a MHR analysis this event can be further developed to consider the failure rates associated with the occurrence of specific events, such as material fatigue, extreme winds, or earthquake ground motions (see, e.g. B04x in Fig. 4). For the scope of the example presented in this guide, we set a single basic event (B04) aggregating all the failures, regardless of the cause. 
Considering the Bayesian approach implemented in MERGER for modelling Poisson processes, it is necessary to set a _prior_ state of knowledge (using, e.g., generic data) and a likelihood function to encode the available site-specific data. To set the prior distribution for this BE, we use the tank failure rate data published by Gould and Glossop (2000). To calculate the probability of HazMat storage failure, we consider that a number _N_ of storage containers on the site contain flowback fluids. 
According to the failure rates reported by Gould and Glossop (2000), catastrophic failures occur at a given rate per vessel per year. Therefore, for _N_ containers operating on-site, each of which can fail independently, the prior mean rate _E_(λ) can be set as _N_ times the per-vessel failure rate.
The defined values are reported in Table 1. SD(λ) is the standard deviation assumed for the prior failure rate. To set the likelihood function, we assume that no fluid-container failures have been recorded in the analysed case (i.e. _r_ = 0) over the observation period _t_ (see Table 1).
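For illustration, one common way to turn _E_(λ) and SD(λ) into gamma prior parameters is simple moment matching; the sketch below uses hypothetical placeholder values (the actual values are those of Table 1, and the parameterisation used internally by MERGER is detailed in Garcia-Aristizabal et al, 2019).
{code:language=python}
# Sketch: gamma prior (shape a, rate b) from the analyst's best prior
# estimate E(lam) and its standard deviation SD(lam), by moment matching:
# mean = a/b, variance = a/b**2.
def gamma_prior_from_moments(mean, sd):
    a = (mean / sd) ** 2   # shape
    b = mean / sd ** 2     # rate
    return a, b

# Hypothetical placeholders (the example's actual values are in Table 1):
rate_per_vessel = 1.0e-5   # placeholder, per vessel/year
n_containers = 5           # placeholder N
prior_mean = n_containers * rate_per_vessel   # E(lam) = N * per-vessel rate
prior_sd = prior_mean                          # placeholder uncertainty
print(gamma_prior_from_moments(prior_mean, prior_sd))
{code}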
*2.1.2  Failure of the isolation membrane (B05)*
Regarding the basic event B05, the failure of the isolation membrane is also modelled as a homogeneous Poisson process. To our knowledge, no data regarding the failure rates of a plastic membrane installed for site isolation in this kind of application is available in the literature; therefore, for the sake of the example presented in this guide, the prior distribution is set by arbitrarily assuming a generic failure rate value, as shown in Table 1. To set the likelihood function, it is assumed that no membrane failures have been detected during the time of project operations (that is, _r_ = 0 over the operation period _t_; see Table 1).
*2.1.3  Leaked fluids percolate reaching groundwater (B06)*
The B06 basic event is implemented using the binomial model. In this case, the prior information can be set using, for example, integrated assessment modelling (IAM) to assess the probability that fluids from a surface spill can flow through a porous medium and reach the groundwater level. The definition of such an IAM is out of the scope of this guide; therefore, for the sake of this example, this value is arbitrarily assumed (see Table 1). Regarding the likelihood function, the data for this case study indicate that no leaks have occurred during the operations; therefore, the likelihood function is set by defining a zero number of leaks reaching groundwater (_r_ = 0) out of zero HazMat leaks caused by tank storage failures during operations (_n_ = 0; see Table 1). 
*2.2  Loading the data in MERGER-FT*
Once the input data is ready, both the FT and the data for setting the basic events can be loaded into MERGER-FT. This process follows a top-down approach: first, we define the _Top Event_ of the FT and then, layer by layer, we create all the intermediate events until reaching the basic events. To add an event, just click the _ADD EVENT_ button. A box opens to input the data. Selecting the _Type_ of event, you can choose between setting an _intermediate event_ or a _Basic Event_ of a given class (i.e., Homogeneous Poisson, Binomial, Weibull). Fig. 5 shows an example of setting the B04 basic event (HazMat storage failure) using the data in Table 1, which is modelled as a Poisson process. Fig. 6 shows the input box when creating an _intermediate event{_}. In this case, the system expects the input of the events linked to define that specific intermediate event. In this example, as can be seen in the FT diagram shown in Fig. 4, the created intermediate event results from linking two basic events (i.e., B05 and B06) with an AND gate. Once the events defining a given level of the FT are created, it is possible to define the _gate_ linking them (e.g., AND, OR). 
Once all the basic events defined for this example have been created, and the gates linking the basic and/or intermediate events have been defined, the FT is ready to be assessed. The resulting FT for the worked example is presented in Fig. 7.
The last parameter to be set is the _Number of Iterations{_}; this is the number of times that the full FT will be assessed by sampling the distributions characterizing the BEs. In this example we set it to 200 (see Fig. 7). 
Finally, in the MERGER-FT main screen there is also a check box called _Time-dependent calculations{_}. Leave this box unchecked unless you are performing time-dependent calculations (e.g., by defining a Weibull process to describe, for example, wearing or ageing processes). In that case, once you activate this check box, the following two additional parameters need to be defined: 
• _Mission time (years){_}: the time window for which time-dependent calculations are computed (e.g., 20 years) 
• _Number of time slices{_}: outputs of time-dependent calculations are provided in two ways: (a) histograms at different times, according to the number of time points set in this parameter (e.g., setting this parameter to 5 means that results are provided for 5 time intervals between the current time and the mission time); (b) a plot of the evolution with time of the probability (or frequency) of each intermediate event as well as of the top event. 
 \\
!worddavc1c654d7c7f1c4b671e5d9a1e8b46902.png|height=242,width=557!
*Figure<ac:structured-macro ac:name="anchor" ac:schema-version="1" ac:macro-id="ea776b69-6cd2-44c4-bbf7-c93be35b38b7"><ac:parameter ac:name="">BMfig_FTinput2</ac:parameter></ac:structured-macro> 5{*}: Data input - Homogeneous Poisson process.
 \\
!worddav36a7cb1812384da2465976a34e07b7f3.png|height=242,width=554!
*Figure<ac:structured-macro ac:name="anchor" ac:schema-version="1" ac:macro-id="a808cc40-479e-407f-a0b7-6990b66d5d36"><ac:parameter ac:name="">BMfig_FTinput3</ac:parameter></ac:structured-macro> 6{*}: Data input - creating an intermediate event.
 \\
!worddav44401959a51fcd5b3f4d3d367c6140c2.png|height=242,width=554!
*Figure<ac:structured-macro ac:name="anchor" ac:schema-version="1" ac:macro-id="df50870e-8d87-4c8b-a448-fd9353967015"><ac:parameter ac:name="">BMfig_FTinput4</ac:parameter></ac:structured-macro> 7{*}: Data input - The fault tree shown in Fig. 4, as created in MERGER-FT.
 \\
!worddav10657d6a70420a4864f73fcfbdf1c514.png|height=242,width=554!
*Figure<ac:structured-macro ac:name="anchor" ac:schema-version="1" ac:macro-id="5d9c2340-5675-4ff1-baa8-c4df1a05bb3e"><ac:parameter ac:name="">BMfig_FTinput5</ac:parameter></ac:structured-macro> 8{*}: Data input - Additional input data required when performing time-dependent calculations.
After specifying the FT inputs and running the application (the _Run_ button), the MERGER software is executed on distributed computing resources (Fig. 9). The computation itself may take some time, depending on the complexity of the fault tree and on the number of iterations supplied as one of the input parameters. The application run is asynchronous; therefore, the results will be saved even if the user is not logged in to the IS-EPOS Platform. This is a very important feature, especially when analysing large, complex problems: in the case of computationally expensive analyses, the user can launch the calculations, log off the system, and retrieve the results later by logging in to the platform again.
After the computation is finished, the results of the application are saved to the user's _Workspace_ (see, e.g. Fig. 10) and are available to be displayed or downloaded. Clicking on a figure shows the corresponding plot in the workspace (see, e.g. Fig. 11). A log file is also written in the workspace (MERGER log file); this file contains a summary of the input data as well as summary statistics of the probabilities (or frequencies) of both intermediate and top events (see, e.g. Fig. 12). 
!worddavf58230efe762a0eda2db6af55240bfbc.png|height=242,width=554! 
*Figure<ac:structured-macro ac:name="anchor" ac:schema-version="1" ac:macro-id="e0abddac-33cf-4de8-8100-b46bc3c4bb7c"><ac:parameter ac:name="">BMfig_FTinput6</ac:parameter></ac:structured-macro> 9:* MERGER-FT running. While the system is running, the text on the green button changes from _Run_ to _Abort{_}. Wait until the results are obtained and made available on the page.
!worddavcc74cfd4bc8341faf9ccb0419eb66342.png|height=331,width=554! 
*Figure<ac:structured-macro ac:name="anchor" ac:schema-version="1" ac:macro-id="37b49e02-58f4-43e4-99c2-32f1bcc914a5"><ac:parameter ac:name="">BMfig_MergerOutput1</ac:parameter></ac:structured-macro> 10{*}: Screenshot of the MERGER-FT output data: List of created files
\\
!worddavd156ac0a8c05c7e8a272a0f919ec2d37.png|height=360,width=554! 
*Figure<ac:structured-macro ac:name="anchor" ac:schema-version="1" ac:macro-id="daf7a392-d262-4de4-8f37-c3f0c372bd43"><ac:parameter ac:name="">BMfig_MergerOutput2</ac:parameter></ac:structured-macro> 11{*}: Screenshot of the MERGER-FT output data: Plots generated in vector format (EPS) and as flat images (PNG).
!worddavdd2c55ff618c896ad5618e3ecc329e8d.png|height=242,width=557! 
*Figure<ac:structured-macro ac:name="anchor" ac:schema-version="1" ac:macro-id="b12b8f10-bef5-4b4a-b650-3737a8d29a14"><ac:parameter ac:name="">BMfig_MergerOutput3</ac:parameter></ac:structured-macro> 12{*}: MERGER log file: A log file, containing a summary of input setting and results, is saved as a text file.
\\
<ac:structured-macro ac:name="anchor" ac:schema-version="1" ac:macro-id="853b7f25-7a73-4d36-b110-e92ef25ffa92"><ac:parameter ac:name="">BMsec_method</ac:parameter></ac:structured-macro>{*}3  MERGER in a nutshell*
In this section we briefly present the probabilistic model implemented in MERGER. A more detailed description of the model and the implemented system can be found in Garcia-Aristizabal et al (2019).
*3.1  The MERGER-FT in a nutshell*
The quantitative assessment of the scenarios implemented in a BT structure is based on the probabilistic models assigned to the basic events of the FT and to the nodes of the ET. In this section we focus on the data input process for the fault tree analysis (hereinafter called MERGER-FT).
The MHR assessment approach in MERGER considers the following five classes of probabilistic models for implementing the stochastic characteristics of FT's basic events: 
• Homogeneous Poisson process; 
• Binomial model; 
• Weibull model; 
• Static physical reliability models; 
• Dynamic physical reliability models. 
The current version implemented in the IS-EPOS platform includes the first three probabilistic models; the last two will be included in a future release. The Homogeneous Poisson process and the Binomial model were implemented using Bayesian data analysis techniques. In such cases, a _prior_ state of knowledge can be set using generic information such as, for example, data from similar cases, the use of integrated assessment modelling, and/or the elicitation of experts. The probabilities obtained from the initial generic data can then be updated using Bayes' theorem (through a likelihood function) as new site-specific data become available. 
In this section, we briefly indicate the main features of the implemented models, as well as the input/output parameters required for defining a given BE according to these models. A detailed description of the mathematical background is presented in Garcia-Aristizabal et al (2019).
*3.1.1  Homogeneous Poisson process (HPP)*
A constant event rate implies that events are generated by a Poisson process. In this case, the inference problem is to estimate the rate of event occurrence (λ) per time unit. For simplicity, we adopt the conjugate pair Poisson likelihood / gamma prior (e.g. Gelman et al, 1995), which is one of the most frequent models used in risk assessment applications (e.g. Siu and Kelly, 1998). A prior distribution for λ can be developed from other generic data (as, e.g. data from similar cases or components, or from expert opinion elicitation). 
The prior state of knowledge can be defined using the analyst's best prior estimate of the rate \[which is set as the _mean_ prior value, _E{_}(λ)\] and a standard deviation, SD(λ), as a measure of the uncertainty in the prior best value. These two estimates are then used for setting the parameters of the gamma prior distribution (for details see Garcia-Aristizabal et al, 2019). The (Poisson) likelihood function is set for encoding the site-specific data which, for a HPP, is basically the number of events _r_ that occurred in a time interval Δ{_}t{_}=\[0,{_}t{_}\]. Table 2 summarises the data required for defining a basic event as a homogeneous Poisson process. 
\\
Once the posterior distribution for λ has been calculated, samples of λ are drawn from the posterior distribution and used to calculate the probability of at least one event occurring in a determined period of time of interest.
\\
*Table<ac:structured-macro ac:name="anchor" ac:schema-version="1" ac:macro-id="ca9060bc-c3ef-41c7-ac7d-d486ff393e77"><ac:parameter ac:name="">BMtab_hpp</ac:parameter></ac:structured-macro> 2{*}: Parameters required for setting a basic event of class _HPP_
!worddav2553efbf56356e9f1a145cebaeeab415.png|height=121,width=552!
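For illustration, a minimal sketch of the conjugate gamma-Poisson update and of the sampling step described above; the prior parameters and data are hypothetical placeholders (MERGER's internal implementation is described in Garcia-Aristizabal et al, 2019).
{code:language=python}
import numpy as np

rng = np.random.default_rng(42)

def poisson_event_probability(a, b, r, t, horizon, n_samples=10_000):
    """Gamma(a, b) prior (rate parameterisation) updated with r events
    observed in t years (conjugate update: a+r, b+t); returns samples of
    the probability of at least one event in `horizon` years,
    P = 1 - exp(-lam * horizon)."""
    a_post, b_post = a + r, b + t
    lam = rng.gamma(a_post, 1.0 / b_post, n_samples)  # numpy scale = 1/rate
    return 1.0 - np.exp(-lam * horizon)

# Hypothetical prior and data (r = 0 events observed in t = 1 year):
p = poisson_event_probability(a=1.0, b=2.0e4, r=0, t=1.0, horizon=1.0)
print(f"P(at least one event in 1 year): mean = {p.mean():.2e}")
{code}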
\\
*3.1.2  Binomial model*
The standard model for events occurring out of a number of trials uses the binomial distribution (e.g. Siu and Kelly, 1998). This model assumes that the probability φ of observing _r_ events (e.g. failures) in _n_ trials is independent of the order in which successes and failures occur. The inference problem in this case is to estimate the value of the φ parameter of the binomial distribution, which may be uncertain due, for example, to a low number of trials. If φ is uncertain, then we can define a probability distribution for φ. For simplicity, in this case we also adopt the conjugate pair binomial likelihood / beta prior (another model frequently used in risk assessment applications). 
_r_ and _n_ are site-specific observations and the input data required for setting the (binomial) likelihood function. On the other hand, the prior beta distribution is characterised by two parameters (α and β) whose definition might not be intuitive. Therefore, to set the model parameters of the prior distribution, we make use of indirect measures that can be more easily defined by an analyst. In practice, we identify an _average_ value as the best prior estimate of the φ parameter and a measure of the degree of uncertainty related to that estimate. To define the degree of uncertainty in the _best-estimate_ value we use the so-called _equivalent sample size_ (or _equivalent number of data{_}), Λ, with Λ>0, as defined in Marzocchi et al (2008) and Selva and Sandri (2013). Λ can be interpreted as the quantity of data that the analyst expects to have at hand in order to modify a prior belief regarding the value of the φ parameter. This means that the larger Λ is, the more confident the analyst is about the prior state of knowledge. For example, by setting Λ=1 the analyst expresses a maximum-uncertainty condition, implying that just one single observation can substantially modify the prior state of knowledge. 
It is worth noting here that the definition of Θ and Λ needs to be consistent. For example, if a prior belief regarding a given event indicates a very low prior value of Θ, it means that the analyst is quite confident that the event's probability is very low (in other words, there is low epistemic uncertainty regarding this parameter); in such a case, a high Λ value is required to reflect the low epistemic uncertainty regarding this prior belief (i.e. setting, e.g. Λ=1 in such a case would be inconsistent with that prior belief). It is difficult to define a _rule of thumb_ for setting Θ and Λ; indicatively, we can assume that the absolute value of the order of magnitude of Θ provides a rough indication of the order of magnitude of the equivalent sample size (or _trials{_}) required for obtaining such an estimate. Therefore, for a consistent definition of Λ we can consider, in general, that _extreme_ (high or low) Θ values usually imply high confidence (i.e. the event is very likely or very unlikely) and therefore a high value of Λ. 
Table 3 summarises the data required for defining a binomial basic event. Once the posterior distribution for φ has been calculated (for details see Garcia-Aristizabal et al, 2019), samples of φ values are drawn from the posterior distribution to define the probability of a binomial BE. 
\\
*Table<ac:structured-macro ac:name="anchor" ac:schema-version="1" ac:macro-id="476d8735-8fae-45cd-80da-a65f43dcbf7d"><ac:parameter ac:name="">BMtab_ondemand</ac:parameter></ac:structured-macro> 3{*}: Parameters required for setting a binomial basic event
!worddav1b83463eae34e742662a1351aabb79da.png|height=137,width=548!
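A minimal sketch of the beta-binomial update described above, assuming the simple parameterisation α = ΘΛ, β = (1−Θ)Λ (prior mean Θ, equivalent sample size Λ); the exact convention used by MERGER follows Marzocchi et al (2008), and all values below are hypothetical.
{code:language=python}
import numpy as np

rng = np.random.default_rng(0)

def binomial_posterior(theta, lam, r, n, n_samples=10_000):
    """Beta prior with mean Theta and equivalent sample size Lambda
    (assumed parameterisation: alpha = Theta*Lambda, beta = (1-Theta)*Lambda),
    updated with r events observed in n trials; returns samples of phi."""
    alpha, beta = theta * lam, (1.0 - theta) * lam
    return rng.beta(alpha + r, beta + n - r, n_samples)

# Hypothetical prior (Theta = 0.1, Lambda = 10) and data (r = 0, n = 0,
# i.e. no trials yet, so the posterior equals the prior):
phi = binomial_posterior(theta=0.1, lam=10.0, r=0, n=0)
print(f"posterior mean of phi: {phi.mean():.3f}")
{code}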
\\
*3.1.3  The Weibull model*
The Weibull distribution has been identified as one of the most useful distributions for modelling and analysing lifetime data in different areas such as engineering, geosciences, biology, and other fields (see, e.g. Garcia-Aristizabal et al, 2012). In this approach, the Weibull distribution is used for describing systems with a time-dependent hazard rate, in which the probability of an event occurrence depends on the time passed since the last event.
The Weibull distribution is characterised by two parameters (λ>0 and _k{_}>0) and is defined for positive real numbers (e.g. Leemis, 2009). The mathematical description of this model is presented in Garcia-Aristizabal et al (2019). _k_ determines the time-dependent behaviour of the hazard rate: for _k{_}<1, the hazard rate decreases with time; for _k{_}=1, the hazard rate is constant with time (equivalent to a homogeneous Poisson process), while for _k{_}>1, the hazard rate increases with time (i.e. as in an ageing or wearing process). Given the typology of applications in which the MHR model is applied here, the cases of main interest are those for which _k{_}≥1. 
In the MHR approach presented in this guide, the Weibull model is used to calculate the conditional probability that an event happens in a time interval (τ, τ+Δ{_}t{_}), given that an interval of τ years has passed since the occurrence of the previous event, where τ is the _current_ time of the assessment, measured as the time from the last event.
The definition of a BE using the Weibull model in the MHR approach presented in this guide requires setting five parameters, as described in Table 4; beyond the values of the two distribution parameters (λ, _k{_}), the uncertainties in the model parameter values are also required. Likewise, since this model is used for including processes with a time-dependent hazard rate, it is also necessary to set τ, the time passed since the last event (or, in the case of modelling an element's wearing/ageing, the time that the element has been operating). 
\\
*Table<ac:structured-macro ac:name="anchor" ac:schema-version="1" ac:macro-id="e7b80bc1-660e-466e-bd16-7ac0061abfda"><ac:parameter ac:name="">BMtab_wei</ac:parameter></ac:structured-macro> 4{*}: Parameters required for setting a basic event of the Weibull class
!worddavce77af8885295fca4e6619d67db58352.png|height=203,width=555!
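For illustration, the conditional probability described above can be sketched as follows; the parameter values are hypothetical placeholders, and the exact formulation used in MERGER is given in Garcia-Aristizabal et al (2019).
{code:language=python}
import numpy as np

def weibull_conditional_prob(lam, k, tau, dt):
    """Probability that an event occurs in (tau, tau + dt), given that
    tau years have passed since the last event, for a Weibull process with
    scale lam and shape k: 1 - S(tau+dt)/S(tau), with survival function
    S(t) = exp(-(t/lam)**k)."""
    cum_hazard = lambda t: (t / lam) ** k
    return 1.0 - np.exp(-(cum_hazard(tau + dt) - cum_hazard(tau)))

# Hypothetical ageing process (k > 1), 5 years since the last event:
print(weibull_conditional_prob(lam=10.0, k=2.0, tau=5.0, dt=1.0))
{code}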
*3.1.4  Considering external perturbations through physical reliability models*
<span style="color: #ff0000">Note: This utility is not available yet. You can skip this section for the moment</span>.
The _physical reliability models_ (PRM) aim to explain the probability (or the rate) of event occurrences (e.g. hazardous events and/or system failures) as a function of operational physical parameters (e.g. Dasgupta and Pecht, 1991; Ebeling, 1997; Melchers, 1999; Hall and Strutt, 2003; Khakzad et al, 2012). PRM are often used in reliability analysis for describing degradation and failure processes of both mechanical and electronic components (Carter, 1986; Ebeling, 1997). 
We consider that this typology of modelling approach may be of particular interest for MHR assessments applied to geo-resource development activities because it makes it possible (1) to introduce into the analysis simple cases of expected damage to a system's elements caused by generic loads (e.g., external hazards such as earthquakes or extreme meteorological events), and (2) to consider operational parameters as covariates when modelling event occurrence rates or probabilities. A number of physical reliability models have been proposed in reliability analysis; for the approach presented in this guide, we are particularly interested in implementing two typologies of PRM: (1) static and (2) dynamic PRM. 
*Static PRMs -*
We implement Static PRM as a basic template for assessing damage probabilities of elements exposed to generic loads (e.g. external hazards). This typology of models has been widely used in risk assessments associated with natural hazards, mostly in the field of seismic risk analysis (e.g. EERI Committee on Seismic Risk, 1989, among many others).
For a generic conceptual description of the static PRM implemented in this work, we take as reference the random shock-loading model (e.g. Ebeling, 1997; Hall and Strutt, 2003). This is a simple model in which it is assumed that a variable stress load _L_ is applied, at random times, to an element (e.g. a system's component or infrastructural element) which has a determined capacity to support that load (hereinafter called _strength{_}). Stresses are, in general, physical or chemical parameters affecting the component's operation. In probabilistic hazard assessment, such stresses are often called _intensity measures_ (e.g. Burby, 1998). On the other hand, the strength is defined as the highest amount of stress that the component can bear without reaching a determined _damage state_ (which can be defined at different degrees of criticality, as for example a failure, a moderate damage, etc). 
A distribution function of the intensity measure (stress), associated with reaching a given damage state, is what in risk assessment practice is usually called a _fragility function_ (e.g. Kennedy et al, 1980). According to this basic model, a given damage state (e.g. failure) occurs when the stress on the component exceeds its strength (e.g. Hall and Strutt, 2003). The mathematical description of the implemented model is presented in Garcia-Aristizabal et al (2019). Stress and strength can be constant or considered as random variables having known probability distribution functions. 
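A minimal Monte Carlo sketch of the stress-strength interference idea just described; the distributions and parameters below are hypothetical placeholders, not those of any model shipped with MERGER.
{code:language=python}
import numpy as np

rng = np.random.default_rng(1)

def damage_probability(stress_samples, strength_samples):
    """Static PRM / stress-strength interference: a damage state is reached
    when the load L exceeds the strength S, so P(damage) is estimated as
    the fraction of Monte Carlo samples with L > S."""
    return np.mean(stress_samples > strength_samples)

# Hypothetical lognormal stress (intensity measure) and strength:
L = rng.lognormal(mean=0.0, sigma=0.5, size=100_000)
S = rng.lognormal(mean=1.0, sigma=0.3, size=100_000)
print(f"P(L > S) = {damage_probability(L, S):.4f}")
{code}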
*Dynamic PRMs -*
Dynamic PRMs aim to explain the event occurrence (e.g. the failure of a component) as a multivariate function of operational physical parameters (e.g. Ebeling, 1997; Khakzad et al, 2012). Operational physical parameters that can be used as covariates may include, among others, temperature, velocity, pressure, vibration amplitude, fluid injection rates, etc. The dynamic PRMs are considered in this approach assuming that it is possible to identify either: 
1. A relationship between operational or external parameters of interest and the rate of occurrence of events stressing the system (i.e. at the _hazard_ level); or, 
2. A relationship between operational or external parameters and the strength of components of interest (i.e. at the _vulnerability_ level). 
The mathematical description of the implemented model is presented in Garcia-Aristizabal et al (2019). In the first case (i.e. covariates linked to the stress component), we assume that the rate of occurrence of the loading process (hazard) may be modulated as a function of one or more covariates of interest. Such a model is implemented by defining a probability distribution for modelling the rate of the loading process, and the parameters of that distribution are allowed to change as a function of the selected covariates of interest. Examples of hazard-related covariate model implementations of interest for MHR assessments are the covariate approaches for modelling time-dependent extreme events (e.g. Garcia-Aristizabal et al, 2015), and the model for assessing induced seismicity rates as a function of the rate at which fluids are injected underground (Garcia-Aristizabal, 2018). 
In the second case (i.e. covariates linked to the strength component), the covariates are linked to the parameters of the distribution used to model the probability of reaching a determined damage state (e.g. failure) of an element of interest. Examples of models for performing analysis in this case have been presented, for example, by Hall and Strutt (2003) and Khakzad et al (2012).
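As a purely illustrative sketch of the first case, the rate of the loading process can be modulated by a covariate through, for example, a log-linear link; this link function is an assumption of the sketch (one possible choice among others), and the models actually implemented are described in Garcia-Aristizabal et al (2019) and Garcia-Aristizabal (2018).
{code:language=python}
import numpy as np

def covariate_rate(x, b0, b1):
    """Hypothetical dynamic-PRM sketch: the occurrence rate of the loading
    process is modulated by a covariate x through a log-linear link,
    lam(x) = exp(b0 + b1 * x), which keeps the rate positive for any x."""
    return np.exp(b0 + b1 * x)

# Example: event rate as a function of a fluid injection rate
# (placeholder coefficients and units):
injection_rates = np.array([0.0, 1.0, 2.0, 4.0])
print(covariate_rate(injection_rates, b0=-3.0, b1=0.5))
{code}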
The implementation of these models into the MHR approach can be done according to the following general procedure (a detailed example of a specific implementation can be found in Garcia-Aristizabal, 2018): 
1. Identification of informative variables that can be correlated with the rate of occurrence of determined events of interest. 
2. Identification of a probability distribution to be used as a basic template for describing the process under analysis.
3. Inference of the parameter values of competing deterministic models relating the parameter(s) of the selected template distribution and the covariate(s) of interest, as well as the definition of an objective procedure for model selection. 
4. Testing the performance of the selected model by comparing model forecasts with actual observations.
Once the model has been calibrated and tested, it can be used to calculate the probability of the BE of interest as a function of the values taken by the selected covariates. 
*3.2  MERGER-ET in a nutshell*
<span style="color: #ff0000">Note: This utility is not available yet. You can skip this section for the moment</span>.
\\
In this section, we focus on the probabilistic tools implemented for modelling nodes in the event tree part of the BT structure. ET nodes are often defined as binary situations characterised by two possible outcomes (e.g. yes/no, works/fails, etc.); in such cases, event probabilities are defined using the binomial model described in the previous section. Nevertheless, when constructing an ET for assessing consequences, it is often necessary to set nodes with more than two mutually exclusive events. For example, if the starting event of an ET is the leakage of a certain hazardous material (HazMat) into surface water, the subsequent node of the ET can be set to assess the probability that the leaked volume is _large{_}, _medium{_}, or _small_ (according, e.g., to some pre-defined thresholds). 
To set event probabilities in such cases, we implement the _multinomial_ model, which is a generalisation of the binomial case. It can be set for cases in which there are _n_ possible mutually exclusive and exhaustive events at the ET's node, each event with probability θ~i~ (where, for a given node, the θ~i~ values sum to one).
We perform Bayesian inference of the θ~i~ parameters, adopting for simplicity the conjugate pair multinomial likelihood / Dirichlet prior (e.g. Gelman et al, 1995). The mathematical definition of the multinomial model can be found in Garcia-Aristizabal et al (2019). 
The parameters required for setting the multinomial model are summarised in Table 5. To set the model parameters of the prior distribution we follow an approach similar to the one used for the binomial model; for the analyst it is usually easier to set an _average_ value as the best prior estimate of each parameter (i.e. the probability θ~i~ for the _i{_}th event) and to define a degree of confidence in that estimate. Therefore, the parameters of the Dirichlet prior distribution are set adopting an approach analogous to the one presented for the beta distribution (Marzocchi et al, 2008), in which it is possible to set the prior state of knowledge by defining (1) a vector of _n_ _best-estimate_ values and (2) an estimate of the uncertainty associated with these prior estimations using the _equivalent sample size{_}, Λ (which, as defined, is a number representing the quantity of data that the analyst expects to have in order to modify the prior values). 
\\
\\
*Table<ac:structured-macro ac:name="anchor" ac:schema-version="1" ac:macro-id="a972233e-7c0a-4bfb-ba8f-1d4241d9ecd5"><ac:parameter ac:name="">BMtab_multinom</ac:parameter></ac:structured-macro> 5:* Parameters required for setting the multinomial model
!worddavc3b62f9c907c38dde5507ff59b004d4e.png|height=172,width=548!
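A sketch of the Dirichlet-multinomial update under the parameterisation described above; the assumption α~i~ = θ~i~Λ and all numerical values are placeholders of this sketch (the convention actually used follows Marzocchi et al, 2008).
{code:language=python}
import numpy as np

rng = np.random.default_rng(2)

def multinomial_posterior(theta_prior, lam, counts, n_samples=10_000):
    """Dirichlet prior built from a vector of best-estimate probabilities
    theta_prior (summing to 1) and an equivalent sample size Lambda
    (assumed parameterisation alpha_i = theta_i * Lambda), updated with the
    observed counts of each of the n mutually exclusive outcomes."""
    alpha = np.asarray(theta_prior) * lam + np.asarray(counts)
    return rng.dirichlet(alpha, n_samples)

# Hypothetical node with three outcomes (small/medium/large leaked volume):
samples = multinomial_posterior([0.7, 0.2, 0.1], lam=20.0, counts=[3, 1, 0])
print("posterior means:", samples.mean(axis=0))
{code}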
<ac:structured-macro ac:name="anchor" ac:schema-version="1" ac:macro-id="de2a2a83-5b03-4e18-ac51-efa20dd3a83b"><ac:parameter ac:name="">BMsec_MCBT</ac:parameter></ac:structured-macro>{*}3.3  FT and ET evaluation using Monte Carlo simulations*
A BT structure is quantitatively assessed by using the probability data from the BEs of the FT and the nodes of the ET. Large and complex BTs require the aid of analytical or simulation-based methods for their evaluation (e.g. Ferdous et al, 2007; Rao et al, 2009; Yevkin, 2010; Taheriyoun and Moradinejad, 2014). 
We use Monte Carlo simulations for evaluating the FT and ET components of the risk pathway scenarios structured in a BT approach. The system is structured as follows: first, the FT is solved using Monte Carlo simulations by sampling the probability distributions defined for each BE. In this way, we obtain an empirical distribution for the probability of the critical top event of the FT. Empirical distributions of intermediate events of interest are also provided as an intermediate output. 
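A minimal sketch of this first step; the BE distributions, gate layout, and parameter values below are placeholders (they loosely mirror, but are not, those of the worked example), and the actual algorithms are described in Garcia-Aristizabal et al (2019).
{code:language=python}
import numpy as np

rng = np.random.default_rng(3)

def evaluate_ft(n_iter=200):
    """Sketch of the Monte Carlo FT evaluation: at each iteration the BE
    probabilities are sampled from their (posterior) distributions and
    propagated through the gates, yielding an empirical distribution for
    the top-event probability."""
    p_te = np.empty(n_iter)
    for i in range(n_iter):
        # Poisson BEs: sample a rate, convert to a unit-time probability
        b04 = 1.0 - np.exp(-rng.gamma(1.0, 1.0e-4))   # placeholder gamma posterior
        b05 = 1.0 - np.exp(-rng.gamma(1.0, 1.0e-3))   # placeholder gamma posterior
        b06 = rng.beta(1.0, 9.0)                      # binomial BE (placeholder)
        intermediate = b05 * b06                      # AND gate
        p_te[i] = b04 * intermediate                  # AND gate (top event)
    return p_te

samples = evaluate_ft()
print(f"P(TE): median = {np.median(samples):.2e}")
{code}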
Second, the empirical distribution obtained for the TE's probability is assigned to the initial node of the related ET, and the outcome of the ET is also assessed using Monte Carlo simulations. The algorithms implemented for solving the FT and ET components of the BT structure are described in Garcia-Aristizabal et al (2019).
<ac:structured-macro ac:name="anchor" ac:schema-version="1" ac:macro-id="2a0d6ca2-a61d-4952-930a-df492ba17a28"><ac:parameter ac:name="">BMsec_IAMExpOp</ac:parameter></ac:structured-macro>{*}3.4  Integrated assessment modelling*
Many of the events of interest for MHR assessments in geo-resource development projects are rare events that, by definition, are characterised by very low occurrence probabilities. Contrary to what usually happens with pure industrial applications, many risk pathways associated with geo-resource development activities involve elements intrinsically related to features of underground geological formations (as, e.g. rock fracture connectivity, fluid flow through porous media, pore pressure perturbations, induced seismicity, etc.) for which direct measurements may be very limited or even unavailable. It is for this reason that we explore alternative sources of information for retrieving useful data for setting a prior state of knowledge for a determined BE, intermediate event or node in the BT structure. 
Integrated assessment modelling (IAM) is a tool used for tackling complex problems in which obtaining information from direct observations or measurements is challenging. IAM has been widely used, for example, for the implementation of climate policies that require the best possible understanding of the potential impacts of climate change under different anthropogenic emission scenarios (see, e.g. Stanton et al, 2009). In the field of geo-resources development, IAM has been used, for example, to quantify the engineering risk in shale gas development (Soeder et al, 2014). 
IAM usually tries to link in a single modelling framework the main features of a system under analysis, taking into account the uncertainties in the modelling process. An IAM application to MHR assessment can be implemented to understand how a determined geologic or environmental system, of interest in a given BE (FT) or node (ET), behaves under determined conditions. Furthermore, it may rely on a combination of multiple data sources, such as numerical modelling or field measurements. 
However, physical/stochastic modelling can be a time- and computationally expensive activity, constituting a limit to the implementation of physically based IAM in many practical applications. The use of expert judgement elicitation techniques is an alternative (or complementary) tool that is often used for evaluating rare or poorly understood phenomena. 
Elicitation is the process of formally capturing judgement or opinion from a panel of recognised experts regarding a well-defined problem, relying on their combined training and expertise (Meyer and Booker, 1991; Cooke, 1991). Structured elicitation of expert judgement has been widely used for supporting probabilistic hazard and risk assessments in different contexts, for example seismic hazard assessment (e.g. Budnitz et al, 1998) and volcanic hazards and risk (e.g. Aspinall, 2006). 
The outcome of IAM, therefore, can be used to set the probability of a determined BE (or ET node) for which no direct data is available. An example of IAM of interest for MHR assessment applied to the development of geo-resources has been developed in the framework of the European project SHEER (Shale gas exploration and exploitation induced Risks), where IAM has been used to assess a risk pathway scenario in which the connectivity of rock fracture networks connecting two zones of interest is a BE of interest (for details see Garcia-Aristizabal, 2017). 
*4  Need help? Troubles? Feedback?*
Contact us: 
_Alexander Garcia-Aristizabal._
_Istituto Nazionale di Geofisica e Vulcanologia, Sezione di Bologna._
_Email: alexander.garcia@ingv.it_ 
*5  Citation*
Please acknowledge use of this application in your work: 
  • _IS-EPOS (2019). MERGER \[Web application\]. DOI: ____. Retrieved from https://tcs.ah-epos.eu/_


  • _Garcia-Aristizabal, A., J. Kocot, R. Russo, and P. Gasparini (2019). A probabilistic tool for multi-hazard risk analysis using a bow-tie approach: application to environmental risk assessments for geo-resource development projects. Acta Geophys. 67, 385-410. DOI: 10.1007/s11600-018-0201-7_
