
The Application takes as input seismic and/or operational data parameters and transforms them into their equivalent dimensions following the methodology introduced by Lasocki (2014).
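In a simplified sketch, the transformation maps each parameter value to its estimated cumulative probability, so every parameter ends up on the common [0, 1] "equivalent dimension" scale. The snippet below uses a plain empirical (rank-based) CDF for illustration; the actual methodology (Lasocki, 2014) estimates the CDF with an adaptive kernel.

```python
import numpy as np

def to_equivalent_dimension(values):
    """Map parameter values to (0, 1) via their empirical CDF.

    Simplified illustration only: the Lasocki (2014) methodology
    uses an adaptive-kernel CDF estimate rather than plain ranks.
    """
    values = np.asarray(values, dtype=float)
    ranks = values.argsort().argsort() + 1   # 1-based ranks
    return ranks / (len(values) + 1)         # equivalent dimensions in (0, 1)

depths = [1.2, 3.4, 2.2, 5.0]
eq = to_equivalent_dimension(depths)
# ordering of the original values is preserved on the [0, 1] scale
```

Because every transformed parameter lives on the same scale, parameters with very different physical units (e.g. depth in km, energy in J) become directly comparable, which is what later applications such as "Cluster Analysis" rely on.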

REFERENCES Code Repository, Document Repository

CATEGORY Data Processing Applications

KEYWORDS Data conversion, Parameter Probabilistic Distribution

CITATION Please acknowledge use of this application in your work: IS-EPOS. (2019). Transformation to Equivalent Dimensions [Web application]. Retrieved from https://tcs.ah-epos.eu/

Step by Step   

After the User adds the Application into his/her personal workspace, the following window appears on the screen (Figure 1):


Figure 1. Input data selection window of “Transformation to Equivalent Dimensions” Application.

The User is requested to select seismic catalog data and/or GDF data (red field in Figure 1). If both data sources (catalog and GDF) are selected, the GDF data are modified so that they correspond to each seismic event (i.e. GDF data are attributed as additional catalog parameters). Once the “Add optional files” window is filled (red field, Figure 1), the User is requested to upload seismic catalog and/or GDF files (red fields in Figure 2) that are already available in his/her personal workspace (green field in Figure 2).
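The attribution of GDF values to seismic events can be pictured as sampling the operational time series at each event's origin time. The sketch below uses linear interpolation and hypothetical parameter names (the Application's exact matching rule is not specified here):

```python
import numpy as np

# Hypothetical example: a GDF operational parameter (e.g. an injection
# rate) sampled at regular times, attributed to seismic events by
# linear interpolation at the event origin times.
gdf_times = np.array([0.0, 1.0, 2.0, 3.0])        # days
injection_rate = np.array([10.0, 12.0, 11.0, 15.0])

event_times = np.array([0.5, 2.25])               # event origin times (days)
event_rate = np.interp(event_times, gdf_times, injection_rate)
# event_rate now holds one value per event, i.e. an extra catalog column
```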

Figure 2. Input data files uploading for “Transformation to Equivalent Dimensions” Application.

Having the data files uploaded, the next step is to filter the data and select parameters for performing the Transformation to Equivalent Dimensions (Figure 3).

Figure 3. Parameters selection for performing “Transformation to Equivalent Dimensions” Application.

The User is therefore requested to fill the fields shown in Figure 3 (from top to bottom):

  • Catalog - The User may click on the "change input" button in order to use seismic catalog data from among the files already uploaded in his/her personal workspace. He/she may also either clear the field or remove the Catalog input from the Application completely.
  • GDF with time-correlated parameters - The User may click on the "change input" button in order to use a GDF data file from among the files already uploaded in his/her personal workspace. He/she may also either clear the field or remove the GDF data input from the Application completely.
  • Chosen magnitude column - The User may choose among different magnitude scales (e.g. ML, MW) in the Episodes where these scales are available.
  • Mmin - The User is now requested to choose the minimum magnitude to be assumed as the completeness magnitude for the analysis. This can be done in two ways. The first is to type a single magnitude value in the empty box, possibly after he/she has performed an individual analysis (see the "Completeness Magnitude Estimation" Application). The second is to graphically select the minimum magnitude from the Normal or the Cumulative histograms, which are available after clicking on the respective tabs. In both cases there is an option to alter the step of the histogram's bars and to select between linear and logarithmic scale of the Y-axis for the plotting.
  • Catalog columns – If the User has selected a Catalog as an input file, he/she is now requested to select among the available catalog columns (parameters). Clicking the “All” box selects all parameters automatically.
  • Vector names – If the User has selected a GDF with time-correlated parameters file as input, he/she is now requested to select among the available GDF vectors (parameters). Clicking the “All” box selects all parameters automatically.
  • Time lag – The User is requested to enter a time lag, i.e. a positive number corresponding to the delayed response of the seismicity to the technological activities (only applicable when both a seismic catalog and a GDF file are selected; the time unit is ‘days’).
  • Randomization Mode – Data randomization mode to eliminate identical values in the input parameter vectors. Available options are “Exponential”, “Normal”, “Uniform”, and “None”.
  • Sample Multiplication Mode – Mode for determining the sample multiplication. Available options are “No” (for no multiplication), “Left”, “Right” (for doubling the sample to the left or right, respectively) and “Both” (for tripling the sample, doubling both to the left and to the right).
  • Mode for Data Picking – The User is here requested to select the mode for defining the starting and ending points of the background sample and testing data. Three such modes are available:
    • “Events” – the User may type the picking-point range (minimum and maximum point) for the background as well as for the testing data.
    • “Time” – the User may type or select from a calendar the date range (starting and ending time) for the background as well as for the testing data (Figure 4).
    • “All” – the entire range of data is considered as both background and testing sample (no further action is required).
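The randomization and sample-multiplication options above can be sketched as follows. The noise scale and the mirroring points (reflection about the sample minimum/maximum, a standard device against boundary bias in kernel estimation) are assumptions; the Application's exact implementation may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

def randomize(x, mode="uniform", scale=1e-6):
    """Break ties by adding a small random perturbation (sketch)."""
    x = np.asarray(x, dtype=float)
    if mode == "uniform":
        return x + rng.uniform(-scale, scale, size=x.size)
    if mode == "normal":
        return x + rng.normal(0.0, scale, size=x.size)
    if mode == "exponential":
        return x + rng.exponential(scale, size=x.size)
    return x  # "none": leave values untouched

def multiply_sample(x, mode="both"):
    """Mirror the sample about its min and/or max (sketch)."""
    x = np.asarray(x, dtype=float)
    left = 2 * x.min() - x    # reflection about the minimum
    right = 2 * x.max() - x   # reflection about the maximum
    if mode == "left":
        return np.concatenate([left, x])         # doubled sample
    if mode == "right":
        return np.concatenate([x, right])        # doubled sample
    if mode == "both":
        return np.concatenate([left, x, right])  # tripled sample
    return x  # "no": original sample
```

For example, `multiply_sample([1.0, 2.0, 3.0], "both")` returns a tripled, 9-element sample, matching the "Both" option described above.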

Figure 4. Background and Testing data selection with “Time” mode.

After defining the aforementioned parameters, the User shall click on the ‘RUN’ button (green tab in Figure 4) and the calculations are performed. The Status changes from 'CREATED' through 'SENT_TO_SERVER' and 'RUNNING' to 'FINISHED', and the output is created and plotted in the main window. The Analysis Results table appears on the screen and comprises the following outputs:

A) Output figures are created showing the histograms with the original and transformed background samples and testing data (see Figure 5), as well as the adaptive kernel weighting factors (Figure 6), for each of the considered parameters. From the field marked in red in Figure 5, the User may select the parameter for which the corresponding histograms and plots are to be created.




Figure 5. Output figure showing the histograms of original as well as the transformed values of “Depth” parameter.

Figure 6. Kernel weighting factor of “Depth” parameter as shown in Figure 5.
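The weighting factors plotted in Figure 6 can be illustrated with the standard Silverman-style adaptive-kernel construction, where each factor is lambda_i = (f_pilot(x_i) / g) ** (-1/2) and g is the geometric mean of a pilot density estimate at the sample points. This is a generic sketch of the technique, not necessarily the Application's exact formula:

```python
import numpy as np
from scipy.stats import gaussian_kde

def adaptive_weights(x):
    """Silverman-style adaptive-kernel weighting factors (sketch).

    lambda_i = (f_pilot(x_i) / g) ** (-1/2), with g the geometric
    mean of the pilot density evaluated at the sample points.
    """
    x = np.asarray(x, dtype=float)
    pilot = gaussian_kde(x)(x)            # pilot density at each point
    g = np.exp(np.mean(np.log(pilot)))    # geometric mean of the density
    return (pilot / g) ** -0.5            # weighting factors lambda_i

lam = adaptive_weights(np.random.default_rng(1).normal(size=200))
# points in low-density regions get lambda > 1, i.e. wider kernels
```

By construction, the geometric mean of the factors equals 1, so the adaptive kernels widen in the tails and narrow near the mode without changing the overall amount of smoothing.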

B) OUTPUT_REPORT.txt: An output report is generated and stored, including a summary of the input parameters and data considered, as well as the results obtained from the Transformation to Equivalent Dimensions (Figure 7). The report can be downloaded as a .txt file.

Figure 7. Output report produced by the Application.

C) t2ed_output.mat: A MATLAB structure is finally produced, which can be used as input for the “Cluster Analysis” Application or downloaded by the User for further use. The structure contains the following fields:

Field            | Type   | Format              | Description
xt               | Vector | Double              | The transformed Testing Data (parameter values in Equivalent Dimensions, [0 1])
xBG              | Vector | Double              | The transformed Background Sample (parameter values in Equivalent Dimensions, [0 1])
ierr             | Scalar | Integer (0, 1 or 2) | h-convergence indicator (see the “EQ_DIM” function for details)
h                | Scalar | Double              | Kernel smoothing factor
xx*              | Vector | Double              | The background sample considered for transformation
ambd*            | Vector | Double              | Weighting factors for the adaptive kernel
field            | String | String              | Description of the corresponding field (transformed parameter)
Index_Testing    | Vector | Integer             | Indices of the Testing Data in the original dataset (‘origval_all’ field) that were transformed
Index_Background | Vector | Integer             | Indices of the Background Sample in the original dataset (‘origval_all’ field) that were transformed
all+             | Vector | Double              | Transformed parameter vector with the size of the original parameter vector (size of ‘origval_all’, including NaNs)
origval          | Vector | Double              | Vector with the original parameters that were transformed (without the NaNs)
origval_all+     | Vector | Double              | Original input vector with the parameters that were transformed (including NaNs)
Source           | String | String              | Source of the data (either ‘Catalog’ or ‘GDF’)

*(can be doubled or tripled according to the selected “Sample Multiplication Mode”)

+NaNs may be included in some of these vectors
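The t2ed_output.mat structure can also be inspected outside MATLAB, e.g. with SciPy. The snippet below builds a tiny stand-in structure with a few of the fields from the table above purely for illustration (the real file comes from the Application, and its top-level variable name should be checked via the loaded dictionary's keys):

```python
import numpy as np
from scipy.io import savemat, loadmat

# Create a tiny stand-in structure with a few of the fields listed
# above (illustration only; the real file is produced by the Application).
savemat("t2ed_demo.mat", {"d": {"xt": np.array([0.1, 0.7]),
                                "xBG": np.array([0.2, 0.5, 0.9]),
                                "h": 0.05,
                                "Source": "Catalog"}})

mat = loadmat("t2ed_demo.mat", squeeze_me=True, struct_as_record=False)
s = mat["d"]            # MATLAB struct -> Python object with attributes
# s.xt, s.xBG, s.h, s.Source correspond to the fields in the table
```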




