
The application takes as input a vector of numerical or time values and transforms it into its equivalent dimension following the methodology introduced by Lasocki (2014).
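The core idea can be illustrated with a minimal sketch: each value is mapped onto [0, 1] through an estimate of its distribution function. For brevity, the function below uses the plain empirical CDF; the application itself relies on an adaptive kernel estimator of the distribution function (Lasocki, 2014), so the helper name and details here are illustrative only.

```python
import numpy as np

def to_equivalent_dimension(values):
    """Illustrative transform: map each value onto (0, 1) via its rank,
    i.e. a plain empirical CDF. The application uses an adaptive kernel
    estimator of the distribution function instead (Lasocki, 2014)."""
    x = np.asarray(values, dtype=float)
    ranks = np.argsort(np.argsort(x))      # 0-based rank of each element
    return (ranks + 1) / (x.size + 1)      # ranks scaled into (0, 1)

print(to_equivalent_dimension([3.1, 0.5, 2.2, 4.8]))  # [0.6 0.2 0.4 0.8]
```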



REFERENCES  Code Repository Document Repository

CATEGORY Data Processing Applications

KEYWORDS Data conversion, Parameter Probabilistic Distribution

CITATION Please acknowledge use of this application in your work: IS-EPOS. (2019). Transformation to Equivalent Dimensions [Web application]. Retrieved from https://tcs.ah-epos.eu/

Step by Step

After the User adds the Application to his/her personal workspace, the following window appears on the screen (Figure 1):


Figure 1. Input data upload window for the "Transformation to Equivalent Dimensions" application.

The User is requested to upload a vector of either numerical or time values (red field in Figure 1) which is already available in his/her personal workspace (green field in Figure 1). However, if the User does not have such a file, he or she can generate one using the "Catalog to Vectors" or "GDF to Vectors" converter, as shown in Figure 2.


Figure 2. Application input information

After clicking on the application input information (red field in Figure 2), the User can obtain a vector of numerical or time values with the “Catalog to Vectors” or the “GDF to Vectors” converter applications (green field in Figure 2).



Having uploaded the data files, the next step is to filter the data and select parameters for performing the Transformation to Equivalent Dimensions (Figure 3).

Figure 3. Parameter selection for performing the "Transformation to Equivalent Dimensions" application.

Below is the plot of the number of events against the data from the input vector (green field in Figure 3). The User is therefore requested to fill in the fields shown in Figure 3:

  • Catalog - The User may click on the "change input" button in order to use a seismic catalog from among the ones already uploaded in his/her personal workspace. He/she may also either clear the field or remove the Catalog input from the Application completely.
  • GDF with time-correlated parameters - The User may click on the "change input" button in order to use a GDF data file from among the ones already uploaded in his/her personal workspace. He/she may also either clear the field or remove the GDF data input from the Application completely.
  • Chosen magnitude column - The User may choose among different magnitude scales (e.g. ML, MW) in the Episodes where these scales are available.
  • Mmin - The User is now requested to choose the minimum magnitude to be assumed as the completeness magnitude for the analysis. This can be done in two ways. The first is to type a single magnitude value in the empty box, possibly after he/she has performed an individual analysis (see "Completeness Magnitude Estimation" Application). The second is to graphically select the minimum magnitude from the Normal or the Cumulative histograms, which are available after clicking on the respective tabs. In both cases there is the option to alter the step of the histogram's bars and to select between linear and logarithmic scale of the Y-axis for plotting.
  • Catalog columns – If the User has selected a Catalog as input file, he/she is now requested to select among the available catalog columns (parameters). By clicking the “All” box, all parameters are automatically selected.
  • Vector names – If the User has selected a GDF with time-correlated parameters file as input, he/she is now requested to select among the available GDF vectors (parameters). By clicking the “All” box, all parameters are automatically selected.
  • Time lag – The User is requested to enter a time lag, i.e. a positive number corresponding to the delayed response of the seismicity to the technological activities (only applicable when both a seismic catalog and a GDF file are selected; the time unit is ‘days’).
  • Randomization Mode – Data randomization mode to eliminate identical values in the input parameter vectors. Available options are “Exponential”, “Normal”, “Uniform”, and “None”.
  • Sample Multiplication Mode – Mode for determining the sample multiplication. Available options are “No” (for no multiplication), “Left”, “Right” (for doubling the sample to the left or right, respectively) and “Both” (for tripling the sample, doubling both to the left and to the right).
  • Mode for Data Picking – The User is here requested to select the mode for defining the starting and ending point of the background sample and testing data. Three such modes are available:
    • “Events”, the User may type the picking points range (minimum and maximum point) for the background as well as for the testing data.
    • “Time”, the User may type or select from a calendar the picking date range (starting and ending time) for the background as well as for the testing data (figure 4).
    • “All”, the entire range of data is considered as both background and testing sample (no further action required).
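As a rough sketch of what the "Randomization Mode" and "Sample Multiplication Mode" options described above do, the hypothetical helpers below add small random noise to break ties and mirror the sample about its edges; the noise scales and the exact reflection rules used by the application may differ.

```python
import numpy as np

rng = np.random.default_rng(42)

def randomize_ties(values, mode="Uniform", scale=1e-6):
    """Perturb values slightly so that no two entries stay identical
    (hypothetical sketch of the Randomization Mode options)."""
    x = np.asarray(values, dtype=float)
    if mode == "None":
        return x
    if mode == "Uniform":
        return x + rng.uniform(-scale, scale, size=x.size)
    if mode == "Normal":
        return x + rng.normal(0.0, scale, size=x.size)
    if mode == "Exponential":
        return x + rng.exponential(scale, size=x.size)
    raise ValueError(f"unknown mode: {mode}")

def multiply_sample(values, mode="No"):
    """Double or triple the sample by reflecting it about its minimum
    ("Left"), its maximum ("Right"), or both edges ("Both")."""
    x = np.asarray(values, dtype=float)
    parts = [x]
    if mode in ("Left", "Both"):
        parts.append(2 * x.min() - x)   # mirror image below the minimum
    if mode in ("Right", "Both"):
        parts.append(2 * x.max() - x)   # mirror image above the maximum
    return np.concatenate(parts)
```

With `mode="Both"` a sample of n values becomes 3n values, matching the "tripling" described above.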


Figure 4. Background and Testing data selection with “Time” mode.

After defining the aforementioned parameters, the User shall click on the ‘RUN’ button (green tab in Figure 4) and the calculations are performed. The Status changes from 'CREATED' through 'SENT_TO_SERVER' and 'RUNNING' to 'FINISHED', and the output is created and plotted in the main window. The Analysis Results table appears on the screen and comprises the following outputs:

A) Output figures are created showing the histograms with the original and transformed background samples and testing data (see Figure 5), as well as the adaptive kernel weighting factors (Figure 6), for each of the considered parameters. The User may select, from the field in red in Figure 5, the parameter for which the corresponding histograms and plots are to be created.


Figure 5. Output figure showing the histograms of the original as well as the transformed values of the “Depth” parameter.


Figure 6. Kernel weighting factors of the “Depth” parameter shown in Figure 5.

B) OUTPUT_REPORT.txt: An output report is generated and stored, including a summary of the input parameters and data considered, as well as the results obtained after the Transformation to Equivalent Dimensions (Figure 7). The report can be downloaded as a .txt file.


Figure 7. Output report produced by the Application.

C) t2ed_output.mat: A MATLAB structure is finally produced, which can be used as input for the “Cluster Analysis” Application or downloaded by the User for further use. The fields that the structure contains are the following:

Field              Type    Format               Parameter
xt                 Vector  Double               The transformed Testing Data (parameter values in the Equivalent Dimensions, [0 1])
xBG                Vector  Double               The transformed Background Sample (parameter values in the Equivalent Dimensions, [0 1])
ierr               Scalar  Integer (0, 1 or 2)  h-convergence indicator (see “EQ_DIM” function for details)
h                  Scalar  Double               Kernel smoothing factor
xx*                Vector  Double               The background sample considered for transformation
ambd*              Vector  Double               Weighting factors for the adaptive kernel
field              String  String               Description of the corresponding field (transformed parameter)
Index_Testing      Vector  Integer              Index indicator of the Testing Data from the original dataset (‘origval_all’ field) which were transformed
Index_Background   Vector  Integer              Index indicator of the Background Sample from the original dataset (‘origval_all’ field) which were transformed
all+               Vector  Double               Transformed parameter vector with the size of the original parameter vector (size of ‘origval_all’, including NaNs)
origval            Vector  Double               Vector with the original parameters that were transformed (without the NaNs)
origval_all+       Vector  Double               Original input vector with the parameters that were transformed (including NaNs)
Source             String  String               Source of the data (either ‘Catalog’ or ‘GDF’)



*(can be doubled or tripled according to the selected “Sample Multiplication Mode”)
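Outside MATLAB, the t2ed_output.mat structure described above can be read with SciPy. A sketch, assuming the structure is stored under a single variable in the file (the exact variable name is not documented here, so the loader simply takes the first non-metadata variable):

```python
from scipy.io import loadmat  # requires SciPy

def load_t2ed(path):
    """Read the t2ed_output.mat structure; field names follow the table above."""
    data = loadmat(path, squeeze_me=True, struct_as_record=False)
    key = next(k for k in data if not k.startswith("__"))  # first real variable
    s = data[key]
    return {
        "xt": s.xt,               # transformed Testing Data in [0, 1]
        "xBG": s.xBG,             # transformed Background Sample in [0, 1]
        "h": float(s.h),          # kernel smoothing factor
        "ierr": int(s.ierr),      # h-convergence indicator (0, 1 or 2)
        "Source": str(s.Source),  # 'Catalog' or 'GDF'
    }
```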
