Institute of Geophysics, University of Tehran
Journal of the Earth and Space Physics, ISSN 2538-371X, Vol. 36, No. 1, 21 April 2010

Sparsity-based denoising and its application in linear inverse problems
Article ID: 21454 (FA). Ali Gholami, Hamid Reza Siahkoohi. Journal Article.

Generally, the presence of noise in geophysical measurements is inevitable, and depending on its type and level it affects the results of geophysical studies. Denoising is therefore an important part of the processing of geophysical data. On the other hand, geophysicists make inferences about the physical properties of the earth's interior from indirect measurements (data) collected at or near the surface. An inverse problem must therefore be solved to obtain estimates of the physical properties of the earth. The vast majority of inverse problems arising in geophysics are ill-posed; in other words, their solutions are neither unique nor stable. Regularization tools are used to find a unique and stable solution for such problems. Regularization uses a priori information about the solution to stabilize it and to suppress the high-frequency oscillations generated by the noise. One common way to perform the regularization is to expand the unknown model (i.e. the solution) with respect to an orthonormal basis, separate the model coefficients from those of the noise, and finally recover the model.
In the singular value decomposition (SVD), the specific physical nature of the model under study is not considered when defining the basis. For homogeneous operators, such a basis does not provide a parsimonious approximation of models that are smooth in some regions while having sharp local changes in others. This is due to the non-localized nature of the SVD basis vectors in the space (time) domain.
The wavelet-vaguelette decomposition (WVD) was introduced as a first approach for adapting wavelet methods to the framework of ill-posed inverse problems. It is a linear projection method based on wavelet-like function systems with properties similar to those of the singular value decomposition. Compared with the SVD, the WVD constructs a near-orthogonal basis whose vectors are well localized in space (time) and frequency, thus producing less Gibbs phenomenon at discontinuities. This property, together with the existence of fast algorithms to compute the basis, makes wavelets a suitable candidate for solving inverse problems.
The vaguelette-wavelet decomposition (VWD) is an alternative to the WVD for solving ill-posed inverse problems. It is a linear projection method based on wavelet function systems. In the VWD the noisy data are expanded in a wavelet series, the resulting wavelet coefficients are thresholded to estimate the wavelet expansion of the noise-free data, and the thresholded coefficients are transformed back to obtain smoothed data. The smoothed data are then inverted for the desired model.
In this paper we discuss: 1. the performance of sparsifying transforms (e.g. the wavelet transform) for the denoising problem and their application to other linear inverse problems, including the WVD and VWD; 2. a comparison of the nonlinear Amplitude-scale-invariant Bayes Estimator (ABE) with hard- and soft-shrinkage filters for estimating signal coefficients in the sparse domain at different noise levels; 3. an efficient method for estimating the standard deviation of the noise, an important task in experiments with a single realization. The estimated standard deviation is then used to determine the regularization parameter in both the wavelet- and SVD-based inversion methods.
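As a rough illustration of points 2 and 3, the sketch below applies hard and soft shrinkage and a MAD-based noise estimate in a one-level Haar wavelet domain. The Haar transform, the test signal and the universal threshold are illustrative stand-ins chosen for brevity, not the exact setup of the paper.

```python
import numpy as np

def haar_dwt(x):
    """One-level orthonormal Haar transform (length of x must be even)."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)  # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)  # detail coefficients
    return a, d

def haar_idwt(a, d):
    """Inverse of the one-level Haar transform."""
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def mad_sigma(d):
    """Noise standard deviation from the finest-scale detail
    coefficients via the median absolute deviation (MAD)."""
    return np.median(np.abs(d)) / 0.6745

def soft(c, t):
    """Soft-shrinkage filter."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def hard(c, t):
    """Hard-shrinkage filter (kept here for comparison)."""
    return np.where(np.abs(c) > t, c, 0.0)

rng = np.random.default_rng(0)
n = 1024
clean = np.ones(n)
clean[n // 2:] = -1.0                      # piecewise-constant model
noisy = clean + 0.2 * rng.standard_normal(n)

a, d = haar_dwt(noisy)
sigma = mad_sigma(d)                       # noise level from the data alone
thr = sigma * np.sqrt(2 * np.log(n))       # universal threshold
denoised = haar_idwt(a, soft(d, thr))

print(np.std(denoised - clean) < np.std(noisy - clean))  # error reduced
```

Because the model is sparse in the Haar domain (the details are near zero away from the jump), thresholding suppresses the noise while keeping the discontinuity.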
Finally, inversion of the integration operator to find the variation rate of a function is used to compare the performance of the introduced methods with the popular SVD method. The results indicate that a simple nonlinear operation of weighting and thresholding wavelet coefficients can consistently outperform classical linear inversion methods.
https://jesphys.ut.ac.ir/article_21454_62d17d60fda575539b554de5acbe7abf.pdf

Safety assessment of landslides by electrical tomography: A case study from Ardabil, Northwestern Iran
Article ID: 21455 (FA). Mohammad Kazem Hafizi (ORCID 0000-0002-5634-1141), Bahman Abbasi, Ahmad Ashtari Talkhestani. Journal Article.

Introduction: The occurrence of landslides depends on several factors, such as the composition and structure of earth materials, rainfall, temperature, the groundwater regime and vegetation cover. Landslides are a major natural hazard in Iran because of its special geological conditions and its mountainous, tectonically active regime. Linear civil structures such as roads, highways and railways face serious problems in Iran, because their long extent exposes them to a variety of geological features. To analyze the stability of these structures, geophysical methods such as electrical tomography are frequently used. The main purposes of the geophysical surveys are reconstruction of the landslide geometry, detection of the sliding surface (between the sliding mass and the bedrock), and exploration of the groundwater flow regime, which is a triggering factor before landslide occurrence.
Regional Geology: Based on the geological map of Sarab (1:100000), rhyolitic and dacitic lavas form the bedrock of the region, with basaltic highlands. A major portion of the road lies on alluvial river terraces whose great capacity to absorb water caused muddy materials to overflow and move spontaneously toward the valley. The main reason for this hazard is the loss of basal support downslope. The landslide occurred several years after the road came into service, indicating the involvement of another factor: the increase in intense rainfall during the preceding months. This explains the region's high vulnerability and loss of basal support during landslides.
Electrical Tomography: Electrical tomography, or electrical resistivity tomography (ERT), is a geophysical technique for imaging subsurface structures (here, the sliding surface) from electrical measurements made at the surface or in boreholes. The first stage of our electrical tomography was therefore to inject an electric current into the ground and then measure the voltage response of the earth. In the next step, to build the inverted resistivity model, the algorithms of the well-known software packages Res2dinv (for two-dimensional inverse models) and Res3dinv (for three-dimensional inverse models) were used.
A simple way to perform blocky inverse modeling is to apply the ordinary least-squares equations. In this method, the model that best fits the data is found through optimization.
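The least-squares step described above can be sketched generically. The Jacobian, the data and the damping value below are synthetic stand-ins (the actual inversion in the paper is performed by Res2dinv/Res3dinv):

```python
import numpy as np

def damped_least_squares(J, r, lam):
    """Solve (J^T J + lam I) dm = J^T r for one model update; lam is
    the damping coefficient mentioned in the text (larger for noisier
    data), with lam = 0 giving ordinary least squares."""
    n = J.shape[1]
    return np.linalg.solve(J.T @ J + lam * np.eye(n), J.T @ r)

# Synthetic linear stand-in for one inversion step: d = J m + noise
rng = np.random.default_rng(1)
J = rng.standard_normal((50, 10))        # sensitivity (Jacobian) matrix
m_true = rng.standard_normal(10)         # "true" blocky model
d = J @ m_true + 0.01 * rng.standard_normal(50)

dm = damped_least_squares(J, d, lam=1e-3)
print(np.linalg.norm(dm - m_true))       # small misfit to the true model
```

With small damping the update essentially reproduces the ordinary least-squares solution; increasing `lam` trades data fit for stability.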
Discussion and Conclusion: The Schlumberger array was used for data acquisition. Of the 99 soundings carried out in this area, 90 were selected for inverse modeling along 6 profiles. Because of the high noise level at the surface, a larger damping (attenuation) coefficient was applied to the surface data. Owing to water absorption, the subsurface materials have higher moisture content and low resistivity, in contrast with the mass above the sliding surface, which has relatively higher resistivity. The 2D electrical tomography generally shows the location of the landslide valley, seen most clearly on resistivity profiles P4 and P5. 3D electrical tomography was also used to obtain a 3D view of the sliding surface. In these models, the sliding surface lies at depths of up to 60 m. Incorporating geotechnical data may improve the inverse-modeling images in upcoming exploration programs.
https://jesphys.ut.ac.ir/article_21455_812ddcdea64cdf13e55b0449778d9dbb.pdf

Inversion of gravity data in wavelet domain using normalized forward models
Article ID: 21456 (FA). Arash Motasharreie, Hossein Zomorodian, Hamid Reza SiahKoohi, Mahmod Mirzaei. Journal Article.

Owing to the remarkable advantages of the wavelet transform, the technique is now very common in gravity analysis. In this research the Green's function occurring in Poisson potential field theory is used to construct non-orthogonal, non-compact, continuous wavelets. This kind of wavelet corresponds directly to the upward-continuation procedure. Simple geometrical bodies such as the sphere, the vertical and horizontal cylinders, the thin sheet and the vertical sheet are used as forward models.
First, the analytical wavelet transform of the models is calculated; then the amplitude and the location of the maximum of the product are used as a new mathematical (forward) model.
The new models have a mathematical relation with source parameters such as the depth and shape of the anomaly. However, because they are normalized, the forward models do not depend on the physical parameter of density contrast.
To examine the accuracy, precision, behavior and applicability of the proposed method, synthetic data, both noisy and noise-free, were used. Subsequently, considering the applicability and extension of the method to practical goals, suitable real datasets were used. Two sources of data were available for testing the algorithm: the Institute of Geophysics, University of Tehran, and the National Iranian Oil Company; formal permission was granted by both institutions. The outcome was compared with the results of other established classical methods. The depths estimated by the two methods are very close (a difference of about 400 m at a depth of about 3.5 km).
After careful assessment, the results of these comparisons proved beneficial and useful. The real data are separated into regional and local components using discrete wavelet analysis. The maximum points of the wavelet transforms (worm diagrams) are also used to interpret the depth of the anomaly relative to adjacent anomalies.
The result obtained by inverting the data using the amplitude parameter has a lower standard deviation than that using the location of the maximum, and is believed to be more accurate. Adding noise increases the standard deviation; however, after adding 20 percent noise to the synthetic data, the error in the estimated depth remained below 6% (still a good result), which shows remarkable stability against noise.
https://jesphys.ut.ac.ir/article_21456_a906c8a56d18110bc44f06d7cd8ecd9f.pdf

Assessment of the Saastamoinen model for tropospheric correction of GNSS observations
Article ID: 21457 (FA). Bijan Shoorcheh, Alireza Ardalan (ORCID 0000-0001-5549-3189). Journal Article.

Propagation of radio waves through the troposphere follows the path of minimum travel time between transmitter and receiver. The troposphere affects the propagation of GNSS signals and causes errors in point positioning owing to (1) delay of the signal and (2) change of the curvature of the signal path. For this reason GNSS users apply models for tropospheric error correction. One of the most commonly used models, applied in both scientific and commercial software packages, is the Saastamoinen model. Since the tropospheric error, unlike the ionospheric error, cannot be removed by dual-frequency receiver observations, applying tropospheric models based on meteorological information can always improve point-positioning accuracy. This becomes even more essential for Precise Point Positioning (PPP), which has real-time applications. Atmospheric delay has two components: the wet delay, caused by atmospheric water vapor, and the dry delay, caused by all other atmospheric constituents. The dry delay can be predicted to better than 1 mm from a surface pressure measurement.
The wet GNSS delay is highly variable and cannot be accurately predicted from surface observations. In this paper the wet Saastamoinen model is evaluated against the Zenith Total Delay (ZTD) computed from one year (2005) of permanent GNSS observations of the National Cartography Center (NCC) of Iran. The wet Saastamoinen model depends on constants that are estimated from local observations. Our evaluation procedure can be summarized as follows:
1- GNSS ZTD is estimated from GNSS observation equations.
2- ZTD is computed from surface pressure, temperature and humidity observations.
3- The Root Mean Square (RMS) difference between the two solutions above is determined.
4- The value of RMS is used as an efficiency test of constant coefficients of the Saastamoinen model.
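For reference, steps 2 and 3 can be sketched with the textbook form of the Saastamoinen zenith delay. The coefficient values are the standard published ones, and the surface values below are illustrative, not NCC data:

```python
import numpy as np

def saastamoinen_ztd(P_hPa, T_K, e_hPa, lat_rad, h_km):
    """Textbook Saastamoinen zenith total delay (metres) from surface
    pressure P, temperature T and water-vapour pressure e; 0.002277,
    1255 and 0.05 are the standard model constants whose adequacy the
    paper tests against GNSS-derived ZTD."""
    f = 1.0 - 0.00266 * np.cos(2.0 * lat_rad) - 0.00028 * h_km
    zhd = 0.002277 * P_hPa / f                          # dry (hydrostatic)
    zwd = 0.002277 * (1255.0 / T_K + 0.05) * e_hPa / f  # wet
    return zhd + zwd

def rms(a, b):
    """Step 3 of the procedure: RMS difference of two ZTD series."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return np.sqrt(np.mean((a - b) ** 2))

# Illustrative surface values for a mid-latitude site at sea level
ztd = saastamoinen_ztd(1013.25, 288.15, 10.0, np.radians(35.7), 0.0)
print(2.0 < ztd < 2.6)   # total zenith delay is typically ~2.3-2.4 m
```

In the evaluation, `ztd` computed this way for each epoch would be differenced against the GNSS-estimated ZTD of step 1, and the RMS of that difference judges the model constants.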
The results of our test computations show that the constant coefficients of the Saastamoinen model are sufficiently accurate for the tropospheric corrections needed in PPP applications, and as such there is no need to estimate those constants from local atmospheric observations.
https://jesphys.ut.ac.ir/article_21457_c94b365f4adc7e93694adb0e0ca5d4b0.pdf

Edge detection of potential field anomalies using local phase filters
Article ID: 21458 (FA). Kamal Alamdar, Abdolhamid Ansari. Journal Article.

Potential fields are due to complex distributions of sources related to susceptibility variations (magnetic field) and mass-density variations (gravity field). Potential field data have some advantages for investigating the lateral heterogeneity of geological bodies, in particular the location of their edges. By geological bodies we mainly mean linear features such as faults, fold axes, dykes, trends and boundaries of geological units, as well as circular features such as craters and buried voids.
Usually, edge detection and edge enhancement techniques are used to distinguish geological bodies of different depths and sizes. Edge detection methods are based on the positions of the maxima or zero crossings associated with vertical-derivative, horizontal-derivative and analytic-signal filters. These methods, known as derivative-based filters, use derivatives of different orders, but some instability may occur for high-order derivatives, since any noise or non-harmonic signal is enhanced along with the desired signal. Another difficulty is that in the filtered image the smaller-amplitude features (which may be of considerable importance) may be hard to discern.
Other edge detection methods are the local phase filters (edge enhancement methods), based on the phase variation of the derivative quantities. The advantage of these filters is the flexibility to produce new, more widely applicable filters with only minor modification. The edge enhancement methods mainly include the tilt angle (TA), the total horizontal derivative of the tilt angle (THDR), the theta map, the hyperbolic tilt angle (HTA) and the normalized horizontal derivative.
The tilt angle filter is not a very accurate method for locating deep sources; a much improved result is obtained using the second vertical derivative of the tilt angle in the space domain. Equations (1) to (4) give the local phase filters.
(1) T = tan⁻¹[(∂f/∂z) / √((∂f/∂x)² + (∂f/∂y)²)]
(2) THDR = √((∂T/∂x)² + (∂T/∂y)²)
(3) θ = cos⁻¹[√((∂f/∂x)² + (∂f/∂y)²) / √((∂f/∂x)² + (∂f/∂y)² + (∂f/∂z)²)]
(4) HTA = Re{tanh⁻¹[(∂f/∂z) / √((∂f/∂x)² + (∂f/∂y)²)]}
where f is the potential field data, T is the tilt angle, and x, y, z are the three Cartesian coordinates. Re in equation (4) denotes the real part of the function.
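A minimal sketch of the tilt angle of equation (1), computed with spectral derivatives over a synthetic point-source field; the source model and grid are illustrative assumptions, not the Abadeh data:

```python
import numpy as np

def tilt_angle(f, dx, dy):
    """Tilt angle: arctan of the vertical derivative over the total
    horizontal derivative, with derivatives computed spectrally
    (valid for a harmonic potential field)."""
    ny, nx = f.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dy)
    KX, KY = np.meshgrid(kx, ky)
    K = np.sqrt(KX**2 + KY**2)
    F = np.fft.fft2(f)
    fx = np.fft.ifft2(1j * KX * F).real   # df/dx
    fy = np.fft.ifft2(1j * KY * F).real   # df/dy
    fz = np.fft.ifft2(K * F).real         # df/dz via the |k| filter
    return np.arctan2(fz, np.hypot(fx, fy))

# Synthetic field of a buried point source: the tilt angle is near
# +90 degrees directly over the source and decreases off its edges.
x = np.arange(-64, 64) * 1.0
X, Y = np.meshgrid(x, x)
depth = 10.0
f = depth / (X**2 + Y**2 + depth**2) ** 1.5
ta = np.degrees(tilt_angle(f, 1.0, 1.0))
print(ta[64, 64] > 80.0)   # strongly positive above the source
```

Because the tilt angle is a ratio of derivatives, its amplitude is bounded (±90 degrees) regardless of source strength, which is why small-amplitude features survive the filtering.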
In this paper we applied these filters to synthetic magnetic data from a cylinder model, as well as to real magnetic data from the Abadeh quadrangle. According to the results obtained, the Dehshir-Baft fault in the northeast of the study area and the ophiolite outcrops in the southeast are enhanced.
https://jesphys.ut.ac.ir/article_21458_63ae606dfd4006cdc32308d1cc94bf78.pdf

Green's function operator matrix of intraplate faults
Article ID: 21459 (FA). Amir Hossein Bohrani, Naser Khaji. Journal Article.

This research presents a numerical tool for estimating the Green's function operator matrix of intraplate faults. With this matrix and its inverse, the spatial distribution of fault slip can be investigated through inverse analysis of geodetic data. This information could be employed to predict the location of future powerful earthquakes. To implement fault sliding in finite element (FE) calculations, the Soft Material Technique is applied as a simple method. In this technique, the fault is modeled by a flexible (very low elastic modulus) thin element. This material not only prevents the fault planes from overlapping but also shows good consistency with the physical behavior of a fault; in other words, it neglects the strength of the neighboring rocks, which are ready to trigger sliding. In this research, without involving the nonlinear contact problem, the two sides of the fault are dislocated as one unit and the ground-surface deformation is measured.
Available geodetic data provide a proper opportunity to detect underground interactions. With m observations, the observation equations can be expressed as:
Bi = Aij Xj + Ei (i = 1, …, m ) (1)
where Bi are the observed surface deformations, Xj the slip components along the fault, Ei the random errors, and Aij the Green's function operators (i.e., the elastic response at point i to a unit source at point j in the model source region). This equation can be rewritten in matrix form as:
B = AX + E (2)
where A is an m×n coefficient matrix.
To minimize the errors, the length of the vector E has to be minimized. It can be shown that the standard inversion equations based on the least-squares method are:
X = (AᵀA)⁻¹AᵀB (3)
where the superscripts T and −1 indicate the transpose and inverse of a matrix, respectively. This relation offers a straightforward way of finding the source vector X. One of the new aspects of the present study is the calculation of the Green's function operator matrix A by means of the FEM, which enables us to overcome the limitations of traditional inverse methods.
How can the Green's function operators be found by FEM? By applying a unit source vector at each degree of freedom, the resulting response of the ground-surface nodes gives the corresponding column of the coefficient matrix A.
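The column-by-column construction can be sketched as follows; here a small fixed linear map stands in for the FEM solve (the function `forward` and its matrix are illustrative placeholders, not the authors' FE model):

```python
import numpy as np

def forward(x):
    """Stand-in for one FE run: maps a fault source vector to surface
    displacements. In the paper this would be a full FEM solve."""
    K = np.array([[2.0, 1.0, 0.0],
                  [1.0, 3.0, 1.0],
                  [0.0, 1.0, 2.0],
                  [1.0, 0.0, 1.0]])
    return K @ x

n = 3  # number of fault degrees of freedom (hypothetical)
# One forward run per unit source vector e_j yields column j of A.
A = np.column_stack([forward(np.eye(n)[:, j]) for j in range(n)])
```

Because A is assembled from n independent forward runs, the construction parallelizes trivially over the fault degrees of freedom.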
The proposed numerical model is first validated against available analytical approaches; then one of the Tehran faults is modeled by this method to calculate the corresponding Green's function operator matrix.
https://jesphys.ut.ac.ir/article_21459_4db92bd3ec3faa33f2a1c3bd8ebed18f.pdf

A new method for height determinations based on solving the geodetic fixed boundary value problem
Abdolreza Safari

The simplest method of height determination is spirit leveling. Leveled height differences are path-dependent, so heights from spirit leveling are not unique. The problem can be solved by converting the path-dependent leveled height differences into unique, path-independent height differences (Vanicek and Krakiwsky, 1986).
The potential difference between two close equipotential surfaces, with g the gravity along the leveling line and dn the leveled height increment, can be written as:

dW = −g dn (1)
In practice, instead of the potential it is better to use geopotential numbers C:

C = W₀ − W_P = ∫ g dn (2)

where W₀ is the geoid potential, W_P the potential at point P, and the integral is taken along the leveling line from the geoid to P.
Having derived geopotential numbers, we can compute various height systems such as dynamic height, orthometric height and normal height. According to (1), with gravity measured along the leveling line, the potential difference as a path-independent quantity can be determined. In this method we can only compute geopotential numbers for points where we can establish a leveling path.
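Numerically, Eq. (2) reduces to summing gravity times leveled height increments along the line. A minimal NumPy sketch with hypothetical segment values (the abstract gives no data):

```python
import numpy as np

# Hypothetical leveling line: leveled height differences dn_i (m) and
# mean gravity g_i (m/s^2) measured on each segment.
dn = np.array([1.2, 0.8, 1.5, 0.5])
g  = np.array([9.8061, 9.8060, 9.8058, 9.8057])

# Geopotential number of Eq. (2): C ~ sum of g_i * dn_i along the line.
C = np.sum(g * dn)        # m^2/s^2; 10 m^2/s^2 = 1 geopotential unit (GPU)
C_gpu = C / 10.0          # geopotential number in GPU
```

The sum is path-independent up to measurement error, which is exactly why geopotential numbers are preferred over raw leveled heights.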
On the other hand, with the geoid height at hand and geodetic heights available from GPS, we can compute orthometric heights. With a computation of mean gravity, an orthometric height can be converted into a geopotential number. But there are still some open problems in the computation of the geoid and of mean gravity. The aim of this paper is to present a new method for the computation of geopotential numbers based on solving the fixed geodetic boundary value problem.
With GPS coordinates available for points on the Earth's surface, the outer boundary of the Earth can be regarded as a known, fixed boundary. With boundary observations on the Earth's surface at hand, we deal with a fixed geodetic boundary value problem, defined as follows:

ΔW = −4πGρ + 2ω²  (Poisson equation for the gravity potential)
‖∇W‖ = Γ     on the Earth's surface
W = W₀      on the geoid

where W is the gravity potential of the Earth, Γ the norm of the gravity vector on the Earth's surface, ρ the mass density, ω the angular velocity of the Earth's rotation, and W₀ the geoid potential.
The first step towards the solution of the proposed fixed geodetic boundary value problem is the linearization of the problem. After linearization, a linear boundary value problem is obtained.
In this paper we propose a new method for solving the linear boundary value problem based on harmonic splines. The main steps of the new method are as follows:
- Application of the ellipsoidal harmonic expansion complete to degree and order 360, and of the ellipsoidal centrifugal field, to remove the effect of the global gravity field from the gravity intensity at the surface of the Earth.
- Removal from the gravity intensity at the surface of the Earth of the effect of residual masses within a radius of 55 km around the computational point.
- Solution of the linear boundary value problem based on harmonic splines.
- Restoration of the removed effects in order to compute the potential on the surface of the Earth.
- Subtraction of the geoid potential from the computed potential on the Earth's surface in order to obtain geopotential numbers.
- Computation of various height systems.
The methodology has been successfully tested by computing various height systems in the central part of Iran.
https://jesphys.ut.ac.ir/article_21460_c4974f7e6230ba2f4cc89ed4a90f394d.pdf

A new method for estimating Rational Function Coefficients
Mohammad Ali Sharifi, Babak Amjadiparvar, Mosa Sheybani

Rational functions are of great interest to engineers and geoscientists. The rational polynomial coefficient (RPC) model, a generalized sensor model, has been introduced as an alternative to the rigorous sensor model of satellite imaging.
Numerical instability of the normal equations is the main obstacle to the implementation of these functions. In practice, estimating rational function coefficients from available control points is mostly an ill-posed problem: the condition number of the normal matrix in the linear parametric model is relatively large. Therefore, a regularization method has to be employed to stabilize the equations. Implementation of the regularization technique improves the solution of the linear parametric model. The optimum value of the regularization parameter is estimated using the generalized cross-validation (GCV) technique.
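Tikhonov regularization with a GCV-chosen parameter can be sketched as follows; the ill-conditioned matrix, noise level, and parameter grid are hypothetical stand-ins, not the paper's RPC design matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 50, 20
# Ill-conditioned design matrix: columns scaled by 1/j.
A = rng.standard_normal((m, n)) @ np.diag(1.0 / np.arange(1, n + 1))
x_true = rng.standard_normal(n)
b = A @ x_true + 0.01 * rng.standard_normal(m)

def gcv(lam):
    """Generalized cross-validation score for Tikhonov parameter lam."""
    H = A @ np.linalg.solve(A.T @ A + lam * np.eye(n), A.T)  # influence matrix
    r = b - H @ b
    return m * (r @ r) / (m - np.trace(H)) ** 2

# Pick the regularization parameter minimizing the GCV score on a log grid.
lams = np.logspace(-8, 2, 60)
lam_best = lams[np.argmin([gcv(l) for l in lams])]
x_reg = np.linalg.solve(A.T @ A + lam_best * np.eye(n), A.T @ b)
```

GCV needs no prior estimate of the noise variance, which is what makes it attractive when control-point accuracy is unknown.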
Moreover, simplification of the observation equations leads to a linear observation model, which is the most frequently used approach for estimating the unknown coefficients. Rigorous modeling, however, is recast as a combined adjustment model. Due to the nonlinearity of the combined model, initial values of the unknown parameters are needed; the initialization can be done using the parameters estimated from the linear parametric model.
Here, rational function coefficients are estimated using a combined model. Furthermore, the Tikhonov regularization method is employed for regularization of the problem in the combined model. Five different methods are implemented and their performances are compared.
Comparison of the root mean squared errors shows that implementation of the combined model with an appropriate regularization parameter significantly improves the accuracy of the estimated coefficients. The regularized combined model gives the minimum root mean squared error, about half that of the linear parametric model. The proposed method outperforms the existing ones from both an accuracy and a computational point of view.
https://jesphys.ut.ac.ir/article_21461_bb449d1440e25a8eacb5d53e947844b4.pdf

AMS study of the Dezou and Dahou series in the Kerman-Zarand region
Mohammad Hamedpour, Jan Piper, Ali Kheradmand, Peyman Rezaei, Mohsen Mortazavi

Introduction: Red beds have been widely studied by palaeomagnetic methods for what they can tell us about the history of sedimentary basins and their subsequent deformation (Collinson, 1974; Turner, 1979a, b, 1981; McCabe and Elmore, 1989; Elmore et al., 1993, 2000). This study has investigated the palaeomagnetism of the Cambrian red sediments (Dezou and Dahou Formations) of the eastern margin of Central Iran. Davoudzadeh and Schmidt (1984) have done some work on the region that includes these rocks and discuss the rotations of the Central Iran microplates. The aim of this study is to use the AMS results to investigate the palaeo force field on the rocks located at the northern side of the Persian Gulf.
Geological setting: The sampling sites of this study are situated in the Cambrian sediments of the eastern margin of Central Iran. These are the largest structural sedimentary units and are composed of complex geological structures. In this region, igneous, metamorphic, and sedimentary rocks of Precambrian to Quaternary age are preserved, with Mesozoic rocks forming most of the outcrops. Most structural elements at different scales, such as faults and folds, are related to the tectonic activities of the Mesozoic and later eras. The most important structural element of the region is the Koohbanan fault, with a NW-SE trend. Like most other important faults of Central Iran, this fault dates back to the late Precambrian Era.
The Dezou Formation is bounded below by a fault against the Rezou series (Late Precambrian) and is separated by an unconformity from the Dahou Formation above in the study area (Gazooie village, 30.8° N, 56.7° E; sites 13-14 and 16-20). It has a thickness of 380 m comprising three major parts.
The Dahou Formation in the study area (Gazooie village, 30.8° N, 56.7° E; site 20, figure 4) rests with a basal breccia conformably on dolomites and limestones of the Dezou Formation. A middle to late Early Cambrian age was inferred from its stratigraphic position. It is unconformably overlain by the Koohbanan Formation.
Rock magnetic studies: In the present study, rock magnetic experiments including IRM acquisition, backfield IRM, thermomagnetic, hysteresis, and AMS measurements have been performed on samples of the Dezou and Dahou Group sediments. Some of these experiments yield results that are grain-size dependent, others are mineral-type dependent, and some depend on both properties.
Jelinek (1981) and Hrouda (1982) proposed the parameters T and Pj for the shape and the corrected degree of anisotropy (Tarling and Hrouda, 1993). A plot of T against Pj, with Pj > 1 and −1 < T < 1, provides information about both the shape and the degree of anisotropy. The T-Pj plot is shown for all sampling sites in the Dahou and Dezou Formations, and also for site 20 alone. Most samples have less than 10% anisotropy, but some tend to be more anisotropic.
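The Jelinek parameters follow directly from the logarithms of the principal susceptibilities; a short sketch (the sample values below are hypothetical, chosen to illustrate an oblate fabric):

```python
import numpy as np

def jelinek_T_Pj(k1, k2, k3):
    """Shape parameter T and corrected degree of anisotropy Pj
    (Jelinek, 1981) from principal susceptibilities k1 >= k2 >= k3."""
    e1, e2, e3 = np.log(k1), np.log(k2), np.log(k3)
    em = (e1 + e2 + e3) / 3.0
    Pj = np.exp(np.sqrt(2.0 * ((e1 - em) ** 2
                               + (e2 - em) ** 2
                               + (e3 - em) ** 2)))
    T = (2.0 * e2 - e1 - e3) / (e1 - e3)
    return T, Pj

# Hypothetical oblate fabric: intermediate axis close to the maximum axis.
T, Pj = jelinek_T_Pj(1.05, 1.04, 0.91)
```

T > 0 indicates an oblate (disc-like) fabric and T < 0 a prolate one, while Pj = 1 corresponds to a perfectly isotropic sample.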
Curie temperatures are mostly between 675 and 700 °C and are attributed to specularite and hematite pigments. In a few samples, magnetite with a Curie temperature of about 570 °C can be seen as well. The clearest indication of magnetite here is the RM ratio (the value of Ms at 100 °C on the cooling curve divided by the value on the heating curve). An RM ratio < 1 shows that magnetite has been oxidized to hematite, which has low Ms values.
The rock magnetic studies agree that both specular hematite and magnetite can occur in these sediments, although the effect of hematite is usually dominant.
Site 13, sampled in red sandstone, shows paramagnetic mineral content and hematite pigment with a Curie temperature of 680 °C and a saturation field of 2000 mT. A low coercivity of 50 mT indicates a small amount of magnetite.
Site 14, also sampled in red sandstone, shows a high paramagnetic mineral content and a finer grain size of hematite pigments (compared to site 13), with a Curie temperature of 680 °C and a saturation field of 3000 mT. A coercivity lower than at site 13 likewise indicates a small amount of magnetite.
Site 15, sampled in dolomite breccias, shows very scattered IRM, hysteresis, and Curie curves; the overall appearance suggests predominantly diamagnetic minerals.
Site 16, sampled in red sandstone at its contact with a dyke, shows an overprinting magnetite mineral, evident from the Curie temperature of 580 °C and the very low saturation field in the hysteresis and IRM curves.
Site 17, sampled in dolomite just above the dyke, also shows overprinting magnetite; however, a predominant content of diamagnetic minerals is clear.
Sites 18 and 19, sampled in red sandstone and red shale respectively, show similar rock magnetic behaviour: paramagnetic mineral content plus pigment and specular hematite, with a high saturation field of 3000 mT and a backfield of 500 mT.
Red sediments normally show a stable rock magnetic content. The presence of hematite in these samples, evident from the rock magnetic results, also confirms the stability of the magnetic properties, so we can rely on the AMS directional analysis.
Conclusion: Anisotropy of magnetic susceptibility (AMS) studies on Precambrian to Early Cambrian red sediments on the north side of the Kohbanan fault (Iran) show a predominant tectonic fabric with low-magnitude oblate and prolate shapes. The direction of the maximum AMS axis lies parallel to the fold axis of the beds, and the tectonic forces therefore probably correspond to the rotation of the Lut plate with respect to Central Iran.
https://jesphys.ut.ac.ir/article_21462_65a7977b6c39b5b3d47f6f5249d0a1c5.pdf

A method for automatic inversion of magnetic data, a case study on Makran subduction zone, South-eastern Iran
Loghman Namaki, Mohammad Kazem Hafizi, Mahmod Mirzaei

Magnetic data inversion has a prominent role in geo-structural investigations. Since many geological structures, such as faults, dykes, and contacts, are elongated in a specific direction, 2D algorithms have become widespread in recent years. By modifying existing 3D inversion algorithms, a method has been proposed for 2D inversion of profile magnetic data based on the physical parameter distribution method. The subsurface is divided into a large number of infinitely long horizontal prisms with square cross sections and unknown susceptibilities.
A multi-term objective function is defined and an under-determined system of equations is solved to minimize it. The solution is the magnetic susceptibility of the prisms inside the earth. The regularization parameter makes a trade-off between the data-error term and the regularization term. The regularization term contains a model-length term, which controls the total model area in 2D problems, and a first-order difference term, which ensures that the reconstructed model is smooth. Weighting coefficients have been introduced for both terms to impose smallness as well as smoothness on the recovered model in different directions.
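A minimal sketch of such a smallness-plus-smoothness objective in NumPy (the forward matrix, weights, and regularization parameter below are hypothetical placeholders, not the paper's prism sensitivities):

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 15, 40
G = rng.standard_normal((m, n))     # stand-in forward (sensitivity) matrix
d = G @ np.ones(n)                  # hypothetical observed data

mu = 0.1                            # regularization (trade-off) parameter
alpha_s, alpha_x = 1.0, 10.0        # smallness / smoothness weights
D = np.diff(np.eye(n), axis=0)      # first-order difference operator

# Minimize ||G m - d||^2 + mu * (alpha_s ||m||^2 + alpha_x ||D m||^2)
# via its normal equations.
lhs = G.T @ G + mu * (alpha_s * np.eye(n) + alpha_x * D.T @ D)
model = np.linalg.solve(lhs, G.T @ d)
```

The regularization makes the under-determined system (15 data, 40 unknowns) uniquely solvable, with alpha_x controlling how strongly jumps between adjacent cells are penalized.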
To use extra information that may be available in the area, such as drilling results, other geophysical studies, and the interpreter's conception of the geological structures, weighting matrices have been inserted into the objective function. As negative susceptibility has no physical or geological meaning, we used a positivity constraint inside the inversion equations to prevent negative susceptibility values. With all of this in hand, we expect the final model to have a reasonable shape and to better match the true earth. In the absence of any extra information about the geological structure of the studied area, acceptable solutions are also obtained; this is the main feature of the physical distribution method. The algorithm uses a Newton step to minimize the objective function.
A MATLAB code was prepared to implement the algorithm. As the forward mapping matrix is sometimes very large, a preconditioned conjugate gradient routine was used as the main solver for the linear equations that appear in the Newton minimization, which speeds up the algorithm considerably. The algorithm was tested on two synthetic examples: a dipping dyke and a faulted dyke model. The results show that the method is capable of generating a smooth representation of geological structures. To apply the algorithm to real data, data from a long aeromagnetic flight line located in Makran were inverted to model geological structures in the area. Makran is an active subduction zone in SE Iran: subduction of the Oman oceanic crust beneath the Lut continental lithosphere has created a typical trench-arc complex in the area. The main target along this profile was the Jazmurian Depression, which has been shown to be a fore-arc basin whose magnetic basement is covered by thick sedimentary rocks.
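For illustration, a plain (unpreconditioned) conjugate gradient solver can be written in a few lines; this is a generic sketch in Python rather than the authors' MATLAB routine, and it uses only matrix-vector products so the full matrix never needs to be stored:

```python
import numpy as np

def cg(matvec, b, tol=1e-10, maxiter=500):
    """Conjugate gradients for a symmetric positive-definite system,
    given only a matrix-vector product callback."""
    x = np.zeros_like(b)
    r = b - matvec(x)          # initial residual
    p = r.copy()               # initial search direction
    rs = r @ r
    for _ in range(maxiter):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)  # optimal step along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Small SPD system standing in for the Newton-step normal equations.
M = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = cg(lambda v: M @ v, b)
```

A preconditioner, as used in the paper, simply replaces the residual with a preconditioned residual at each iteration to reduce the effective condition number.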
Ophiolite and ultramafic rock outcrops in the Makran ranges, which produce high-frequency anomalies, suggest that the basement might have the same composition. The results indicate that there is a trapped remnant of oceanic crust at the basement of the Jazmurian Depression. Geological hypotheses suggest that this basement was formed under an extensional regime before or at the time of the subduction.
Ophiolite and ultramafic rock outcrops in the Makran ranges, which produce high-frequency anomalies, suggest that the basement might have the same composition. The results prove that a remnant of trapped oceanic crust lies at the basement of the Jazmurian Depression. Geological hypotheses suggest that this basement formed under an extensional regime before or at the time of the subduction.

https://jesphys.ut.ac.ir/article_21463_5efd9252f2286443f7916022a0fe23ca.pdf

Institute of Geophysics, University of Tehran. Journal of the Earth and Space Physics, ISSN 2538-371X, Vol. 36, No. 1, 2010-04-21.
Mathematical modeling on the Karkheh reservoir stresses and its application to the Dalpari fault (article 21464, FA)
Authors: Maryam Hodhodi, Nasrolah Kamalian. Journal Article.

In this research, a new mathematical model of strength changes due to reservoir elastic stresses across a preexisting fault plane is introduced. The method has been applied to the Dalpari fault, one of the potential seismic sources in the vicinity of the Karkheh reservoir. In this method the distribution of total stress across the fault cannot be determined because the initial stress is unknown; the pore pressure due to the reservoir is also not considered. The mathematical modeling method is briefly explained below.
The lake is first divided into small rectangles of sides a and b by two sets of orthogonal straight lines, one set conveniently east-west and the other north-south. The mean water depth h in each rectangle of area S is estimated, and the water pressure on the floor of the rectangle is replaced by a vertical force F = ρgSh at the center of the rectangle. Since smaller rectangles lead to a more precise model, each rectangle in which h varies appreciably is subdivided. The water pressure of the lake is thus simulated by a set of point forces F applied in the −x3 direction, acting at the rectangle centers. A mathematical model of the single force F in the elastostatic field is then defined using the Dirac delta function, representing the point force as a concentrated body-force distribution.
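The discretization above can be sketched directly: each rectangle contributes a downward force F = ρgSh. The grid of depths and cell dimensions in this example are hypothetical illustrative values.

```python
import numpy as np

RHO_WATER = 1000.0  # water density, kg/m^3
G_ACC = 9.81        # gravitational acceleration, m/s^2

def point_forces(depths, a, b):
    """Replace the water pressure on each a-by-b rectangle of the lake
    floor by a vertical point force F = rho * g * S * h at its center.
    Returns the force magnitudes in newtons (applied in -x3)."""
    S = a * b                       # rectangle area, m^2
    h = np.asarray(depths, float)   # mean water depth per rectangle, m
    return RHO_WATER * G_ACC * S * h

# A hypothetical 2 x 3 grid of mean depths (m) on 100 m x 100 m cells.
F = point_forces([[5.0, 12.0, 8.0],
                  [3.0, 20.0, 10.0]], 100.0, 100.0)
```

Rectangles where the depth changes rapidly would simply be subdivided and the same formula applied to each part.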
The displacement component at a point P due to the point force, in a given direction, is obtained from the corresponding elastostatic point-force solution:
where j = 3 for the water pressure of the lake, and the distance from the origin to the point P appears in the solution. The general three-dimensional relationships between the nine Cartesian strain components and the three Cartesian displacements are given by the symmetric displacement-gradient definition of infinitesimal strain.
These nine terms constitute the infinitesimal strain tensor, a symmetric tensor with six independent components. The stress tensor then follows from the stress-strain relationship based on the constitutive law known as Hooke's law.
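The equations stripped from this abstract are the standard infinitesimal-strain definition and the isotropic Hooke's law; in the usual index notation (with λ and μ the Lamé constants) they read:

```latex
\varepsilon_{ij} = \frac{1}{2}\left(\frac{\partial u_i}{\partial x_j}
                 + \frac{\partial u_j}{\partial x_i}\right),
\qquad
\sigma_{ij} = \lambda\,\varepsilon_{kk}\,\delta_{ij} + 2\mu\,\varepsilon_{ij}.
```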
Using the stress tensor and well-known relationships of elastostatic theory, the various stress parameters, such as the shear and normal stresses due to the reservoir, can be determined at the point P on a plane with normal n. In this way we obtain the strength values due to the reservoir across the specified preexisting fault plane. The shear stress and the strength due to the reservoir across the preexisting fault plane then follow by resolving the reservoir-induced traction on that plane.
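Resolving a stress tensor into normal and shear components on a plane with normal n, as described above, can be sketched as follows; the uniaxial stress state in the example is a hypothetical check case, not the reservoir field.

```python
import numpy as np

def resolve_on_plane(sigma, n):
    """Decompose a Cauchy stress tensor into normal stress and shear
    stress magnitude on a plane with normal n (traction t = sigma @ n).
    Sign convention: tension positive."""
    n = np.asarray(n, float) / np.linalg.norm(n)  # ensure unit normal
    t = np.asarray(sigma, float) @ n              # traction vector
    s_n = float(t @ n)                            # normal stress
    tau_vec = t - s_n * n                         # in-plane shear part
    return s_n, float(np.linalg.norm(tau_vec))

# Uniaxial compression of 1 MPa along x3, plane dipping 45 degrees:
sigma = np.diag([0.0, 0.0, -1.0e6])
n45 = [0.0, np.sin(np.pi / 4), np.cos(np.pi / 4)]
s_n, tau = resolve_on_plane(sigma, n45)
```

For this check case both the normal and shear stresses have magnitude 0.5 MPa, as expected for a 45-degree plane under uniaxial load.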
Here θ is the angle, measured in the preexisting fault plane, between the shear stresses resolved on the plane due to the reservoir and due to ambient causes; it is measured from the direction of the latter. The coefficient of friction across the preexisting fault plane also enters. Earthquakes near new reservoirs are classified into the following three cases on the basis of positive, negative, and zero values of this quantity (denoted Δ below); on this classification, a reservoir may stabilize some parts and destabilize other parts of the same nearby fault surface.
Case I: induced or reservoir-assisted natural earthquakes; Δ > 0. This situation arises when 0° ≤ θ < 90° and the reservoir stresses have a net destabilizing influence on parts of the fault plane where the reservoir-induced shear stress has a suitably large component in the same direction as the initial tectonic shear stress; the earthquake therefore occurs earlier than its natural time.
Case II: natural tectonic earthquakes despite the inhibiting influence of the new reservoir; Δ < 0. This situation arises when 90° < θ ≤ 180° and the reservoir stresses have a net stabilizing influence on parts of the fault plane where a component of the reservoir-induced shear stress acts opposite to the initial tectonic shear stress; the earthquake therefore occurs later than its natural time.
Case III: natural tectonic earthquakes with no influence of the reservoir; Δ = 0. The reservoir exerts neither a stabilizing nor a destabilizing influence on the fault plane, and the earthquake occurs neither hindered nor assisted by the reservoir, at its natural time. Hence there is again a natural earthquake near the reservoir.
We have applied these concepts to the Karkheh reservoir and discussed fault-stability analysis on the Dalpari fault plane. For this, we identify the strength equation as the main operational relation and list the inputs required for its evaluation. First, the location and orientation of the segment of the Dalpari fault in the vicinity of the reservoir must be specified, and estimates of the coefficient of friction, Poisson's ratio, and the shear and Young's moduli of the crustal rocks beneath and around the reservoir should be available. Second, we must specify the direction of the resolved shear stress on the fault due to ambient crustal stresses; if a fault-plane solution for the earthquake is available, or can be modeled, the direction of slip on the chosen nodal plane may be taken as the estimate of the desired shear-stress direction. Third, we need computed estimates of the elastic stresses due to the Karkheh reservoir on the Dalpari fault plane. In the Zagros, high-angle reverse faults (dips > 30°) appear to be more common than low-angle thrusts (dips < 30°); the dip distribution peaks in the range 30°-60°, with very few nodal-plane dips corresponding to low-angle thrusts, which would plot in the ranges 0°-30° and 60°-90°. Seismogenic depths in this region vary from 4 to 20 km, with typical uncertainties of ±4 km. Consequently, because reliable information about the Dalpari fault-plane parameters is unavailable, we considered dips between 15° and 60° and a maximum depth of 15 km in the analysis.
The Karkheh reservoir is situated on the hillwall part of the Dalpari fault. The analysis shows that the reservoir may exert a stabilizing influence and delay the time of earthquake occurrence associated with this segment. These observations agree with the theoretical stress analysis for this segment, and imply greater crustal stability at all dips of the surface segment of the Dalpari fault. Induced earthquakes are possible on the 15°-dip hidden segment of this fault at depths shallower than 2.5 km, extending to near the dam site. In this segment the maximum reservoir-induced strength change in the direction of the fault slip vector is estimated at about 0.12 bar at a depth of ~1 km.

https://jesphys.ut.ac.ir/article_21464_bf95f3f6d0df7bf2233b91f19d4cbb62.pdf

Institute of Geophysics, University of Tehran. Journal of the Earth and Space Physics, ISSN 2538-371X, Vol. 36, No. 1, 2010-04-21.
Relation between magnetotail variations and auroral activities during substorms (article 21465, FA)
Authors: Baratali Feizabady, Mahmod Mirzaei, Naser Hosseinzadeh Guya. Journal Article.

The interaction between the solar wind and the earth's magnetosphere results in the transport of magnetic flux into the magnetotail; to avoid a continued buildup in the tail, there is a return convection of magnetic flux from the magnetotail into the night-side dipole-like region and from there to the day side. Since energetic plasma accompanies this magnetic flux, electric currents exist that disturb the magnetic intensity at the earth's surface (substorms) and, particularly through interaction with the earth's ionosphere, produce auroral activities.
We have compared magnetotail variations with auroral activities during three substorms using GEOTAIL, Polar UVI, and 4-Cluster spacecraft data. In the substorm event of 15 December 1996, auroral breakups and intensifications were highly correlated with fast plasma flows, with variations in the north-south magnetic field, and with the total pressure in the magnetotail. GEOTAIL was located around X ~ −21, and several fast tailward flows were observed in the early expansion phase, with southward magnetic field and total-pressure enhancement, associated with plasmoids. These flows were observed simultaneously with, or within 1 min of, auroral breakups or pseudobreakups. In the late expansion or recovery phase, some fast earthward flows were observed with small auroral intensifications. Further investigation implies that the total pressure in the magnetotail decreases significantly during auroral breakups or poleward expansion of the auroral bulge. The duration of the expansion and the maximum size of the auroral bulge are closely correlated with the duration and amount of the total-pressure decrease in the magnetotail, respectively. These results also imply that substorms are the response of the magnetosphere to the solar wind and its frozen-in magnetic field. In the substorm event of September 2002, investigation of Cluster data likewise shows that reversals in the direction of fast plasma flow are highly correlated with total-pressure variations in the magnetotail and with magnetic disturbances.
Review of GEOTAIL and Polar UVI data for the December 1996 and March 1997 substorms shows a high correlation between changes in the fast plasma flows, the north-south magnetic field, and the magnetotail's total pressure. Flows begin tailward and then return earthward, which implies that these events, involving retreats of the near-Earth neutral line, are periodic.
The results show that the connection between the region of auroral activity and substorms depends on energy dissipation in the magnetotail.
The 4-Cluster data and ground magnetograms for the September 2002 substorm also show that magnetic energy is stored during substorms and released when the fast flux tubes reverse direction. These results can be used to describe substorms in the following stages:
- Presence of a strong southward component of the interplanetary magnetic field (IMF), increasing magnetic reconnection and the transport of magnetic flux into the magnetotail.
- Transfer of magnetic-flux pressure from the tail to the night side and constriction of the magnetosphere.
- Creation of a new magnetic-flux structure in the tail.
- Increase in the magnetic-flux pressure in the tail lobes, thinning of the plasma sheet, and formation of tail-like magnetic field lines.
- Thinning of the plasma sheet sufficiently to break the magnetohydrodynamic (MHD) conditions, and renewed onset of magnetic reconnection near the near-Earth neutral line.
- Injection of energy and plasma down the tail and back toward the night side of the earth, creating the substorm's auroral activities.

https://jesphys.ut.ac.ir/article_21465_9da44e270c7b7f2d3cbe020441da0fb0.pdf

Institute of Geophysics, University of Tehran. Journal of the Earth and Space Physics, ISSN 2538-371X, Vol. 36, No. 1, 2010-04-21.
Review of cumulus convective parameterization schemes in large and meso-scale models (article 21466, FA)
Authors: Maryam Gharaylo, Majid Mazraeh Farahani, Abbas Ali Aliakbari Bidokhti. Journal Article.

Cumulus convection, as a meso-scale phenomenon, results in the release of latent heat and the vertical transport of energy, which can in part affect the large-scale fields of energy, humidity, and momentum of the atmosphere. Since current numerical models are unable to resolve the individual cumulus convective clouds that are important from a meteorological point of view, some form of approximation of them must be applied in the models; this approximation is known as the parameterization and closure problem. The fundamental idea of these methods is to approximate the small- and meso-scale parts of the variables by their large-scale parts. Furthermore, the large-scale part of the fields tends to damp the small- and meso-scale parts, including cumulus convective activity.
Cumulus convection parameterization schemes (CPSs) are divided into schemes for large-scale models and schemes for mesoscale models. In this division, mesoscale models refer to models with grid spacing between 10 and 50 km and a time step of the order of several minutes, while large-scale models have grid spacing larger than 50 km and a longer time step. In this study, the CPSs used in large- and meso-scale models are reviewed generally and chronologically, from the early 1960s up to recent schemes. Well-known large-scale schemes are the convective-adjustment schemes and the Kuo-type schemes, which use moisture convergence to determine the location and intensity of convection. In convective adjustment it is assumed that, under some constraints, a critical state of the large-scale thermodynamic field is adjusted to a new stable state; in Kuo-type schemes, convection depends on the moisture supplied by large-scale flow convergence and boundary-layer turbulence. In developing schemes for mesoscale models, three approaches have been taken (Molinari, 1993): the traditional, the fully explicit, and the hybrid approaches. CPSs, which are still needed in contemporary numerical weather prediction models to simulate convection, rely on a realistic cloud model as one of their crucial parts. The cloud model is a fundamental determinant of the vertical mass flux, the heating and drying profiles, and the precipitation rate.
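The essence of the Kuo-type closure mentioned above can be sketched in a few lines: a fraction of the large-scale moisture supply is rained out and the remainder moistens the column. The moisture-convergence value and the partitioning parameter b in this toy example are hypothetical illustrative inputs, not a working scheme.

```python
def kuo_partition(moisture_convergence, b=0.3):
    """Partition the large-scale moisture supply (kg m^-2 s^-1) into
    precipitation and column moistening, following the classic Kuo-type
    closure idea: a fraction (1 - b) precipitates, a fraction b moistens."""
    precip = (1.0 - b) * moisture_convergence
    moistening = b * moisture_convergence
    return precip, moistening

# Hypothetical moisture convergence of 2e-4 kg m^-2 s^-1:
p, q = kuo_partition(2.0e-4)
```

Real Kuo-type schemes additionally prescribe vertical heating and drying profiles from a cloud model, which is why the cloud model is described above as a crucial part of any CPS.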
Although there has been considerable progress in the modeling of cumulus convective systems, because of the complexity of the problem, especially in meso-scale models, it remains an open problem in atmospheric modeling and an active field of research for meteorologists.

https://jesphys.ut.ac.ir/article_21466_ca3ff64458eae1acfd22b4be81954b45.pdf