Institute of Geophysics, University of Tehran
Journal of the Earth and Space Physics, ISSN 2538-371X, Vol. 37, No. 4, 2012
A suggestion on determining the gravity field at sea through satellite altimetry observations; case study: the gravity field in the Oman Sea
Pages 1-16. DOI: 10.22059/jesphys.2012.24298
Abdolrahim Askari, Vahid Ebrahimzade Ardestani, Alireza Ardalan (ORCID 0000-0001-5549-3189)

Measuring the gravity field at sea suffers from low accuracy because of the accelerations imparted to the measuring system by waves and ship motion: according to Einstein's equivalence principle, a gravimeter cannot separate gravitational acceleration from other accelerations. Shipborne gravimetry observations, owing to the oscillations and accelerations of the ship and to larger instrument errors at sea, are therefore less accurate; moreover, given the extent of the seas and the low speed of survey vessels, complete shipborne coverage would take a very long time and is probably not economical. One key requirement in gravity measurement is instrument stability over time. At sea, the measurement is made on a moving platform, which introduces errors into the observations, chiefly: (a) instrument errors, (b) drift error, (c) error in the Eötvös correction, (d) error in the vertical-acceleration correction, and (e) error in the horizontal-acceleration correction. Much effort has therefore been made by researchers in the field to augment marine gravity observations and to find other ways of providing the required data. Since the advent of satellite altimetry, much attention has been paid to producing gravity data from it. The usual approach employs the Stokes or Vening Meinesz integrals, or the inverses of their solutions, to produce gravity anomalies.
In this article a different method is presented for producing gravity anomalies at sea from satellite altimetry. The case study, carried out in the Oman Sea, involves the following stages:
1. Computation of the mean sea level (MSL) from satellite altimetry observations.
2. Determination of the sea surface topography (SST) from oceanographic studies.
3. Conversion of the MSL to geoidal undulations by differencing the SST and the MSL.
4. Conversion of the geoidal undulations into potential values at the surface of the reference ellipsoid using the inverse Bruns formula.
5. Removal of the effect of the ellipsoidal harmonic expansion up to degree and order 360 at the computation point.
6. Upward continuation of the incremental gravity potential obtained from the removal step to gravity intensity at the point of interest, using the gradient of the ellipsoidal Abel-Poisson integral.
7. Restoration of the effect removed in step 5 at the computation point of step 6.
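The first stages of the pipeline can be sketched numerically: the geoidal undulation is the difference between the MSL height and the SST, and the inverse Bruns formula converts it to a disturbing potential. This is a minimal illustration with hypothetical sample values, not the authors' implementation or Oman Sea data.

```python
# Sketch of steps 1-4: MSL and SST to geoidal undulation, then to a
# disturbing potential via the inverse Bruns formula T = gamma * N.
# All numeric sample values below are hypothetical.

GAMMA = 9.7803267715  # approximate normal gravity on the ellipsoid, m/s^2

def geoid_undulation(msl: float, sst: float) -> float:
    """Step 3: N = MSL height - sea surface topography (metres)."""
    return msl - sst

def disturbing_potential(undulation: float, gamma: float = GAMMA) -> float:
    """Step 4: inverse Bruns formula, T = gamma * N (m^2/s^2)."""
    return gamma * undulation

msl, sst = -18.40, 0.35          # metres, illustrative only
n = geoid_undulation(msl, sst)
t = disturbing_potential(n)
print(f"N = {n:.2f} m, T = {t:.2f} m^2/s^2")
```

The removal, upward-continuation and restoration steps (5-7) operate on T computed this way over the whole study area.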
https://jesphys.ut.ac.ir/article_24298_f32bebeee1f1ffa9eec8e07fde1bef13.pdf

Institute of Geophysics, University of Tehran
Journal of the Earth and Space Physics, ISSN 2538-371X, Vol. 37, No. 4, 2012
Separation of the gravity anomaly using discrete wavelet analysis and a comparison with other classical methods
Pages 17-35. DOI: 10.22059/jesphys.2012.24299
Mehrdad Alimoradiyan, Hossein Zomorrodian, Arash Motasharreie

Geophysical data are always affected by different sources of anomaly. These sources fall into three groups: relatively deep sources, commonly called regional effects; near-surface sources, or local effects; and high-frequency noise. The only common basis for separating the anomaly is the signal frequency. Several classical techniques exist in the literature, such as polynomial fitting, the Griffin method, moving-average methods and frequency-domain filtering.
In this paper a method based on discrete wavelet filtering is applied. The discrete wavelet transform is computed with the MATLAB Wavelet Toolbox using the rbio6.8 reverse biorthogonal mother wavelet. The transform decomposes the signal into two parts: a low-frequency part (the approximation) and a high-frequency part (the details). These parts can be decomposed further into two or more levels, depending on the building-block frequencies of the signal.
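The splitting into a low-frequency approximation and high-frequency details can be illustrated with a one-level Haar transform, chosen here only for brevity; the paper itself uses the rbio6.8 kernel from the MATLAB Wavelet Toolbox, and the signal below is a toy example, not the paper's data.

```python
import math

def haar_dwt(signal):
    """One level of a Haar DWT: returns (approximation, detail).
    Illustrative stand-in for the rbio6.8 decomposition in the paper."""
    s = math.sqrt(2.0)
    approx = [(signal[i] + signal[i + 1]) / s for i in range(0, len(signal) - 1, 2)]
    detail = [(signal[i] - signal[i + 1]) / s for i in range(0, len(signal) - 1, 2)]
    return approx, detail

def haar_idwt(approx, detail):
    """Inverse transform: perfect reconstruction of an even-length signal."""
    s = math.sqrt(2.0)
    out = []
    for a, d in zip(approx, detail):
        out.extend([(a + d) / s, (a - d) / s])
    return out

# A smooth trend (regional analogue) plus an alternating oscillation (local).
sig = [i + ((-1) ** i) * 0.5 for i in range(8)]
a1, d1 = haar_dwt(sig)      # a1 carries the trend, d1 the oscillation
a2, d2 = haar_dwt(a1)       # either part can be decomposed again for deeper levels
rec = haar_idwt(a1, d1)     # reconstructs sig
```

Zeroing or thresholding `d1` before reconstruction is the denoising step the paper automates during the separation.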
Forward modeling of a sphere is used to test the separation algorithm. Synthetic data are calculated for two spheres buried at different depths, and white noise with a frequency equivalent to the sampling interval is added to the synthetic data. The proposed separation method gives appropriate results in comparison with the other separation methods. One of its advantages is the automatic denoising that can be applied during the procedure.
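The synthetic test can be reproduced along these lines: the vertical attraction of a buried sphere, evaluated for a deep and a shallow source, plus additive white noise. The depths, radii, density contrasts and noise level below are hypothetical stand-ins, not the values used in the paper.

```python
import math
import random

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def sphere_gz(x, x0, depth, radius, drho):
    """Vertical gravity anomaly (mGal) of a buried sphere at offset x.
    g_z = G * M * z / (dx^2 + z^2)^(3/2), with M = (4/3) pi R^3 drho."""
    mass = (4.0 / 3.0) * math.pi * radius ** 3 * drho
    r2 = (x - x0) ** 2 + depth ** 2
    return 1e5 * G * mass * depth / r2 ** 1.5  # 1 m/s^2 = 1e5 mGal

random.seed(0)
xs = [i * 50.0 for i in range(101)]  # 0-5000 m profile
deep = [sphere_gz(x, 2500.0, 1500.0, 600.0, 300.0) for x in xs]   # regional
shallow = [sphere_gz(x, 1500.0, 200.0, 80.0, 500.0) for x in xs]  # local
noisy = [a + b + random.gauss(0.0, 0.01) for a, b in zip(deep, shallow)]
```

The separation methods are then judged by how well they recover `deep` and `shallow` from `noisy`.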
The method has also been applied to a real data set over a salt-dome structure located about 25 kilometers from the city of Ghom. The data are affected by two different geological sources: a deep fault structure, represented as low frequency, and a salt dome, represented as high frequency in the Bouguer gravity map of the region; the Bouguer anomaly map shows the mixed effect of both structures. The separation proved comparatively successful and was compared with the other separation methods. The comparison shows:
1- The regional effect due to the fault structure is clearly represented and can be used separately in an inversion process. Correspondingly, the local effect is separated into the residual anomaly map and can be used in inversion modeling.
2- The high-frequency noise is strongly attenuated automatically during the process.
https://jesphys.ut.ac.ir/article_24299_025cbd14f4f7885bb76821c7a5af5b20.pdf

Institute of Geophysics, University of Tehran
Journal of the Earth and Space Physics, ISSN 2538-371X, Vol. 37, No. 4, 2012
Determination of horizontal and vertical design spectra for rock sites based on acceleration time histories in Iran
Pages 37-50. DOI: 10.22059/jesphys.2012.24300
S. Hasan Mousavi-Bafrouei, Morteza Eskandari-Ghadi, Noorbakhsh Mirzaei

Based on the standards for the design of structures, any structure should be designed for seismic loads and for any load combination containing them. For spectral analysis of a structure, the site effect is taken into account through its influence on the design spectra. Because the lateral and vertical stiffness of the soil layers underneath a structure differ, the design spectra differ from soil to soil: the stiffer the soil, the higher the wave velocity. The shear- and compressive-wave velocities of the soil, treated as a continuum, are the criteria for categorizing the soil by stiffness. Since structures are affected more by lateral forces than by vertical ones, the shear-wave velocity is more important than the compressive-wave velocity. Moreover, soil near the structure affects it more than soil far from it. Thus, in design standards, the mean shear-wave velocity of the upper 30 m of soil is used to categorize the soil underneath the structure by stiffness.
In the Iranian code of practice for the seismic-resistant design of buildings (Standard No. 2800), sites are categorized into four types: rock (mean shear-wave velocity of the upper 30 m greater than 750 m/s), medium alluvium, soft alluvium, and very soft alluvium, each defined by a successively lower velocity range. To analyze structures using design response spectra, specific horizontal and vertical spectra are needed for each category. Because the number of strong-motion accelerograms has grown in recent years, in this research the horizontal and vertical design spectra for the first category (rock sites) are derived from Iranian data. To do so, all existing horizontal and vertical acceleration time histories from stations on rock sites were gathered. The data were filtered and baseline-corrected with the SeismoSignal software to remove noise frequency components and to correct the displacement and velocity traces, and all records were normalized to their peak ground acceleration (PGA). Of these, 60 vertical and 71 horizontal time histories were of acceptable quality. From these data, vertical and horizontal response spectra were computed for each time history at four damping ratios: 2%, 5%, 10% and 20%. Averaging the response spectra gives the unsmoothed design spectra; the smoothed design spectra are plotted both in a tripartite coordinate system and as spectral acceleration versus period. The same procedure was carried out for the mean-plus-one-standard-deviation vertical and horizontal response spectra. Finally, the smoothed design spectra from this research are compared with those of the Iranian 2800 regulation and with the Mohraz design spectra.
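The response-spectrum construction can be sketched as follows: for each period and damping ratio, a single-degree-of-freedom oscillator is driven by the ground acceleration and its peak response is recorded. This is a minimal Newmark average-acceleration sketch driven by a synthetic decaying pulse, not by the Iranian accelerograms.

```python
import math

def sdof_peak_accel(ag, dt, period, zeta):
    """Peak pseudo-spectral acceleration Sa = wn^2 * max|u| of a unit-mass
    SDOF oscillator under ground acceleration ag, via Newmark average accel."""
    wn = 2.0 * math.pi / period
    k, c = wn * wn, 2.0 * zeta * wn
    u = v = 0.0
    a = -ag[0] - c * v - k * u
    umax = 0.0
    for p in ag[1:]:
        # Newmark-beta with gamma = 1/2, beta = 1/4 (unconditionally stable)
        keff = k + 2.0 * c / dt + 4.0 / dt ** 2
        peff = -p + (4.0 / dt ** 2) * u + (4.0 / dt) * v + a \
               + c * (2.0 / dt * u + v)
        un = peff / keff
        vn = 2.0 * (un - u) / dt - v
        an = 4.0 * (un - u) / dt ** 2 - 4.0 * v / dt - a
        u, v, a = un, vn, an
        umax = max(umax, abs(u))
    return wn * wn * umax

# Synthetic 2 Hz decaying pulse standing in for a normalized accelerogram.
dt = 0.01
ag = [math.sin(2.0 * math.pi * 2.0 * i * dt) * math.exp(-0.3 * i * dt)
      for i in range(1000)]
# One spectral ordinate (T = 0.5 s) at the four damping ratios of the paper.
spectra = {z: sdof_peak_accel(ag, dt, 0.5, z) for z in (0.02, 0.05, 0.10, 0.20)}
```

Repeating this over a grid of periods, then averaging across records, yields the unsmoothed design spectra described above.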
It is shown that the results agree well with the Mohraz design spectra, except that at long periods the spectral acceleration obtained in this study is smaller. Compared with the 2800 regulation, the spectral acceleration in this study is higher at short periods and much lower at long periods. This means that in the category of short-period structures, stronger structures may be needed.
https://jesphys.ut.ac.ir/article_24300_efd54e3b819c434169700017205d4ced.pdf

Institute of Geophysics, University of Tehran
Journal of the Earth and Space Physics, ISSN 2538-371X, Vol. 37, No. 4, 2012
Estimation of the southern Pars gas field's permeability using General Regression Neural Networks (GRNN)
Pages 51-65. DOI: 10.22059/jesphys.2012.24301
Ali Moradzadeh, Faramarz Doulati Ardejani, Reza Rooki, Mashallah Rahimi

Permeability is a principal property of oil reservoirs: it expresses the ability of the rocks to conduct fluids such as oil, water and gas through the pore spaces of the reservoir. Determining the permeability is a crucial task in reserve estimation and in the production and development of oil reservoirs. An accurate estimate of such an important reservoir-rock parameter should not be made from log data alone; a judicious combination of core analysis and log data is required to link the most relevant parameters and obtain more general relationships for estimating the permeability of reservoir rocks.
The conventional methods of permeability determination are based on limited core analyses and well-test data sets. These methods are, however, expensive and time-consuming, and one or more wells in an oil field may have no core samples at all. In practice, exploration data, including log, core and seismic data, often reflect complex nonlinear underlying relationships, and they may carry considerable uncertainty and noise caused by measurement errors. An additional source of uncertainty arises from mapping a sparse data set onto the entire reservoir domain; there is therefore significant uncertainty in the values of a petrophysical parameter such as permeability estimated at points between wells from data obtained at the well locations. A method is thus needed that can estimate the petrophysical properties of the reservoir appropriately from the available well logs. The methods currently used to build permeability models with high generalization performance include empirical correlations such as the Kozeny-Carman theory, which relates permeability to porosity and the specific surface area of a porous rock, multi-linear regression, multilayer perceptrons, and fuzzy neural networks. The main limitation of the empirical models is that they are developed for a specific formation and perform poorly when estimating permeability in other oil fields. Although multi-linear regression models perform better on unseen data, they often overestimate low values and underestimate high values of the petrophysical parameters of hydrocarbon reservoirs. Alternatively, artificial neural networks (ANNs) have been increasingly applied as computational tools for estimating the required petrophysical properties, because they can identify the complex nonlinear relationships between permeability, porosity, fluid saturations and well-log data.
Given this inherent ability of ANNs, this study evaluates the general regression neural network (GRNN) and uses it to predict the horizontal and vertical permeability (Kh and Kv) of the gas reservoirs within the Kangan and Dalan formations of the South Pars gas field. To this end, well logs and core data from three wells are used, and the required computer codes were written in the MATLAB environment. The digitized well-log values, including sonic (DT), gamma ray (GR), compensated neutron porosity (NPHI), density (RHOB), photoelectric factor (PEF), micro-spherically focused resistivity (MSFL), and shallow and deep laterolog resistivities (LLS and LLD), are taken as input, while the horizontal or vertical permeability (Kh or Kv) is the output of the networks. To find the input variables (log data) most relevant to estimating the permeability, a series of statistical analyses was carried out with the SPSS software; the resulting correlation matrix shows a strong positive correlation of permeability with the sonic and neutron logs and a strong negative correlation with the density log. The other logs show low to moderate correlations with permeability.
Of the 250 log and core-permeability data sets from the Kangan and Dalan gas reservoirs, 70 percent were randomly assigned to training and 30 percent to testing. Next, to increase the network's resolution in discriminating high and low values, all input-log data were normalized to the interval -1 to 1, while the network output was the logarithm of the core-derived permeability (LKh and LKv).
Since the smoothing factor (SF) is the most important feature of a GRNN, the designed network was trained with values of this factor in the interval 0.1 to 1; 0.27 proved optimal with respect to the RMS error and correlation coefficient (R) of the test data set. The network was then trained with combinations of three, four, six and nine input variables, and the nine-variable pattern (X, Y, Z, DT, RHOB, NPHI, GR, PEF, MSFL) proved the most relevant, giving the lowest RMSE and the highest correlation coefficient (R) during training and testing. The designed network therefore has one input layer with 9 neurons, one hidden layer of radial-basis activation functions with 174 neurons, and an output layer with a single neuron with a linear activation function.
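A GRNN of this kind reduces to Gaussian-kernel regression in the sense of Specht: each training sample contributes a weight controlled by the smoothing factor, which is exactly the role the SF plays above. The sketch below is a minimal single-output version on toy data, not the 9-input, 174-pattern network of the paper.

```python
import math

def grnn_predict(x, train_x, train_y, sf):
    """General regression neural network prediction: a Gaussian-kernel
    weighted average of training targets; sf is the smoothing factor."""
    weights = []
    for xi in train_x:
        d2 = sum((a - b) ** 2 for a, b in zip(x, xi))
        weights.append(math.exp(-d2 / (2.0 * sf * sf)))
    s = sum(weights)
    if s == 0.0:
        return sum(train_y) / len(train_y)  # query far from every pattern
    return sum(w * y for w, y in zip(weights, train_y)) / s

# Toy patterns: two normalized "log responses" -> log-permeability targets.
tx = [(-1.0, -1.0), (0.0, 0.0), (1.0, 1.0)]
ty = [0.1, 1.0, 2.3]
pred = grnn_predict((0.05, 0.05), tx, ty, sf=0.27)  # SF value tuned in the paper
```

A small SF makes the network interpolate the training patterns closely; a large SF smooths toward the global mean, which is why the SF sweep above matters.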
The results of the designed networks are then compared with those of the multivariable linear regression (MVLR) method. The GRNN results show average correlation coefficients between core and predicted permeability of 0.95 and 0.902, against 0.85 and 0.812 for the MVLR approach, for the training and test data sets respectively. On the test data sets, the average error of the GRNN technique (0.65) is considerably lower than that of the MVLR method (0.888). Hence, the GRNN approach is faster and more precise than the MVLR method for predicting the permeability of complex hydrocarbon reservoirs.
https://jesphys.ut.ac.ir/article_24301_4199b83fc399a715e67a7ea9f5931234.pdf

Institute of Geophysics, University of Tehran
Journal of the Earth and Space Physics, ISSN 2538-371X, Vol. 37, No. 4, 2012
Using the determinant data as a replacement for the static shift correction in magnetotelluric surveys
Pages 67-77. DOI: 10.22059/jesphys.2012.24302
Behrooz Oskooi, Amir Hossein Javaheri K., Ahmad Ali Behroozmand

The magnetotelluric (MT) method is an electromagnetic method that provides information about subsurface conductivity structures using the Earth's natural electromagnetic fields. Static shift is one of the distortions caused by shallow conductors, and it must be corrected as one of the MT data-processing steps. In the absence of sufficient information about the near-surface distortions, which is usually supplied by extra work such as TEM and VES measurements, one has to invert the determinant data to avoid misinterpretation. In this paper we use the determinant data as an effective replacement for the static shift correction, and we present a case study showing their application.
There are various techniques for static-shift removal. One is the theoretical calculation of the static shift caused by near-surface buried inhomogeneities or by surface topographic effects. Alternatively, auxiliary data from the known geology of the region, or independent measurements such as TEM and VES soundings, can be used (Sternberg et al., 1988). In these methods, after the accurate apparent-resistivity values at the site of interest are calculated, the curves are shifted to the correct level.
Utilizing determinant data for inversion leads to the best results for the interpretation in the case that the above methods are not accessible.
As a case study, MT data from a site in the Inche-Boroon area in the north of Golestan Province, Iran, are considered here. After processing, the MT data were obtained as apparent resistivity versus frequency (or period), shown for $\rho_{xy}$ (green curve) and $\rho_{yx}$ (blue curve) in Fig. 3. The determinant apparent resistivity data are shown in red. As can be seen, the determinant data appear as a mean of $\rho_{xy}$ and $\rho_{yx}$.
Given the correlation of the determinant data with geological structures, apparent resistivity data measured as $\rho_{xy}$ and $\rho_{yx}$ must be shifted to the correct level, which coincides with the determinant data.
To confirm the effectiveness of the determinant data, MT data were collected at a station in the vicinity of an exploration well in the area. These data (as determinant apparent resistivity) and the information obtained from the well log are shown in Fig. 4. The subsurface information from the well log correlates well with the 1D model derived from inversion of the determinant data: a conductive layer containing a saline water table at a depth of 670 to 840 meters is clearly seen in the model obtained from the determinant data as well. This correlation indicates the correctness of the subsurface information obtained from modeling the determinant data.
Data processing is one of the most important steps in MT surveys, and static shift correction plays an important role in it. If the common techniques for removing the static shift are not applicable (theoretical calculation of the static shift related to near-surface buried inhomogeneities or surface topographic effects, or use of auxiliary geological data or independent measurements such as TEM and VES soundings), it is necessary to invert the determinant data to avoid misinterpretation.
As shown above, determinant data can serve as a proper replacement for static shift correction in magnetotelluric studies; the case study presented here clearly demonstrates this, as well as the correlation between the determinant data and subsurface structures.
Determinant data are always applicable: being rotation invariant, the same data are used for modeling regardless of the strike assumed in 2D modeling.
Determinant data often fit 2D models better than TE and TM data do, and 2D inversion (like 1D) can easily be carried out while ignoring the details of static shift.
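The rotation invariance claimed above is easy to verify numerically. The sketch below (with an illustrative impedance tensor, not survey data) checks that the determinant apparent resistivity, here taken as $|\det Z|/(\omega\mu_0)$, is unchanged by any rotation of the measurement axes:

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (H/m)

def rotate(Z, theta):
    """Impedance tensor expressed in axes rotated by theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, s], [-s, c]])
    return R @ Z @ R.T

def rho_det(Z, omega):
    """Determinant apparent resistivity |det Z| / (omega * mu0)."""
    return np.abs(np.linalg.det(Z)) / (omega * MU0)

# illustrative complex impedance tensor (ohm) at a 0.1 Hz sounding
Z = np.array([[0.1 + 0.05j, 2.0 + 1.5j],
              [-1.8 - 1.2j, -0.2 + 0.1j]])
omega = 2 * np.pi * 0.1

base = rho_det(Z, omega)
for theta in np.linspace(0.0, np.pi, 7):
    # det(R Z R^T) = det(Z), so the result ignores the assumed strike
    assert np.isclose(rho_det(rotate(Z, theta), omega), base)
```

Because the determinant of a 2x2 tensor is preserved under similarity rotation, the same modeling input is obtained for every assumed strike direction.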
https://jesphys.ut.ac.ir/article_24302_7fa3848b11a929a75df55e53e18f86cc.pdfInstitute of Geophysics, University of TehranJournal of the Earth and Space Physics2538-371X37420120121Absorption effect removal of the earth using nonstationary linear filtersAbsorption effect removal of the earth using nonstationary linear filters79922430310.22059/jesphys.2012.24303FAImanGanjiHamid RezaSiahkoohiJournal Article19700101Seismic waves travelling through inelastic media are attenuated by the conversion of elastic energy into heat. As it is attenuated, the travelling wave changes: the amplitude is reduced, the waveform is modified by absorption of its high-frequency content, and the phase is delayed. Attenuation is usually quantified by the quality factor Q: the ratio of the energy stored to the energy lost in each cycle due to inelasticity. The energy attenuation and phase distortion caused by the absorbing medium can be removed by inverse Q filtering. In this paper we introduce a method in the time-frequency domain to compensate for the attenuation, based on the nonstationary linear filters proposed by Margrave (1998).
Constant-Q attenuation model: The theory of the constant-Q model (Kjartansson, 1979) predicts an amplitude loss given by
$A(x) = A_0 \exp\left(-\dfrac{\omega x}{2 c Q}\right)$ (1)
where Q is the attenuation parameter, $\omega$ is the angular frequency, $c$ is the velocity, $A_0$ is the initial amplitude, and $A(x)$ is the amplitude at the travelled distance x. A dispersion relation for the velocity with respect to frequency is an essential element of the constant-Q theory. For the examples in this paper, the following dispersion relation (Aki and Richards, 2001) has been used:
$c(\omega) = c(\omega_0)\left[1 + \dfrac{1}{\pi Q}\ln\dfrac{\omega}{\omega_0}\right]$ (2)
which gives the phase velocity at any frequency $\omega$ in terms of the velocity at a reference frequency $\omega_0$. A linear filter is entirely characterized by its impulse response; since the constant-Q theory treats the earth as a linear filter, the impulse response of the attenuating earth is a fundamental result. Kjartansson (1979) shows that the Fourier transform of the attenuating-medium impulse response is
$\hat{\alpha}(x,\omega) = \exp\left(-\dfrac{|\omega|\, x}{2\, c(\omega)\, Q}\right)\exp\left(-\dfrac{i\omega x}{c(\omega)}\right)$ (3)
A nonstationary convolutional model for an attenuated seismic trace can be established by combining equations (2) and (3), nonstationarily convolving the attenuated impulse response with a reflectivity function and, finally, convolving the result with an arbitrary wavelet (Margrave and Lamoureux, 2002):
$\hat{s}(\omega) = \hat{w}(\omega)\displaystyle\int_{-\infty}^{\infty} \alpha(\omega,\tau)\, r(\tau)\, e^{-i\omega\tau}\, d\tau$ (4)
where the 'hat' symbol indicates the Fourier transform, $r(\tau)$ is the reflectivity function, $\hat{w}(\omega)$ is the wavelet and $\alpha(\omega,\tau)$ is the time-frequency exponential attenuation function,
$\alpha(\omega,\tau) = \exp\left(-\dfrac{|\omega|\tau}{2Q} + i\,H\!\left(\dfrac{|\omega|\tau}{2Q}\right)\right)$ (5)
in which the real and imaginary components of the exponent are connected through the Hilbert transform H, a result consistent with the minimum-phase character of the attenuated pulse.
Inverse-Q filtering: Nonstationary convolution can be expressed in the mixed time-frequency domain as
$\hat{y}(\omega) = \displaystyle\int_{-\infty}^{\infty} A(\omega,\tau)\, x(\tau)\, e^{-i\omega\tau}\, d\tau$ (6)
in which the transfer function $A(\omega,\tau)$ is the Fourier transform, over its first argument, of the nonstationary impulse response $a(t,\tau)$:
$A(\omega,\tau) = \displaystyle\int_{-\infty}^{\infty} a(t,\tau)\, e^{-i\omega t}\, dt$ (7)
Given this transfer-function characterization of nonstationary convolution in the time-frequency domain, if $\hat{x}(\omega)$ is the input in the frequency domain, then
$y(t) = \dfrac{1}{2\pi}\displaystyle\int_{-\infty}^{\infty} A(\omega,t)\, \hat{x}(\omega)\, e^{i\omega t}\, d\omega$ (8)
where $y(t)$ is the output in the time domain.
In equations (6) and (8), the transfer function $A(\omega,\tau)$ in the time-frequency domain has the character of a nonstationary filter.
The filter operators defined by these two equations are called pseudodifferential operators (Saint-Raymond, 1991); the transfer function plays the role of the operator's symbol. Such operators are particularly efficient for nonstationary filtering and, in essence, for inverse Q filtering. We tested the performance of the method on both real and synthetic seismic data.
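The nonstationary combination of equation (8) can be prototyped directly: every output time sample gets its own frequency response. The sketch below uses only the amplitude term of equation (5) (the minimum-phase term is omitted for brevity) and a simple stabilized gain for the inverse filter; names such as `stab` are illustrative, not from the paper.

```python
import numpy as np

def nonstationary_apply(trace, dt, Q, inverse=False, stab=0.05):
    """Discrete analog of y(t) = (1/2pi) int A(w,t) X(w) e^{iwt} dw.

    A(w, t) = exp(-|w| t / (2Q)) is the constant-Q amplitude factor;
    for the inverse filter a stabilized gain A/(A^2 + stab^2) limits
    the amplification of noise at strongly attenuated frequencies.
    """
    n = len(trace)
    w = 2 * np.pi * np.fft.fftfreq(n, dt)        # angular frequencies
    t = np.arange(n) * dt                        # output times
    A = np.exp(-np.abs(w)[None, :] * t[:, None] / (2.0 * Q))
    if inverse:
        A = A / (A ** 2 + stab ** 2)             # stabilized inverse gain
    X = np.fft.fft(trace)
    E = np.exp(1j * w[None, :] * t[:, None])     # e^{i w t} kernel
    # O(n^2): one bespoke inverse DFT row per output sample
    return np.real((A * X[None, :] * E).sum(axis=1) / n)

# two unit spikes: the later event loses more high-frequency energy
x = np.zeros(128)
x[10] = x[90] = 1.0
y = nonstationary_apply(x, dt=0.004, Q=30.0)
```

With Q effectively infinite the operator reduces to the identity, and for finite Q the later spike is attenuated more strongly than the earlier one, as the constant-Q model requires.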
https://jesphys.ut.ac.ir/article_24303_a33552606f877124f907a30a93c380f5.pdfInstitute of Geophysics, University of TehranJournal of the Earth and Space Physics2538-371X37420120121Application of Magnetotelluric method in exploration of geothermal reservoirs with an example from IcelandApplication of Magnetotelluric method in exploration of geothermal reservoirs with an example from Iceland931062430410.22059/jesphys.2012.24304FABehroozOskooiS. MasoodAnsariJournal Article19700101Magnetotellurics (MT) is a passive geophysical technique for exploring geothermal reservoirs. It utilizes a broad spectrum of natural geomagnetic fields for electromagnetic induction in the Earth. The method is also preferred over DC-resistivity methods, particularly where exploration of deep subsurface aquifers is concerned. The role of the MT method in the exploration of geothermal reservoirs is highlighted in this paper.
As a practical example, it focuses on the results of a recent MT study performed in a geothermal region in Iceland. Because it is crossed by the Mid-Atlantic Ridge and its associated rift and fault zones, Iceland is very active both tectonically and magmatically. MT data were collected in order to determine the deep structure between two neighboring geothermal fields, Hengill and Brennisteinsfjoll. One- and two-dimensional inversions of these data were performed and the results are presented. In good agreement with geological information, the two-dimensional inversion model reveals a highly conductive smectite-zeolite zone underlain by a less conductive epidote-chlorite zone. A highly conductive deep zone is also seen in the middle of the profile, interpreted as a cooling partial melt representing the main heat source of the geothermal system.
Introduction: Geothermal resources are renewable sources of heat and of economic interest. Geophysical exploration of geothermal fields using electromagnetic (EM) methods has received increasing attention over the past few years. An electrically conductive water reservoir surrounded by a relatively resistive host is efficiently imaged using EM methods. In particular, because of its capability for large-scale imaging of lateral conductivity variations and its greater depth of investigation, magnetotellurics (MT) is preferred over other electromagnetic methods. The main focus of this paper is the one- and two-dimensional interpretation of MT data over a geothermal field in southwest Iceland.
Geothermal systems: The geothermal gradient and the thermal conductivity of rocks are the chief elements governing heat flow within the Earth's crust. Both conduction and convection occur within a geothermal field: because of density differences caused by varying temperature, water moves within the reservoir by convection, while conduction provides the link between the magma body and the permeable reservoir rocks (Barbier, 2002). Of the four types of geothermal systems introduced in this paper, we focus on the hydrothermal system and discuss its water-dominated and vapor-dominated types.
Geological settings: The Icelandic crust is mostly of volcanic origin, with both intrusive and extrusive rocks (mainly oceanic-type flood basalts, tuffs, hyaloclastites and some acidic rocks) that were erupted under rift conditions (Sæmundsson, 1979). The main geological features and the distribution of geothermal systems are shown in Fig. 1: geothermal fields occur in regions of young volcanism and along active plate boundaries. Because the abundant geothermal systems in Iceland are the result of volcanic activity, two basic models of alteration associated with volcanic geothermal systems, acid-sulfate and adularia-sericite, are also presented (Fig. 2).
MT Data acquisition: In September 2000, an MT survey was carried out at 21 sites along a 12 km line in southwest Iceland (Fig. 1). The MT profile is almost perpendicular to the axis of active tectonics and volcanism and crosses the high-temperature systems of the Hengill volcanic complex and the Brennisteinsfjoll geothermal area.
Inversion and Interpretation: A 1D inversion of the determinant impedance data and a 2D inversion of the joint TE- and TM-mode data were performed. As shown by the 1D inversion results in Fig. 5a, a top resistive layer with resistivity greater than 100 ohm-m changes to a conductive structure of about 10 ohm-m. A transition into a more resistive zone is seen at about 1.2 km depth; this resistive unit, about 2 km thick, changes into a conductive structure at a depth of 4 km. As for the 2D inversion results, a resistive layer (>400 ohm-m) is recognized at the top (Fig. 6b). The second layer is very conductive (<10 ohm-m) and shows a variable thickness along the 2D section, passing from a few hundred meters at Site 20 to about 1800 m at Site 03. Below this conductor, resistivity increases with depth along the whole profile. The southern part of the profile is characterized by a high-resistivity (about 1000 ohm-m) basement, whereas in the middle of the profile the top conductive layer (<10 ohm-m) is followed by a resistive layer (30-100 ohm-m), which in turn overlies a very conductive structure (<5 ohm-m). The very resistive layer at the top can be interpreted as the porous basalt layer near the surface. At about 400 m depth, the conductive layer of variable thickness along the profile is most naturally interpreted as the smectite-zeolite zone. The less conductive zone below this conductor is interpreted as the chlorite-epidote mineralization zone. Considering the characteristics of the neovolcanic zone in Iceland, the conductive body (<5 ohm-m) in the middle of the profile can be interpreted as either partial melt or a porous region with hot ionized fluids located on top of a magmatic heat source.
Since this conductive structure is located where the Hengill fissure swarm intersects the profile, it is most naturally interpreted as magmatic intrusions acting as a heat source for the geothermal system.
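Layered responses of the kind interpreted above can be reproduced with the standard 1D MT impedance recursion. The sketch below is a generic textbook forward model, not the inversion code used in the study; the layer values are illustrative, loosely echoing the resistive cover / conductor / resistive basement structure described in the text.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (H/m)

def mt1d_apparent_resistivity(rho, h, freq):
    """Apparent resistivity of a 1D layered earth at one frequency.

    rho  -- layer resistivities (ohm-m); the last entry is the half-space
    h    -- thicknesses (m) of the layers above the half-space
    freq -- sounding frequency (Hz)
    """
    omega = 2.0 * np.pi * freq
    k = np.sqrt(-1j * omega * MU0 / rho[-1])     # half-space wavenumber
    Z = -1j * omega * MU0 / k                    # half-space impedance
    for j in range(len(h) - 1, -1, -1):          # recurse upward
        kj = np.sqrt(-1j * omega * MU0 / rho[j])
        Z0 = -1j * omega * MU0 / kj              # intrinsic impedance
        th = np.tanh(kj * h[j])
        Z = Z0 * (Z + Z0 * th) / (Z0 + Z * th)
    return np.abs(Z) ** 2 / (omega * MU0)

# resistive cover over a conductor over a resistive basement
rho_a = mt1d_apparent_resistivity([400.0, 10.0, 1000.0],
                                  [300.0, 1500.0], 1e4)
```

For a homogeneous half-space the recursion returns the true resistivity at every frequency, and at high frequencies the response of the layered model converges to the resistivity of the top layer, which is a convenient sanity check.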
https://jesphys.ut.ac.ir/article_24304_a6f82fc36d2957a1c9602648d9381db8.pdfInstitute of Geophysics, University of TehranJournal of the Earth and Space Physics2538-371X37420120121A methodology for mean gravity value computation based on harmonic splines and their application to boundary value problemA methodology for mean gravity value computation based on harmonic splines and their application to boundary value problem1071242430510.22059/jesphys.2012.24305FAAbdolrezaSafariAbdolrahmanMostafaeiJournal Article19700101Height is amongst the most delicate subjects of geodesy. Thanks to Global Navigation Satellite Systems (GNSS) such as GPS and GLONASS, geometrical 3D point positioning has long been common practice. The height derived in this way has a geometrical meaning, whereas civil projects demand a height with a physical meaning. The orthometric height, $H$, is one such physical height. The orthometric height of point i, $H_i$, can be calculated by
$H_i = \dfrac{C_i}{\bar{g}_i}$
where $\bar{g}_i$ is the mean value of gravity along the plumb line between the geoid and the surface point i, and $C_i$ is the geopotential number of point i, which is calculated using
$C_i = W_0 - W_i$
One of the problems in orthometric height calculation is the computation of $\bar{g}_i$. The value of gravity at the point with mean height is calculated by Helmert's formula
$\bar{g}_i = g_i + 0.0424\, H_i \quad (g_i\ \text{in gal},\ H_i\ \text{in km})$
where $g_i$ is the gravity observation value at point i.
The orthometric height computed with this mean gravity value is called the Helmert orthometric height; see Sanso and Sona (1993) for the idea of Earth gravity determination.
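The two formulas above imply a simple fixed-point computation, since $H_i$ appears on both sides. A minimal sketch, with illustrative units (geopotential number in gal*m, gravity in gal) and a sample value chosen only to show the iteration:

```python
def helmert_height(C, g_surface, tol=1e-6):
    """Helmert orthometric height from a geopotential number.

    C         -- geopotential number (gal*m)
    g_surface -- observed surface gravity (gal)
    Iterates H = C / g_mean, where g_mean = g + 0.0424 * H[km]
    is Helmert's mean-gravity term. Returns H in meters.
    """
    H = C / g_surface                 # first guess: ignore the gradient term
    while True:
        g_mean = g_surface + 0.0424 * (H / 1000.0)   # gal, H converted to km
        H_new = C / g_mean
        if abs(H_new - H) < tol:
            return H_new
        H = H_new

# illustrative values: C ~ 1.9614e6 gal*m, g ~ 980.6 gal -> H near 2000 m
H = helmert_height(1.9614e6, 980.6)
```

Because the correction term is tiny compared with surface gravity, the iteration converges in a couple of steps, and the Helmert height is slightly smaller than the naive ratio $C/g$.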
In this paper a methodology to calculate the value of gravity at the point with mean height above the geoid is proposed. The gravity derived by this method is composed of three parts: (1) the global and regional gravity, computed by an ellipsoidal harmonic expansion to degree and order 360 plus the centrifugal acceleration; (2) the gravitational attraction of the terrain masses within a radius of 55 km around the computational point; and (3) the incremental gravity intensity at the computational point. The first and second parts are computed from global geopotential models and digital terrain models.
Computation of the third part is possible by solving a boundary value problem. In this paper, to compute the incremental gravity intensity at the point with mean height, a fixed-free two-boundary nonlinear boundary value problem is solved. This boundary value problem is constructed for observables of the following types: (i) modulus of gravity, (ii) gravity potential, (iii) satellite altimetry data, (iv) astronomical latitude and (v) astronomical longitude.
The first step towards the solution of the proposed fixed geodetic boundary value problem is its linearization. After linearization we obtain a linear boundary value problem whose solution gives the incremental gravity potential on the surface of the reference ellipsoid. Outside the reference ellipsoid, the solution can be obtained by solving the following Dirichlet boundary value problem:
$\Delta\, \delta w = 0$ outside the reference ellipsoid, with $\delta w$ prescribed on the ellipsoid and $\delta w \to 0$ at infinity.
In this paper, the harmonic splines proposed by Freeden (1987) are used to solve the Dirichlet problem. By applying the gradient operator to the incremental gravity potential obtained from the solution of the Dirichlet problem, the incremental gravity at every point outside the reference ellipsoid can be calculated (Jekeli, 2005).
The second section of this paper is an introduction to harmonic spline analysis. The construction of the reproducing kernel Hilbert space and the optimal interpolation solution are presented in the third section. In the final section, the application of harmonic splines to solving the Dirichlet boundary value problem is discussed, and the mean value of gravity along the first-order leveling network of Iran is calculated by the proposed methodology.
https://jesphys.ut.ac.ir/article_24305_d0ec4f535e5f26fbc67ccebdfac8b1fd.pdfInstitute of Geophysics, University of TehranJournal of the Earth and Space Physics2538-371X37420120121Deformation analysis of the Earth crust based on manifold intrinsic geometry Case Study: Deformation analysis of the geodynamic network of Iran within 1999 - 2005Deformation analysis of the Earth crust based on manifold intrinsic geometry Case Study: Deformation analysis of the geodynamic network of Iran within 1999 - 20051251462430610.22059/jesphys.2012.24306FAAlirezaArdalan A.0000-0001-5549-3189BehzadVoosoghiMehdiRaoofian-NaeeniJournal Article19700101Unlike classical deformation analysis of the Earth's crust, which derives the planar and vertical strains separately, in this study we offer a method for 3-D deformation analysis based on the intrinsic geometry of manifolds on the topographic surface of the Earth. Our method is based on the 2-D metric tensor of horizontal deformation and the 2-D curvature tensor of vertical deformation of the topographic surface, which together solve the problems treated separately by classical 2-D and 1-D deformation analysis, while avoiding the interpretation problems of extrinsic deformation analysis in 3-D space, which results in 3-D strain tensors. From the derived metric tensor, two invariant deformation measures, i.e. dilatation (change in scale) and maximum shear, and from the curvature tensor two other invariant deformation measures, i.e. mean curvature and Gaussian curvature, can be obtained.
Algorithmically, our method comprises the following main computational steps: (i) computation of 3-D displacement vectors from repeated geodetic observations; (ii) computation of the covariant and contravariant components of the displacement vector in the Gaussian moving frame; (iii) discretization of the domain (Earth's crust) into finite surface elements; (iv) computation of the strain and curvature tensors within the finite surface elements. As a case study, crustal deformation within the coverage of the geodynamic network of Iran is computed using repeated GNSS observations of the network. The results show that the crust in most parts of the area is under contraction, with the maximum value in the south-west of the region. The maximum shear strain has also occurred in the southern part of the geodynamic network. The vertical strain reveals uplift of the crust, with maximum values in the south and south-east of the region. The computations, and their evaluation by comparison with the seismic map of the region, show the success and usefulness of the presented method for deformation study of the crust.
Full text: https://jesphys.ut.ac.ir/article_24306_5ac5150cac57ea539113eaabd6cd2bab.pdf

Journal of the Earth and Space Physics, Institute of Geophysics, University of Tehran, ISSN 2538-371X, Vol. 37, No. 4 (2012), pp. 147-163. DOI: 10.22059/jesphys.2012.24307
Accuracy evaluation and adjustment of the first order leveling network of Iran
Abdolreza Safari, Yahya Jamour, Abdolrahman Mostafaei

In many countries, leveling networks are established for height determination, which is one of the most important topics in geodesy. In these networks, the sum of the leveled height differences between A and B will not be equal to the difference in the orthometric heights H_A and H_B. The reason is that the leveling increment δn, as we henceforth denote it, differs from the corresponding increment δH_B of H_B, owing to the non-parallelism of the level surfaces. Denoting the corresponding increment of the potential W by δW, we have
−δW = g δn = g′ δH_B,  (1)
where g is the gravity at the leveling station and g′ is the gravity on the plumb line of B at δH_B. Hence,
δH_B = (g/g′) δn ≠ δn.  (2)
There is, thus, no direct geometrical relation between the result of leveling and the orthometric height, since Equation (2) expresses a physical relation. If gravity g is also measured, then
δW = −g δn  (3)
is determined, so that we obtain
W_B − W_A = −Σ g δn.  (4)
Thus, leveling combined with gravity measurements determines potential differences, which are physical quantities.
It is somewhat more rigorous theoretically to replace the sum in Equation (4) by an integral, obtaining
W_B − W_A = −∫ g dn  (from A to B).  (5)
Note that this integral is independent of the path of integration. In practical cases, it is better to use geopotential numbers, calculated with Equation (6), instead of potential values:
C = W_0 − W,  (6)
where W_0 is the potential of the geoid. Users usually prefer to work with the geometrical concept of height. Therefore, the orthometric height of the point A is defined by
H_A = C_A / ḡ_A,  (7)
where ḡ_A is the mean value of gravity along the plumb line between the geoid and the surface point A. In terms of potential differences, on the other hand, the height difference between two points is
H_B − H_A = (C_B − C_A) / γ_0,  (8)
where γ_0 is the normal gravity for an arbitrary standard latitude.
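Equations (4), (6), and (7) translate directly into a few lines of code. The sketch below accumulates geopotential-number differences from leveled increments and observed gravity, then converts to an orthometric height; all numeric values in the usage lines are hypothetical.

```python
# Geopotential-number difference from leveling plus gravity (Eqs. 4 and 6),
# and orthometric height (Eq. 7). Units: gravity in m/s^2, increments in m;
# C then comes out in m^2/s^2 (often quoted in g.p.u. = 10 m^2/s^2).

def geopotential_number(increments, gravities):
    """C_B - C_A = sum of g * dn along the leveling line."""
    return sum(g * dn for g, dn in zip(gravities, increments))

def orthometric_height(C, g_mean):
    """H = C / g_bar, with g_bar the mean gravity along the plumb line."""
    return C / g_mean

# Hypothetical two-section line: rises of 12.0 m and 8.5 m, g near 9.80 m/s^2.
dC = geopotential_number([12.0, 8.5], [9.8001, 9.7999])
H = orthometric_height(dC, 9.7995)
```

The point of the exercise: without the gravity values, the bare sum of increments (20.5 m here) would be mistaken for the orthometric height difference, which Equation (2) shows is not generally true.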
Determination of orthometric height differences between points thus reduces to determination of the potential differences between them; it is therefore necessary to measure both the height differences and gravity along the leveling lines.
Observations in leveling networks are affected by random and systematic errors. Errors originating from the instruments, the ambient conditions, and the observer are of such a character that removing them from the observations is very difficult; assessment of leveling accuracy is likewise not an easy task.
Unmodelled systematic effects in leveling may be revealed through the autocorrelation function of the discrepancies between the forward and backward runnings of leveling sections (Vanicek and Craymer, 1983). Tests conducted with simulated data indicate that the autocorrelation function can be used as a diagnostic tool to detect systematic effects.
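The diagnostic itself is straightforward to compute. A minimal sketch follows; the discrepancy series in the usage line is hypothetical:

```python
def autocorrelation(d, lag):
    """Normalized autocorrelation of a sequence of forward-minus-backward
    section discrepancies. Values near zero at all lags suggest purely
    random errors; persistent positive values hint at a systematic effect."""
    n = len(d)
    mean = sum(d) / n
    x = [v - mean for v in d]
    var = sum(v * v for v in x)
    return sum(x[i] * x[i + lag] for i in range(n - lag)) / var

# A steadily drifting discrepancy series shows strong lag-1 correlation:
drifting = [0.1 * k for k in range(10)]   # mm, hypothetical
r1 = autocorrelation(drifting, 1)
```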
The aim of this study is accuracy estimation of the first-order leveling network of Iran by Lallemand's and Vignal's formulas, as well as testing for significant differences between lines caused by different sources of random and systematic error. The computation of section and line discrepancies is explained, and the random and systematic errors computed by Lallemand's and Vignal's formulas are presented. Next, the theory of analysis of variance is outlined and practical computations are demonstrated. After that, various adjustment models for the leveling network are discussed. Finally, a weight matrix estimated using a covariance function is applied to adjust the network. The results obtained in this research show that there are considerable systematic errors in the leveling network of Iran.
Full text: https://jesphys.ut.ac.ir/article_24307_9f323f11e6618888c02343f5f8f9f146.pdf

Journal of the Earth and Space Physics, Institute of Geophysics, University of Tehran, ISSN 2538-371X, Vol. 37, No. 4 (2012), pp. 165-178. DOI: 10.22059/jesphys.2012.24308
AVO analysis in Ghar Sand Stone Reservoir in Aboozar Oil field located in North-West of Persian Gulf
Hadi Haji Jomhouri, Mohammad Ali Riahi, Gholamhossein Norouzi, Amir Shamsa

AVO theory was introduced around 20 years ago. In recent years this technique has become a major tool in hydrocarbon exploration. With a suitable understanding of the subsurface layers and of how to use this technology, quantitative characteristics of a reservoir can be recognized.
AVO analysis is a seismic technique that uses pre-stack data to establish the presence of hydrocarbons in a reservoir. The three basic physical parameters used in seismic interpretation are density, P-wave velocity, and S-wave velocity; a correct understanding of these parameters is required for applying the AVO technique.
Introduction: The Zoeppritz equations give the reflection and transmission coefficients as functions of the incidence angle, but they do not show transparently how the amplitudes vary with the rock physical parameters. Approximations to the Zoeppritz equations are simpler and easier to interpret than the exact equations; well-known examples are the approximations of Aki and Richards (1980), Shuey, and Fatti et al. With the help of these approximations, several attributes can be extracted.
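As an illustration, the two-term Shuey form R(θ) ≈ A + B·sin²θ can be computed directly from the elastic contrasts across an interface; the layer values used in the test are hypothetical, and this is a textbook sketch rather than the paper's processing flow:

```python
import math

def shuey_two_term(vp1, vs1, rho1, vp2, vs2, rho2):
    """Intercept A and gradient B of the two-term Shuey approximation
    R(theta) ~ A + B * sin(theta)**2 for a plane interface between
    two half-spaces with P velocity vp, S velocity vs, density rho."""
    vp, vs, rho = (vp1 + vp2) / 2, (vs1 + vs2) / 2, (rho1 + rho2) / 2
    dvp, dvs, drho = vp2 - vp1, vs2 - vs1, rho2 - rho1
    A = 0.5 * (dvp / vp + drho / rho)        # normal-incidence reflectivity
    B = 0.5 * dvp / vp - 2.0 * (vs / vp) ** 2 * (drho / rho + 2.0 * dvs / vs)
    return A, B

def reflectivity(A, B, theta_deg):
    """Evaluate the two-term approximation at an incidence angle."""
    return A + B * math.sin(math.radians(theta_deg)) ** 2
```

Fitting A and B to observed amplitudes at each time sample is what produces the intercept and gradient sections discussed later in the abstract.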
Ghar reservoir characteristics in the Aboozar oil field: The Aboozar oil field is located in the north-west of the Persian Gulf, about 75 km from Khark Island. The field was discovered in the late 1950s and production started in 1976. The main hydrocarbon-producing layer in this field is the Ghar sandstone reservoir, of Oligo-Miocene age. It has an anticlinal structure elongated in the north-west to south-east direction, at depths between 820 and 880 meters. This sandstone layer corresponds to the Ahvaz sandstone member of the Asmari Formation.
AVO analysis in the Ghar sandstone reservoir: In this paper, the different techniques involved in AVO analysis, such as forward modeling, fluid replacement modeling (FRM), extraction of various attributes, and X-Plot techniques, are applied to the Ghar sandstone reservoir to investigate the ability of the AVO method to detect light hydrocarbons in the north-west of the Persian Gulf. The study was carried out over a seismic line crossing a well in the Aboozar field, for which the well-log data required for forward modeling (density, P-wave, and S-wave logs) were available.
Forward modeling: AVO modeling was applied to investigate amplitude-versus-offset (AVO) variations and to identify the parameters that produce them. With the help of the available logs and using the Zoeppritz equations and ray tracing, a synthetic seismogram at the well was produced. After producing the preliminary synthetic seismogram, the significant reflections on the real seismic data and the synthetic seismogram were compared; through this forward modeling the time-depth curve of the well was modified and the seismic data were calibrated. Then, on the synthetic seismogram at the upper boundary of the Ghar sandstone reservoir, the AVO curve showing the variation of reflection coefficient versus offset was extracted. This curve shows that the AVO anomaly from the upper boundary of the Ghar sandstone reservoir is of class IV, with a positive gradient (B) and negative intercept (A). Class IV corresponds to a gas sandstone with low acoustic impedance; amplitudes decrease with offset at the upper boundary of the reservoir.
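The class assignment mentioned above can be sketched as a simple sign test on the intercept-gradient pair. The threshold below is illustrative only (class definitions in the literature use a small-magnitude intercept for class II, with no universal cutoff), so this is an assumed convention, not the paper's:

```python
def avo_class(A, B, small=0.02):
    """Rough AVO class from intercept A and gradient B.
    Class I: strong positive A, B < 0; class II: near-zero A, B < 0;
    class III: A < 0, B < 0; class IV: A < 0, B > 0 (the Ghar case)."""
    if B > 0:
        return "IV" if A < 0 else "unclassified"
    if abs(A) <= small:
        return "II"
    return "I" if A > 0 else "III"
```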
Fluid replacement modeling (FRM): In this step, to verify that the AVO anomalies from the Ghar reservoir are related to fluid, and mostly to gas, FRM was applied in the well area. With the help of FRM, the best attributes for identifying the upper boundary of the Ghar reservoir were distinguished; these attributes are related mostly to the intergranular fluid. FRM is based on the Gassmann equation. For this purpose, the three logs (P-wave velocity, S-wave velocity, and density) were computed for three fluid scenarios (the in-situ situation, 100% water saturation, and 80% gas with 20% water saturation), and synthetic seismograms were produced for each.
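The core of FRM is the Gassmann substitution for the saturated bulk modulus. A minimal sketch follows (moduli in GPa; the dry-rock, mineral, and fluid values in the usage lines are hypothetical):

```python
def gassmann_ksat(k_dry, k_min, k_fl, phi):
    """Saturated bulk modulus from the Gassmann equation:
    K_sat = K_dry + (1 - K_dry/K_min)**2
            / (phi/K_fl + (1 - phi)/K_min - K_dry/K_min**2),
    with K_dry the dry-frame modulus, K_min the mineral modulus,
    K_fl the pore-fluid modulus, and phi the porosity."""
    num = (1.0 - k_dry / k_min) ** 2
    den = phi / k_fl + (1.0 - phi) / k_min - k_dry / k_min ** 2
    return k_dry + num / den

# Replacing gas (K_fl ~ 0.04 GPa) with brine (K_fl ~ 2.8 GPa) stiffens
# the rock; the shear modulus is unchanged by the pore fluid.
k_gas = gassmann_ksat(k_dry=5.0, k_min=37.0, k_fl=0.04, phi=0.25)
k_brine = gassmann_ksat(k_dry=5.0, k_min=37.0, k_fl=2.8, phi=0.25)
```

With the substituted bulk modulus (and unchanged shear modulus and updated density), new Vp and Vs logs can be computed for each fluid scenario, which is exactly what feeds the three synthetic seismograms above.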
AVO attribute study on seismic data: Time sections of AVO attributes were extracted from the real seismic data and used to identify AVO anomalies. Among these attributes are the gradient (B), intercept (A), S-wave reflection coefficient (Rs), P-wave reflection coefficient (Rp), and Poisson's ratio change (Δσ). Of the extracted attributes, the intercept (A) and Poisson's ratio change (Δσ) delineate the reservoir area most accurately; the Δσ section also shows the strongest variation at the upper boundary of the Ghar reservoir and identifies the reservoir area most precisely. On the gradient (B) attribute section, the upper boundary of the Ghar sandstone has negative values, and with the help of the intercept (A) and gradient (B) the type of AVO anomaly was determined.
X-Plot techniques
(a) Intercept (A) versus Gradient (B) X-Plot from the well-data synthetic seismogram
The intercept-versus-gradient X-Plot can be used in the interpretation of AVO analysis; it is a technique for classifying AVO responses and identifying hydrocarbon-bearing sediments. Using rock-physics parameters, AVO modeling, and the X-Plot technique, the polarity of AVO anomalies can be analyzed. The X-Plot technique was applied to separate the reservoir fluids on both the real seismic data and the synthetic seismogram from the well data. A 100-millisecond window over the Ghar reservoir interval of the synthetic seismogram supplied the points for the intercept-versus-gradient X-Plot. In the resulting X-Plot, most of the points follow a wet trend (along the second and fourth quadrants); the remaining points, trending toward the first and third quadrants, mark the hydrocarbon section.
(b) Intercept (A) versus Gradient (B) X-Plot from the real seismic data
X-Plots are a powerful technique for separating zones of different fluid content and lithology. To obtain more accurate results and determine the precise boundaries of the reservoir, the X-Plot was produced for the seismic line between crosslines 200 and 300, in a 150-millisecond window centered on the 750-millisecond level (the time of the upper boundary of the Ghar sandstone). This X-Plot enables the reservoir section to be separated. According to the obtained points, three zones were determined, as follows:
- a water zone trending along the bisector of the first and third quadrants;
- a first hydrocarbon zone in the upper part of the reservoir;
- a second hydrocarbon zone in the lower part of the reservoir, above the water zone.
The results obtained from this X-Plot correlate well with those of the well-seismogram X-Plot.
Conclusion: The purpose of this paper is to assess the abilities of the AVO method in the exploration of hydrocarbon reservoirs using pre-stack seismic data in the north-west of the Persian Gulf, and the results of this study clearly demonstrate those abilities. A synthetic seismogram was produced from the well-logging data of the Aboozar oil field using forward modeling, and with its help the cause of the anomalies observed on the pre-stack seismic data at the upper boundary of the Ghar reservoir was identified. From the AVO curve, the anomaly type, class IV, was determined. In this step the time-depth curve was corrected by matching the well and seismic data, and the seismic data were calibrated.
In the forward-modeling step, well-logging data were produced synthetically using fluid replacement modeling (FRM), and with the help of the synthetic seismograms and the extraction of different attributes for the three fluid scenarios (the in-situ situation, 100% water saturation, and 80% gas with 20% water saturation), the attributes most sensitive to fluid, and thus able to distinguish the upper boundary of the Ghar reservoir, were determined. These attributes are the intercept (A), the Poisson's ratio change (Δσ), the P-wave reflection coefficient, and the A×sign(B) attribute.
In this study, AVO attributes were also extracted from the real seismic data using different methods. The attributes best able to distinguish the reservoir boundaries were determined, and a good correlation was observed between these attributes and the results obtained from forward modeling. With the help of the different attributes, X-Plots of intercept (A) versus gradient (B) were produced over the reservoir area. Using these plots, the hydrocarbon-bearing area was separated in the lower and upper sections of the reservoir, in good agreement with the results obtained from the intercept-versus-gradient X-Plot of the synthetic seismogram over the reservoir area.
Full text: https://jesphys.ut.ac.ir/article_24308_55d740a5a16ec28939bf5191a794d8de.pdf

Journal of the Earth and Space Physics, Institute of Geophysics, University of Tehran, ISSN 2538-371X, Vol. 37, No. 4 (2012), pp. 179-193. DOI: 10.22059/jesphys.2012.24309
An investigation into the activity of the North Neyshabour fault, eastern Iran
Morteza Fattahi, Somayeh Rostami Mehraban, Morteza Talebian, Abbas Bahroudi, J. Hollingsworth, R. Walker

Iran is one of the most tectonically active parts of the world and regularly experiences earthquakes of both low and high magnitude. Earthquake hazard assessment is therefore essential, both before any building construction and for areas already built and populated. A vital first step in this type of study is to identify, map, and determine the activity of the faults within a given region. Investigating fault activity requires estimation of the average fault slip rate, the recurrence interval between earthquakes, and the time of the last earthquake produced by each individual fault. Neyshabour is one of the most important cities in NE Iran; the city has been destroyed four times by major historical earthquakes. Three large faults exist in the region (the North Neyshabour, Binalud and Neyshabour faults).
The North Neyshabour and Binalud faults lie at the foot of the Binalud range north of Neyshabour. The North Neyshabour fault has a relatively sinuous surface trace, typical of a thrust fault, and does not show any clear strike-slip component.
The North Neyshabur thrust fault is exposed in a river section, at 36o180N 58o500E. The fault dips 60o north, and forms ~8-9 m high fault scarp at the surface which vertically offsets a Quaternary terrace. Within the river section, a yellow sandstone unit is offset by 9 m. A 60 by 40 cm sample of this unit was collected from the river exposure for optically stimulated luminescence (OSL) dating (location: 36o18.3090N 58o50.2700E). The sample was dated in the Oxford luminescence lab using a Riso (Model TL/OSL-DA-15) automated TL/OSL system under subdued red light (for details of the method see Fattahi et al., 2006, 2007; Fattahi and Walker, 2007). Eighteen subsamples of sample N5 demonstrated a wide paleodose distribution. This suggests that the sediment may not have been completely reset upon deposition (i.e. not all ‘trapped’ electrons from an earlier burial period were reset during sediment transport). This causes the mean age determination using weighted mean, 42000-68000 year, to overestimate the real deposition age. One solution to this problem is to assume the date of the youngest grains represent the time of deposition, giving a lower age of 22200-26000 year. However, we decided to use both age estimates for slip rate determination. As the top Quaternary terrace, has been displaced ~ 8-9 m at the surface, we calculated two slip rate using both average and minimum ages for calculating the slip-rate on the North Neyshabur fault (~0.1–0.2 and ~0.3-0.4mm/yr), respectively.Iran is one the most tectonically active parts of the world and regularly experiences earthquakes of both low and high magnitude. Therefore, earthquake hazard assessment before any kind of building construction and for already built and populated area is essential. A vital first step in this type of study is to identify, map, and determine the activity of faults within a given region. 
Investigating fault activity requires estimation of the average fault slip-rate, the recurrence interval between earthquakes, and the time of the last earthquake produced by each individual fault. Neyshabour is one of the most important cities in NE Iran. The city has been destroyed four times by major historical earthquakes. Three large faults exist in the region (the North Neyshabour, Binalud and Neyshabour faults). The North Neyshabour and Binalud faults lie at the foot of the Binalud range north of Neyshabour. The North Neyshabour fault has a relatively sinuous surface trace, typical of a thrust fault, and does not show any clear strike-slip component.
The North Neyshabur thrust fault is exposed in a river section at 36°18′N 58°50′E. The fault dips 60° north and forms an ~8-9 m high fault scarp at the surface that vertically offsets a Quaternary terrace. Within the river section, a yellow sandstone unit is offset by 9 m. A 60 × 40 cm sample of this unit was collected from the river exposure for optically stimulated luminescence (OSL) dating (location: 36°18.309′N 58°50.270′E). The sample was dated in the Oxford luminescence lab using a Risø (Model TL/OSL-DA-15) automated TL/OSL system under subdued red light (for details of the method see Fattahi et al., 2006, 2007; Fattahi and Walker, 2007). Eighteen subsamples of sample N5 showed a wide paleodose distribution, suggesting that the sediment may not have been completely reset upon deposition (i.e. not all 'trapped' electrons from an earlier burial period were released during sediment transport). This causes the weighted-mean age, 42,000-68,000 years, to overestimate the true deposition age. One solution to this problem is to assume that the dates of the youngest grains represent the time of deposition, giving a lower age of 22,200-26,000 years. However, we decided to use both age estimates for slip-rate determination.
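The arithmetic behind the slip rates quoted below can be sketched as follows; this is a minimal illustration using the offset and age ranges given in the text, not the authors' code.

```python
# Hedged sketch: converting a measured fault offset and an OSL age range into
# an average slip rate, rate = offset / age. Offsets and ages are the values
# quoted in the text for the North Neyshabur fault.

def slip_rate_mm_per_yr(offset_m, age_yr):
    """Average slip rate in mm/yr from an offset (m) and an age (yr)."""
    return offset_m * 1000.0 / age_yr

# ~8-9 m scarp, weighted-mean OSL age 42,000-68,000 yr (the slowest rate pairs
# the smallest offset with the oldest age, and vice versa):
rate_lo = slip_rate_mm_per_yr(8.0, 68000)   # ≈ 0.12 mm/yr
rate_hi = slip_rate_mm_per_yr(9.0, 42000)   # ≈ 0.21 mm/yr

# Minimum (youngest-grain) age range 22,200-26,000 yr:
rate_lo_young = slip_rate_mm_per_yr(8.0, 26000)  # ≈ 0.31 mm/yr
rate_hi_young = slip_rate_mm_per_yr(9.0, 22200)  # ≈ 0.41 mm/yr
```

These reproduce the ~0.1-0.2 and ~0.3-0.4 mm/yr ranges quoted in the abstract.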
As the top Quaternary terrace has been displaced ~8-9 m at the surface, we calculated two slip rates for the North Neyshabur fault, using the average and the minimum ages (~0.1-0.2 and ~0.3-0.4 mm/yr, respectively).
https://jesphys.ut.ac.ir/article_24309_dd7e36a4ae9e3fd142f6152b2dc7e351.pdf

Journal of the Earth and Space Physics (Institute of Geophysics, University of Tehran), ISSN 2538-371X, Vol. 37, No. 4, 2012. Study of changes in physical parameters of Chahbahar Bay water in the winter monsoon (2006-2007), pp. 195-216, article 24310, DOI: 10.22059/jesphys.2012.24310. Fereshteh Komijani, Vahid Chegini, Mohammad Reza Banazade Mahani, Mohammad Saeed Sanjani.
In this study, variations of field data such as temperature, salinity and density along and across the transects of Chahbahar Bay were analyzed using CTD data acquired between winter 2006 and spring 2007 (the winter monsoon). The results showed that the thermocline intensified at the mouth of the Bay between 2 m and 6 m depth, with a 7.81 °C decrease across it, before the Gonou hurricane occurred (mid-May 2007). Below the thermocline, down to 24 m depth, temperature ranged between 24 °C and 25.5 °C with small seasonal variation. A week after the hurricane-related winds weakened, density and salinity distributions throughout the Bay showed strongly stratified conditions. This stratification was generated by the inflow of low-salinity Oman Sea water toward the Chahbahar Bay coast and by the decrease in wind speed and mixing, which produced a vertically uniform density (salinity) gradient perpendicular to the Bay mouth.
The decrease of salinity with depth was related to the subsurface low-salinity Oman Sea water (average depth between 25 m and 150 m, salinity less than 36.5 psu). Upwelling from the seabed up to 9-10 m depth was caused by westerly winds. Balancing of the water salinity by the Oman Sea current made the vertical density and salinity gradients at the mouth of the Bay smaller than those in the north of the Bay. The statistical tests showed that horizontal and vertical density variations were mostly due to water temperature rather than salinity. A Randomized Complete Block Design test showed that the circulation of Chahbahar Bay water is cyclonic in winter and April and anticyclonic in May and June.
Material and methods: The CTD data were collected at 24 stations, spaced 3.6 km apart, along 9 transects in the Bay (Fig. 1). Four of the transects were perpendicular to the coast and five were parallel to it. Temperature, salinity and density were sampled continuously in the middle months of the winter 2006 and spring 2007 seasons. The CTD was set to record at one-second intervals. Meteorological data, consisting of wind velocity, air temperature and humidity, were obtained from a station located at the Meteorological Center of Chahbahar. The meteorological station collected time series at 1-hour intervals over a 6-month period between January 2006 and June 2007. Corrections were applied to refer these data to 10 m above the sea surface. In the spatial data analysis, the normality of the measured data was checked with the one-sample Kolmogorov-Smirnov nonparametric test, and the correlation of these data was determined with the Pearson correlation test. Finally, regression analysis was performed between temperature, salinity, density and depth.
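The correlation-and-regression step of this workflow can be sketched in a few lines; this is a pure-Python stand-in for the statistical-package tests the authors used, and the depth/temperature samples below are invented for illustration.

```python
# Hedged sketch: Pearson correlation between two sampled variables (e.g.
# temperature vs. depth) followed by a least-squares regression line, the two
# final steps of the analysis described above. Sample values are hypothetical.
import math

def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def linreg(x, y):
    """Slope and intercept of the least-squares line y = a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return a, my - a * mx

depth = [0.0, 2.0, 4.0, 6.0, 10.0, 15.0, 20.0, 24.0]            # m (hypothetical)
temp = [29.0, 28.5, 24.0, 21.2, 25.0, 24.8, 24.5, 24.1]         # °C (hypothetical)
r = pearson_r(depth, temp)        # negative: temperature drops with depth
slope, intercept = linreg(depth, temp)
```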
Conclusions: In this study, monthly variations of physical properties of the water, such as temperature, salinity and density, were determined in Chahbahar Bay by CTD sampling during the winter monsoon (winter 2006 and spring 2007), and the effect of meteorological events such as the Gonou hurricane was analyzed.
In winter, the increase of turbulent kinetic energy and the decrease of solar radiation caused the mixed layer to deepen as far as the seabed. In spring, the thermal structure of the Bay showed that the mixed-layer thickness decreased to 2 m, reflecting seasonal meteorological changes and the hurricane. In spring, before the hurricane, the temperature structure showed a thermocline between 2 m and 6 m depth with a 7.81 °C temperature decrease across it. This layer was caused by a 4 °C increase of air temperature during April and May. After the hurricane, the density and salinity fields showed more vertically uniform distributions while remaining strongly stratified throughout the Bay; before the hurricane, the thermocline had intensified at the mouth of the Bay.
Before the hurricane, salinity contours showed upwelling in the Bay, driven by the subsurface low-salinity Oman Sea water. Vertical salinity variations were very small, ranging mainly between 35.99 and 36.97 psu with very slight seasonal variation. The vertical salinity gradient at the coastal stations was larger than at the mouth of the Bay; this decrease was due to the inflow of Oman Sea water. Salinity in the halocline was slightly higher than in the surface mixed layer. Due to an increase in temperature, density was slightly lower than in winter. Salinity data before the hurricane showed that this parameter decreased with depth. The time series of wind velocity and temperature showed that upwelling moved water from the seabed up to 9-10 m depth. The statistical tests clearly showed that the density variations were governed mostly by water temperature rather than salinity; the thermohaline circulation in the Bay was therefore controlled by temperature. The water circulation pattern was cyclonic in winter and anticyclonic in spring.
https://jesphys.ut.ac.ir/article_24310_7715992ba20d4e5fbc7277a3e16e2346.pdf

Journal of the Earth and Space Physics (Institute of Geophysics, University of Tehran), ISSN 2538-371X, Vol. 37, No. 4, 2012. Using PCA and RDA feature reduction techniques for ranking seismic attributes, pp. 217-227, article 24311, DOI: 10.22059/jesphys.2012.24311. Saeedeh Hemmatpour, Hossein Hashemi.
Optimal attributes are useful in the interpretation of seismic data. Two methods for finding optimal attributes are presented in this paper. Regularized Discriminant Analysis (RDA) is based on two regularization parameters, λ and γ. The other method is Principal Component Analysis (PCA). In this paper, gas-chimney detection is taken as the case study for ranking the relevant attributes.
We used 4817 samples of the two classes, i.e. gas chimney and non-chimney, with the 28 attributes listed in Table 1. These attributes were picked by an experienced interpreter. Among them, Similarity (time window: [-120,-40]), Similarity (time window: [40,120]) and Similarity (time window: [-40,40]) rank highest in the forward-selection algorithm, while Similarity (time window: [-120,-40]), Similarity (time window: [-40,40]) and Energy (time window: [-120,-40]) rank highest in the backward-selection algorithm of the RDA method. It should be noted that, because the number of observations is large, 70% of all observations were used for training and 30% for testing. The discriminant function is given in matrix form below.
The classification error rate for RDA is 0.09 with λ=0.01 and γ=0.1, 0.1 with λ=0.1 and γ=0.1, and 0.09 with λ=0.1 and γ=0.01.
In matrix form, the quadratic discriminant score is d_k(x) = (x − μ_k)^T Σ_k^{-1} (x − μ_k) + ln|Σ_k| − 2 ln π_k, where Σ_k is the covariance matrix of the k-th class, μ_k is the mean vector of the k-th class, π_k is the prior probability of the k-th class, and (x − μ_k)^T is the transpose of (x − μ_k).
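The regularization behind the two parameters can be sketched as below. This is a minimal illustration in the style of Friedman's RDA (λ blends each class covariance with the pooled covariance, γ shrinks toward a multiple of the identity), assuming NumPy; it is not the authors' implementation, and the toy matrices are invented.

```python
# Hedged sketch of RDA-style regularized covariance estimation and the
# quadratic discriminant score described in the text. Assumes numpy.
import numpy as np

def rda_covariance(cov_k, cov_pooled, lam, gamma):
    """Friedman-style regularization: blend with pooled cov, then shrink
    toward (trace/p) * identity."""
    p = cov_k.shape[0]
    cov_lam = (1.0 - lam) * cov_k + lam * cov_pooled
    return (1.0 - gamma) * cov_lam + gamma * (np.trace(cov_lam) / p) * np.eye(p)

def discriminant_score(x, mu_k, sigma_k, prior_k):
    """d_k(x) = (x-mu)^T Sigma^{-1} (x-mu) + ln|Sigma| - 2 ln pi_k;
    the class with the smallest score wins."""
    diff = x - mu_k
    return (diff @ np.linalg.inv(sigma_k) @ diff
            + np.log(np.linalg.det(sigma_k)) - 2.0 * np.log(prior_k))
```

With both covariances equal to the identity, any (λ, γ) leaves the estimate unchanged, which is a convenient sanity check on the formula.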
In the PCA method, the principal components are obtained by computing the eigenvectors of the covariance matrix, i.e. by looking for the transformation with the least squared error. After these calculations, we compare the PCA scatter plots. The attributes selected by the PCA method are spectral decomposition with a Ricker wavelet (center frequency 60 Hz, width 2) and Energy (time window: [-40,40]).
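The PCA step just described can be sketched as follows; the 28 real seismic attributes are replaced by a small invented matrix, and NumPy is assumed.

```python
# Hedged sketch: principal components as eigenvectors of the attribute
# covariance matrix, ranked by explained variance, as described in the text.
import numpy as np

def pca_components(X):
    """Rows of X are samples, columns are attributes. Returns eigenvalues
    (variances) in descending order and the matching eigenvectors as columns."""
    Xc = X - X.mean(axis=0)                   # center each attribute
    cov = np.cov(Xc, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)          # eigh: ascending eigenvalues
    order = np.argsort(vals)[::-1]            # re-sort descending by variance
    return vals[order], vecs[:, order]

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))                 # toy stand-in for attribute data
X[:, 1] = 3.0 * X[:, 0] + 0.1 * X[:, 1]       # make one direction dominant
variances, components = pca_components(X)
```

The first component captures the dominant correlated direction; ranking attributes by their loadings on the leading components is one simple way to use this.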
For better judgment in selecting optimal attributes, two or more methods should be combined into an optimal scheme, and the different methods should also be compared pairwise.
Finally, the use of pattern recognition methods for interpreting seismic data is recommended.
https://jesphys.ut.ac.ir/article_24311_179f805070668abcd1b9eb96e9c63e41.pdf

Journal of the Earth and Space Physics (Institute of Geophysics, University of Tehran), ISSN 2538-371X, Vol. 37, No. 4, 2012. Moho depth and Vp/Vs variations in the Kope Dagh region from analysis of teleseismic receiver functions, pp. 1-12, article 24312, DOI: 10.22059/jesphys.2012.24312. Elham Mohammadi, Forogh Sodoudi, Ahmad Sadidkhouy, Mohammad Reza Gheitanchi.
In this study we use the P receiver function technique to determine the Moho depth and Vp/Vs ratio beneath 8 short-period stations of the Qochan and Mashhad seismic networks and to map the variations of Moho depth under the Kope Dagh region. A receiver function can provide a relatively good point measurement of Moho depth beneath a short-period station. The crustal thickness estimated from the delay time of the Moho P-to-S converted phase trades off strongly with the crustal Vp/Vs ratio. This ambiguity can be reduced significantly by incorporating the later multiple converted phases, namely PpPs and PpSs+PsPs. We use a stacking algorithm that sums the amplitudes of the receiver functions at the arrival times of these phases predicted for different crustal thicknesses H and Vp/Vs ratios (Zhu & Kanamori, 2000). This transforms the time-domain receiver functions directly into the H-Vp/Vs domain without the need to identify these phases and pick their arrival times. The best estimates of crustal thickness and Vp/Vs ratio are found when the three phases stack coherently. Applying this technique to 8 stations in the Kope Dagh region reveals that the Moho depth is approximately 45 km on average and varies between 41 and 49 km. Thick and thin crust are found under the southern and northern ranges, respectively.
These results are in good agreement with the geology and tectonic setting of this region.
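The delay-time predictions at the heart of the H-Vp/Vs stacking described above can be sketched as follows. This is a minimal illustration of the standard Zhu & Kanamori (2000) travel-time formulas; the P velocity and ray parameter are assumed values, not the study's.

```python
# Hedged sketch: predicted delay times (relative to direct P) of the Ps,
# PpPs and PpSs+PsPs phases for a trial crustal thickness H and Vp/Vs ratio
# kappa. Summing receiver-function amplitudes at these times over a grid of
# (H, kappa) gives the H-kappa stack used in the study.
import math

def moho_delay_times(H, kappa, vp=6.3, p=0.06):
    """H in km, p (ray parameter) in s/km; returns (t_Ps, t_PpPs, t_PpSs+PsPs) in s."""
    vs = vp / kappa
    qs = math.sqrt(1.0 / vs**2 - p**2)   # vertical S slowness
    qp = math.sqrt(1.0 / vp**2 - p**2)   # vertical P slowness
    t_ps = H * (qs - qp)                 # Moho P-to-S conversion
    t_ppps = H * (qs + qp)               # first crustal multiple
    t_ppss = 2.0 * H * qs                # PpSs + PsPs multiple
    return t_ps, t_ppps, t_ppss

delays = moho_delay_times(45.0, 1.75)    # ~5.6 s Ps delay for a 45 km crust
```

The trade-off mentioned in the abstract is visible here: increasing H or decreasing kappa can produce the same t_Ps, which is why the multiples are needed to pin both down.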
https://jesphys.ut.ac.ir/article_24312_f68690277f0884dce7358bbf8b3b4cc6.pdf

Journal of the Earth and Space Physics (Institute of Geophysics, University of Tehran), ISSN 2538-371X, Vol. 37, No. 4, 2012. Linear gravity inversion including the minimum moment of inertia, pp. 13-26, article 24313, DOI: 10.22059/jesphys.2012.24313. Vahid E. Ardestani, Ali Nejati Kalate.
Compact gravity inversion including the minimization of the moment of inertia has been applied to determine the geometry of anomalous bodies, which yields much better depth resolution.
The new algorithm is based on Lewi's (1996) procedure, extended to include the minimum moment of inertia. The method has been applied, with good results, to several 3-dimensional synthetic models and real examples.
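The compactness idea behind this family of inversions can be sketched as an iteratively reweighted minimum-length solution: cells with small density receive large penalty weights, which squeezes the recovered mass into a compact body. The code below is an assumed minimal implementation of that generic weighting scheme (in the spirit of compact inversion), not the paper's algorithm, and the kernel and data are a toy problem.

```python
# Hedged sketch: compactness weighting for linear gravity inversion. Each
# iteration solves the weighted minimum-length problem
#   m = W^{-1} A^T (A W^{-1} A^T + eps*I)^{-1} d,  W_jj = 1/(m_j^2 + eps),
# so cells that were small last iteration are penalized harder.
import numpy as np

def compact_inversion(A, d, n_iter=5, eps=1e-6):
    """A: kernel (n_data x n_cells), d: observed anomaly. Returns cell densities."""
    n_cells = A.shape[1]
    m = np.zeros(n_cells)
    for _ in range(n_iter):
        w_inv = m**2 + eps                      # diagonal of W^{-1}
        Aw = A * w_inv                          # A @ diag(w_inv), via broadcasting
        m = w_inv * (A.T @ np.linalg.solve(Aw @ A.T + eps * np.eye(len(d)), d))
    return m

# Toy problem: 2 observations, 4 cells; the solution reproduces the data.
A = np.array([[1.0, 0.5, 0.2, 0.1],
              [0.1, 0.2, 0.5, 1.0]])
d = np.array([1.0, 1.0])
m = compact_inversion(A, d)
```

After a few iterations the data are fit essentially exactly while the model concentrates into the best-coupled cells, which is the compactness effect the paper exploits.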
The advantage of this combined method is demonstrated by comparing it with the other methods.
https://jesphys.ut.ac.ir/article_24313_c27701a325db0c3ef2e04f178b082fb0.pdf

Journal of the Earth and Space Physics (Institute of Geophysics, University of Tehran), ISSN 2538-371X, Vol. 37, No. 4, 2012. Detection of a fault zone in the south of Bam using the CSTMT method, pp. 27-39, article 24314, DOI: 10.22059/jesphys.2012.24314. Davood Moghadas, Behrooz Oskooi, Saeed Hashemi Tabatabaei, L. Pesersen, Aziz Nasuti.
Due to their inherent sensitivity to resistivity contrasts, EM methods play an important role in the detection of fault zones. In this paper, we applied the Controlled Source Tensor Magnetotelluric (CSTMT) method to recognize a fault zone in the Bam area. An earthquake devastated the town of Bam on 26 December 2003. Surface displacements reveal that over 2 m of slip occurred at depth on a fault that had not previously been identified. This fault, located south of Bam, is a strike-slip fault that extends from the centre of Bam city southwards for about 12 km. Data were collected along a profile oriented approximately NW-SE, perpendicular to the fault. The frequencies used in this method lie in the range 1-25 kHz. The CSTMT field measurements resolved the resistivity contrast along the profile well, and the 1D and 2D inversion models agree well with the data. Consequently, CSTMT proved to be a useful method for detecting such structures, utilizing tensor data produced by two perpendicular magnetic loops together with an array of four electric sensors.
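The 1-25 kHz band mentioned above controls the investigation depth through the standard EM skin depth, δ ≈ 503·sqrt(ρ/f) metres. The sketch below illustrates that relation; the resistivity values are illustrative, not the survey's.

```python
# Hedged sketch: EM skin depth vs. frequency for the CSTMT band. Higher
# frequencies sample shallower, lower frequencies deeper, for a given
# ground resistivity rho (ohm-m).
import math

def skin_depth_m(rho_ohm_m, freq_hz):
    """Plane-wave skin depth in metres: ~503 * sqrt(rho / f)."""
    return 503.3 * math.sqrt(rho_ohm_m / freq_hz)

shallow = skin_depth_m(100.0, 25000.0)   # top of the band: ~32 m
deep = skin_depth_m(100.0, 1000.0)       # bottom of the band: ~160 m
```

This is why a kHz-band controlled source is well matched to mapping a shallow fault zone like the one south of Bam.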
https://jesphys.ut.ac.ir/article_24314_6e77440f5548256dc785a2715b9d691e.pdf

Journal of the Earth and Space Physics (Institute of Geophysics, University of Tehran), ISSN 2538-371X, Vol. 37, No. 4, 2012. Seismic refraction and downhole surveys for characterization of shallow-depth materials of Bam city, southeast of Iran, pp. 41-58, article 24315, DOI: 10.22059/jesphys.2012.24315. Mohammad Ali Riahi, S. Hashem Tabatabaei, Ali Beytollahi, Abbas Ghalandarzadeh, Morteza Talebian, Morteza Fattahi.
Seismic refraction and downhole surveys were employed to study the dynamic characteristics of subsurface materials in Bam city, southeast of Iran. Data were acquired at 160 P- and S-wave refraction stations and 15 boreholes in the city. SeisImager software was used to derive velocity-depth sections along these profiles and to produce the downhole diagrams. Based on the obtained values, iso-depth, iso-velocity and iso-Poisson's-ratio maps of the city were prepared. From the velocity variations of the P and S waves, three layers were identified.
The first layer has low velocity, the second layer medium velocity, and the third layer relatively high velocity. The thickness of the first layer increases from the southwest towards the northeast of the study area, while the thicknesses of the second and third layers decrease from southwest to northeast. In the southwest of the region, because of the thickening of the third layer, the bottom of this layer could not be detected even using the far-shot seismic data. In other words, in most of the northeastern part of the region only two layers, with low and medium velocities, were determined. Considering the distribution of the P- and S-wave velocities, the Poisson's ratio distribution and the attenuation coefficients obtained for the identified layers in Bam city, it is concluded that the low velocity, large thickness and high attenuation coefficient of the first layer can be regarded as the cause of the strong ground motion in the city during the Bam earthquake of 26 December 2003.
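The iso-Poisson's-ratio maps mentioned above follow from the measured P- and S-wave velocities via the standard elastic relation; a minimal sketch, with illustrative velocities rather than the survey's values:

```python
# Hedged sketch: Poisson's ratio from P- and S-wave velocities,
#   nu = (vp^2 - 2*vs^2) / (2*(vp^2 - vs^2)),
# the quantity contoured in the iso-Poisson's-ratio maps.
def poissons_ratio(vp, vs):
    return (vp**2 - 2.0 * vs**2) / (2.0 * (vp**2 - vs**2))

nu = poissons_ratio(1500.0, 866.0)   # vp/vs ≈ sqrt(3) gives nu ≈ 0.25
```

High Poisson's ratios (approaching 0.5) flag loose, possibly saturated near-surface material, which is why this map complements the velocity and thickness maps in a site-response study.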
https://jesphys.ut.ac.ir/article_24315_4f37c53e2c18f7a2bd7c6f6e28b63960.pdf