Journal of the Earth and Space Physics, Institute of Geophysics, University of Tehran. ISSN 2538-371X, Vol. 41, No. 2, 2015-07-23.
Crustal deformation monitoring in Qoshadagh Mountains by analysis of GPS data, tectonic stress and SAR interferometry technique
Pages 167-176, Article 54581, DOI 10.22059/jesphys.2015.54581
Ehsan Saadatfar, M.Sc., Department of Remote Sensing Engineering, Kerman Graduate University of Technology, Kerman, Iran
Behzad Zamani, Associate Professor, Department of Geology, Faculty of Environment Science, Tabriz University, Iran
Journal Article, received 2014-02-05
<sup>*</sup>Corresponding author, E-mail: e.saadatf@yahoo.com
The 2012 Ahar-Varzaghan earthquakes (Mw 6.2 and 6.4) and the four months of aftershocks related to these events showed the concentration of deformation and stress in NW Iran. The study area is a tectonic relay region between the North Anatolian fault system, an active fault system located in Turkey, and the Alborz and Zagros in the north and southeast of Iran, respectively. The epicentral locations of the main shocks, their mechanisms and the aftershock distribution show that this recent large earthquake in Iran may have sources other than the Tabriz and Ahar faults, which are the two main active faults of NW Iran. In order to study these deformations, GPS geodesy data, the tectonic stress that causes the deformations, and radar data were analyzed.
Synthetic Aperture Radar (SAR) is a coherent active microwave remote sensing system that can effectively map the scattering properties of the Earth's surface and has already been intensively investigated. One of the major applications of SAR technology is the interferometry (InSAR) technique, which exploits, in its basic form, the phase difference of two complex-valued SAR images (acquired from different orbit positions and at different times) to measure several parameters, such as deformation. However, geometrical and temporal decorrelation degrade the accuracy and sometimes even make the measurement impossible in deformation monitoring. Recent developments in differential interferometry have shown potential to overcome these limitations of conventional interferometry and to deliver more accurate, temporally resolved results. The new interferometric processing techniques include interferometric stacking and Persistent Scatterer Interferometry, a powerful group of techniques for measuring and monitoring deformation using interferometric SAR imagery. PSInSAR avoids many of the limitations of the conventional method by analyzing only those pixels which behave like point scatterers and retain some degree of correlation. This technique is an advanced type of differential interferometric SAR: it is based on large stacks of SAR images and suitable data modeling procedures that make the estimation of different parameters possible, including the deformation time series and the average displacement rates.
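As a minimal sketch of the basic InSAR principle described above, the interferometric phase is the phase of the product of one complex (single-look complex, SLC) image with the conjugate of the other. The example below uses synthetic speckle data rather than real Envisat scenes; all array sizes and the deformation ramp are illustrative assumptions.

```python
import numpy as np

def interferogram(slc1, slc2):
    """Form an interferogram from two co-registered complex (SLC) SAR images.

    The interferometric phase is the phase of the product of one image
    with the complex conjugate of the other; it wraps to (-pi, pi].
    """
    return np.angle(slc1 * np.conj(slc2))

# Toy demonstration: a synthetic phase ramp added between two acquisitions
# reappears as the (wrapped) interferometric phase.
rng = np.random.default_rng(0)
amp = rng.rayleigh(1.0, (64, 64))                  # speckle-like amplitude
ramp = np.linspace(0, 4 * np.pi, 64)[None, :]      # phase added between dates
slc1 = amp * np.exp(1j * rng.uniform(-np.pi, np.pi, (64, 64)))
slc2 = slc1 * np.exp(1j * ramp)                    # second acquisition
phase = interferogram(slc2, slc1)                  # wrapped version of `ramp`
```

In real processing the phase also contains orbital, topographic and atmospheric contributions; persistent scatterer methods estimate and remove these over a stack of many such interferograms.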
This research used the StaMPS method for monitoring interseismic crustal deformation in the Ahar-Varzaghan region. We use 20 Envisat ASAR images spanning 2003-2010 to study pre-seismic deformation over Ahar-Varzaghan by persistent scatterer interferometry. The ground deformation rate in mm/year along the line-of-sight direction of the satellite is obtained, and the results show that the maximum horizontal displacement rate equals 7.4 mm/year.
Application of tectonic stress inversion allows determination of a consistent average state of stress in the Ahar south thrust, in which the average stress is characterised by a NW-SE (117˚/16˚) direction of compression (maximum stress). This shows that the maximum stress is roughly horizontal, which is the cause of the formation and development of the thrusting in the west of the Qoshadagh. The stress analysis results, earthquake focal mechanisms, fault mechanisms, and GPS geodesy are all consistent with the radar interferometry results.
https://jesphys.ut.ac.ir/article_54581_8078684bc8dfe7a10486ef436c293d84.pdf

Blending and Deblending in Seismic Data Acquisition and Processing
Pages 177-191, Article 52805, DOI 10.22059/jesphys.2015.52805
Hooman Karimi, M.Sc., University of Tehran
Ali Gholami, Assistant Professor, University of Tehran
Journal Article, received 2014-05-05

In current seismic data acquisition techniques, sources are fired with large time intervals in order to avoid interference between the responses of successively firing sources measured by the receivers. This leads to a time-consuming and expensive survey. Theoretically, the waiting time between two successively firing sources would have to be infinite, since the wavefield never vanishes completely; in practice, this waiting time varies from a few seconds (s) up to 30 s, meaning that the source responses are negligible after the waiting time. As an example, within a time interval of 200 s, 40 source locations can be fired with a 5 s waiting time, or 20 source locations with a 10 s waiting time. Since decision making at the business level is usually based on minimizing acquisition costs, the source domain is usually poorly sampled to limit the survey duration, causing spatial aliasing (Mahdad, 2011). On the other hand, modifying the waiting times brings flexibility in the source sampling and the survey time. The concept of simultaneous or blended acquisition addresses these issues either by reducing the waiting time between firing sources, leading to reduced acquisition costs, or by increasing the number of sources within the same survey time, leading to higher data quality. A combination of the two approaches combines these benefits.
The price paid for achieving higher data quality at lower acquisition cost is dealing with the interfered data, called blended data, acquired in blended acquisition. Before further processing and imaging, one needs to break the blended data down into its original components (single-source responses) by a processing step called deblending, which attempts to retrieve the data as if they had been acquired in a conventional, unblended way. In this paper, we introduce the concept of simultaneous acquisition and examine three methods of deblending:
1) The least-squares method (pseudo-deblending), which perfectly predicts the blended data, but whose solution suffers from interference noise related to the interfering sources in the observations, the so-called blending noise (crosstalk). This noise has different characteristics in different domains of the data. For example, in the common-mid-point (CMP) domain it is incoherent and spike-like and thus can be tackled by a denoising algorithm.
2) Noise attenuation by multidirectional vector-median filter (MD-VMF). It is a generalization of the well-known conventional median filter from a scalar implementation to a vector form. More specifically, a vector median filter is applied in many trial directions and then the median vector is selected.
3) Regularization of the deblending operator matrix. Deblending is by itself an underdetermined and thus ill-posed problem, meaning that there are infinitely many solutions. Therefore, constraints are necessary to solve it. One possibility is a spatially band-limiting constraint, which is useful when the sources are densely sampled. It has been shown that under such constraints, the deblending operator matrix can be regularized to form a well-behaved direct deblending operator.
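The blending and pseudo-deblending operations in item 1 can be illustrated with a toy one-receiver example: blending shifts each source record by its firing time and sums, and pseudo-deblending (the adjoint) shifts the blended record back to each source's time origin, recovering the signal plus crosstalk. The wavelets, firing times and array sizes below are made-up illustrations, not the paper's actual implementation.

```python
import numpy as np

def blend(responses, delays, nt_total):
    """Blend single-source records: shift each by its firing time and sum."""
    out = np.zeros(nt_total)
    for r, d in zip(responses, delays):
        out[d:d + len(r)] += r
    return out

def pseudo_deblend(blended, delays, nt):
    """Adjoint of blending: shift the blended record back to each source's
    time origin. Recovers each signal plus crosstalk from the other
    (interfering) sources."""
    return [blended[d:d + nt].copy() for d in delays]

# Toy example: two sources fired with a short delay between them.
nt = 100
t = np.arange(nt)
r1 = np.exp(-0.5 * ((t - 20) / 3.0) ** 2)    # source 1 response (a wavelet)
r2 = np.exp(-0.5 * ((t - 35) / 3.0) ** 2)    # source 2 response
delays = [0, 40]                              # firing times in samples
blended = blend([r1, r2], delays, nt_total=nt + max(delays))
est1, est2 = pseudo_deblend(blended, delays, nt)
# est1 contains r1 plus crosstalk from source 2; est2 is nearly clean here
# because r1 has decayed by the time source 2 fires.
```

In a multi-source, multi-record survey the crosstalk appears incoherent in, e.g., the CMP domain, which is what the denoising-based methods exploit.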
Finally, by observing the wavefields deblended from synthetic and field data, we conclude that regularization of the blending operator matrix is reliable because of its accuracy in noise attenuation, its signal preservation, and the speed of the algorithm.
https://jesphys.ut.ac.ir/article_52805_d311e034aa1caad62a5e0f640b247e6e.pdf

Interpretation of magnetic data acquired on polymetal deposit of Oshvand and comparison results with resistivity and induced polarization by inverse modeling
Pages 193-204, Article 52812, DOI 10.22059/jesphys.2015.52812
Ardalan Khazaiefar, M.Sc. Student, School of Mining, Petroleum and Geophysics Engineering, University of Shahrood, Iran
Ali Nejati Kalateh, Assistant Professor, School of Mining, Petroleum and Geophysics Engineering, University of Shahrood, Iran
Amin Roshandel Kahoo, Assistant Professor, School of Mining, Petroleum and Geophysics Engineering, University of Shahrood, Iran (ORCID 0000-0002-2214-2558)
Faramarz Allahverdi Meigouni, Expert in Geophysics, Exploration Directorate, Geophysical Survey of Iran, Tehran, Iran
Journal Article, received 2014-09-29

There are various approaches for depth estimation from anomalous potential field data. Spectral analysis of gravity and magnetic data has been used extensively for many years to derive the depth to certain geological structures, such as the magnetic basement or the Curie temperature isotherm. The interpretation of gravity and magnetic data is preferred in the frequency domain because of the simple relation between various source models and their fields. The estimation of the depth of anomalous sources is usually carried out by the Spector and Grant method and its variants in the frequency domain.
This method, which assumes a uniform distribution of parameters for an ensemble of magnetized blocks, leads to a depth-dependent exponential rate of decay. In the frequency domain, geophysical source parameters have traditionally been assumed to be uncorrelated and randomly distributed. The assumption of uncorrelated random sources is not borne out by borehole data such as those from the German continental deep drilling project (KTB). Susceptibility data from the pilot hole were analyzed, and their power spectrum shows a generalized (scaling) behavior. Therefore, the generalized spectral method for gravity and magnetic data, based on a realistic distribution of anomalous sources, is useful for finding the depth values and the statistical properties of the source distribution. The scaling spectral method has been applied in many parts of the world. An important aspect of this method is that the scaling properties of the source distributions are related to the scaling properties of the fields in a general way; this relationship can be used to derive information on local geology. A technique to estimate the depth to anomalous sources from the generalized power spectra of magnetic profiles is presented. The power spectrum at low wavenumbers may be dominated by scaling properties alone rather than by the depth values. If the logarithm of the power spectrum of the Fourier-transformed potential field data is plotted versus wavenumber, then, although several factors affect the plot, depth is the dominant one. The depth of the various sources is thus found from the slope of this plot. If there is more than one ensemble, the slope at smaller wavenumbers gives the depth to the deeper sources, and subsequent slopes at higher wavenumbers give the depths of the shallower sources. The depth values calculated by this method are close to the realistic values. To test the reliability of any technique, it is necessary to test it on synthetic data.
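The slope-based depth estimate described above can be sketched as follows. This is an illustrative implementation under a simple Spector-Grant-type decay assumption, P(k) ~ exp(-2hk), with a made-up line-source profile as synthetic test data; it is not the paper's processing chain, and the wavenumber fitting range is an arbitrary choice.

```python
import numpy as np

def depth_from_spectrum(field, dx):
    """Estimate source depth from the slope of the log power spectrum.

    For a field whose power spectrum decays as P(k) ~ exp(-2*h*k)
    (Spector-Grant-type behaviour), the slope of ln P versus angular
    wavenumber k equals -2*h, so h = -slope / 2.
    """
    n = len(field)
    spec = np.abs(np.fft.rfft(field - field.mean())) ** 2
    k = 2 * np.pi * np.fft.rfftfreq(n, d=dx)
    # Fit the low-wavenumber part (skipping k = 0), where the deeper
    # sources dominate the spectrum.
    sel = slice(1, n // 8)
    slope, _ = np.polyfit(k[sel], np.log(spec[sel]), 1)
    return -slope / 2.0

# Synthetic test: the anomaly of a line source at depth h has a spectrum
# decaying as exp(-h*k) in amplitude, i.e. exp(-2*h*k) in power.
dx, h = 1.0, 5.0
x = np.arange(-512, 512) * dx
anomaly = h / (x ** 2 + h ** 2)          # simple line-source-type profile
depth = depth_from_spectrum(anomaly, dx)  # should be close to h = 5
```

With more than one source ensemble, the same idea is applied piecewise: a separate line is fitted to each linear segment of the log spectrum, low wavenumbers giving the deeper depth.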
In the present research, the efficiency of the generalized power spectrum has been investigated using a theoretical magnetic model, and the results have been compared with the conventional power spectrum. Finally, the generalized power spectrum method performed well in the depth estimation of anomalous sources from magnetic data acquired on the Oshvand polymetal deposit in Hamedan Province, and the results were compared with the conventional power spectrum, IP and RS methods. Based on previous studies conducted in the area, our estimation of the depth of anomalous sources by means of the generalized power spectrum approach has been evaluated and confirmed.
https://jesphys.ut.ac.ir/article_52812_6df441014f540a6edbb277e64418609d.pdf

Application of empirical mode decomposition and Hilbert spectrum in seismic data denoising and low frequency shadow identification
Pages 205-217, Article 52814, DOI 10.22059/jesphys.2015.52814
Mohammad Sadegh Parkan, Graduate student, Institute of Geophysics, University of Tehran
Hamid Reza Siahkoohi, Faculty of the Institute of Geophysics, University of Tehran
Ali Gholami, Faculty of the Institute of Geophysics, University of Tehran
Journal Article, received 2014-09-22

In this paper, some new applications of empirical mode decomposition (EMD) and the Hilbert spectrum to seismic ground-roll attenuation, random noise attenuation and spectral decomposition are introduced. The Hilbert spectrum is the time-frequency representation of the Hilbert-Huang transform, obtained by combining the instantaneous frequency (IF) concept with the intrinsic mode functions of empirical mode decomposition. This time-frequency representation is well suited to analyzing non-stationary data. The advantages and performance of the spectrum in seismic random noise attenuation and ground-roll removal are tested here on real and synthetic seismic data, and the results are satisfactory. In the attenuation of random noise, the instantaneous frequency filtering operation differs from other time-frequency decomposition methods, and the characteristics of this type of filtering are also discussed.

For spectral decomposition we introduce a new method: constant-frequency sections can be extracted using empirical mode decomposition and the Hilbert-Huang transform.
In addition, we have used the instantaneous frequency separately to construct constant-instantaneous-frequency sections to detect the low frequency shadow zone beneath the reservoir. Spectral decomposition using constant-instantaneous-frequency sections has advantages over other conventional time-frequency decomposition methods: obtaining constant-frequency sections through the Hilbert-Huang transform is time-consuming, whereas by using the instantaneous frequency separately, the massive calculation of empirical mode decomposition is omitted while the results do not differ from those of the Hilbert-Huang transform.

Here we explain how the instantaneous frequency spectrum can be obtained from intrinsic mode functions (IMFs). The empirical mode decomposition method developed by Huang et al. (1998) is a powerful signal analysis technique known to be highly suitable for non-stationary and non-linear signals, such as seismic data. EMD decomposes data into functions called intrinsic mode functions. However, EMD suffers from a problem called mode mixing. Wu and Huang (2009) proposed the ensemble empirical mode decomposition (EEMD) to solve the mode mixing problem of EMD; however, EEMD is not a complete decomposition method, and the original signal is not recovered exactly by summing all IMFs. Torres et al. (2011) proposed the complete ensemble empirical mode decomposition (CEEMD) algorithm, which overcomes mode mixing and provides an exact reconstruction of the original signal. In this paper we use the CEEMD algorithm combined with the Hilbert transform and the analytic signal to evaluate the instantaneous frequency. There are other methods to calculate the IF of a signal (for more information refer to Huang et al., 2009).
The analytic signal is obtained from the signal and its Hilbert transform; we can write

z(t) = x(t) + iH[x(t)] = a(t)e^(iφ(t)),

where H[x(t)] is the Hilbert transform of x(t) and z(t) is the analytic signal. Its IF can then be computed from

f(t) = (1/2π) dφ(t)/dt,

where φ(t) = arctan(H[x(t)]/x(t)) is the instantaneous phase and f(t) is the instantaneous frequency. In addition, at any given time we can obtain the instantaneous amplitude of the signal x(t) using

a(t) = √(x(t)² + H[x(t)]²).

Having each time with its corresponding frequency and instantaneous amplitude, we can show a 3D plot of time-frequency-amplitude, which is a time-frequency representation (TFR) similar to the STFT and the S spectrum. This TFR is called the instantaneous frequency spectrum or Hilbert spectrum. If we calculate the instantaneous frequency from the IMFs, the time-frequency analysis method is called the Hilbert-Huang transform.

Here we demonstrated the performance of the Hilbert spectrum in attenuating random and coherent seismic noise as well as in identifying the low frequency shadow zone on seismic sections. The results were acceptable, with no evidence of the negative frequencies or spikes that are common in conventional instantaneous frequency estimates.
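A minimal numerical sketch of these instantaneous attributes, using `scipy.signal.hilbert` for the analytic signal: this is an illustration on a single test tone only, whereas the paper computes the IF from CEEMD-derived IMFs rather than from the raw signal.

```python
import numpy as np
from scipy.signal import hilbert

def instantaneous_attributes(x, dt):
    """Instantaneous amplitude and frequency via the analytic signal.

    z(t) = x(t) + i*H[x(t)], a(t) = |z(t)|,
    f(t) = (1 / 2*pi) * d(phase)/dt.
    """
    z = hilbert(x)                                 # analytic signal
    amp = np.abs(z)                                # instantaneous amplitude
    phase = np.unwrap(np.angle(z))                 # instantaneous phase
    freq = np.gradient(phase, dt) / (2 * np.pi)    # instantaneous frequency, Hz
    return amp, freq

# A 30 Hz test tone: the recovered IF should sit near 30 Hz away from
# the window edges, with unit instantaneous amplitude.
dt = 0.001
t = np.arange(0, 1, dt)
x = np.sin(2 * np.pi * 30 * t)
amp, freq = instantaneous_attributes(x, dt)
```

Applying this per-IMF (instead of to the composite signal) is what avoids the negative frequencies and spikes mentioned above, since each IMF is locally narrow-band.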
https://jesphys.ut.ac.ir/article_52814_a92d13922f26b82fc653d4043e5f5755.pdf

An investigation of the proposed WALDIM criteria to identify electrical anisotropy in complex geological region; case study: continental margin
Pages 219-228, Article 52816, DOI 10.22059/jesphys.2015.52816
Mansoore Montahaei, Assistant Professor, Department of Earth Physics, Institute of Geophysics, University of Tehran, Iran
Banafshe Habibian Dehkordi, Assistant Professor, Department of Earth Physics, Institute of Geophysics, University of Tehran, Iran
Journal Article, received 2014-06-07

In an electrically anisotropic medium, the current density is not aligned with the electric field and varies with the E-field direction. This can be considered a spatial aliasing effect arising from subsurface structures whose individual dimensions are smaller than the inductive length scale of the diffusively propagating EM fields. The role of these structures is particularly significant in tectonically active regions, where tectonic processes have induced penetrative fabrics. Accordingly, the identification, characterization and interpretation of electrical anisotropy have large implications for understanding the evolution of geological structures, evaluating economic resources and interpreting hydrological flows (Wannamaker, 2005). Marti (2014) provides a comprehensive review of the work conducted to recognize the imprints of electrical anisotropy during MT data analysis, as well as the different strategies used to model this property in case studies.
Dimensionality analysis is a preliminary stage of the MT data interpretation procedure, used to recover the strike direction of the regional geo-electric structure, characterize the distortion effects of superficial conductive bodies and adopt an appropriate modeling approach (1D, 2D or 3D) consistent with the intrinsic dimension of the measured data. The application of the preliminary dimensionality tools, Swift's and Bahr's skews, to synthetic and real MT data affected by anisotropy shows that they are unable to identify the anisotropy footprints or to distinguish between structural and anisotropy strike directions (Heise and Pous, 2001). Weaver et al. (2000) suggested a family of rotationally invariant parameters characterizing the dimensionality of the underlying geo-electric structures. Marti et al. (2009, 2010) published the WALDIM code based on these invariants and extended them to provide conditions by which isotropic and anisotropic structures can be differentiated. The main criteria proposed by the WALDIM code to differentiate anisotropic media from isotropic ones are as follows:
The WAL rotational invariant values indicate a 2D regional structure, while the strike directions estimated from the first and second columns of the impedance tensor are inconsistent. This situation is referred to as “3D/2D anisotropy” in the subsequent table and figures.
We report here on the application of this scheme to analyze the dimensionality of MT responses from some principal anisotropic models representing complex geological settings at continental margins, and of MT data from an active continental margin in South-Central Chile, where the presence of electrical anisotropy has been previously recognized (mainly from geomagnetic transfer functions).
In electrical anisotropy modeling, resistivity is represented as a symmetric, positive definite tensor which can be diagonalized employing Euler’s elementary rotations to obtain its principal directions and the corresponding resistivities (principal resistivities ρ<sub>xx</sub>, ρ<sub>yy</sub>, ρ<sub>zz</sub>). These directions are known as the strike (α<sub>s</sub>), dip (α<sub>D</sub>) and slant (α<sub>L</sub>) anisotropy angles. The non-zero values of these angles and the specific relationships between the principal resistivities determine the type and geometry of the electrical anisotropy. We restricted our study to uniaxial, azimuthal anisotropy, where α<sub>s</sub> ≠ 0, α<sub>D</sub> = α<sub>L</sub> = 0 and ρ<sub>xx</sub> = ρ<sub>zz</sub> ≠ ρ<sub>yy</sub>.
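As a small illustration of this parameterization (an assumed sketch, not the modeling code used in the paper), the uniaxial azimuthally anisotropic resistivity tensor can be built by rotating the principal-axis tensor about the vertical axis by the strike angle:

```python
# Sketch: uniaxial, azimuthal anisotropy with rho_xx = rho_zz != rho_yy and
# alpha_D = alpha_L = 0, i.e. a rotation about the z-axis by alpha_s only.
import numpy as np

def azimuthal_resistivity_tensor(rho_xx, rho_yy, alpha_s_deg):
    """Rotate diag(rho_xx, rho_yy, rho_zz=rho_xx) by the strike angle."""
    rho = np.diag([rho_xx, rho_yy, rho_xx])  # principal resistivities
    a = np.radians(alpha_s_deg)
    Rz = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0,        0.0,       1.0]])
    return Rz @ rho @ Rz.T                   # symmetric, positive definite

# usage: 100 ohm-m along strike, 10 ohm-m across it, strike at 30 degrees
rho_m = azimuthal_resistivity_tensor(100.0, 10.0, 30.0)
```

The rotated tensor stays symmetric, and its eigenvalues recover the principal resistivities, which is what the diagonalization mentioned in the text exploits.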
Model responses were calculated employing the algorithm of Pek and Verner (1997). The proposed models of geological settings are selected so that their complexity increases gradually. Dimensionality analysis results for the synthetic model responses and the real data are depicted in Figures (2, 3 and 4) and (6 and 7), respectively. The results indicate that the proposed criteria are not sufficiently robust, in the sense that they could not identify electrical anisotropy in the presence of galvanic distortions caused by superficial conductive structures and the complexities of regional structures.
https://jesphys.ut.ac.ir/article_52816_59c0df56d14f12ad01661901f1dd9c19.pdfInstitute of Geophysics, University of TehranJournal of the Earth and Space Physics2538-371X41220150723Application of Upward Continuation Filter in interpretation of Magnetic Data with determination of Optimum Height Continuation, Mansoorabad Area of Yazd, IranApplication of Upward Continuation Filter in interpretation of Magnetic Data with determination of Optimum Height Continuation, Mansoorabad Area of Yazd, Iran2292385370210.22059/jesphys.2015.53702FAMohammadrezaAzadM.Sc. in Exploration Engineering, Shahrood University of Technology, Shahrood, IranJournal Article20140622Because iron ore bodies contain ferromagnetic minerals and therefore show high magnetic intensity, the magnetic method is the geophysical method most commonly suggested for exploring these resources in mining exploration. Several methods are used to process and interpret magnetic anomaly maps, most of which are based on trial and error. One of the usual geophysical data filtering methods is upward continuation, which is applied in this study. Upward continuation can be used to separate the regional and local magnetic anomalies from the observed magnetic field. One of the problems encountered with this filter is determining the optimum height of upward continuation.
We use a practical method to derive an optimum height for upward continuation. In this study, magnetic data of the northern anomaly of the Mansoorabad region of Yazd were investigated. Based on past studies, a region of 450×700 square meters was adopted for the magnetic survey. Magnetic data were acquired on an exploration grid with a 10-meter spacing for both profiles and measuring stations. Outcrops of iron in the form of hematite in oolitic limestone are evident. To separate the regional anomaly, the usual upward continuation was first applied; with this method, the map at a height of 35 meters was selected as the regional anomaly. Then, for more accurate processing and interpretation and to determine the height of the upward continuation filter, a practical method based on the cross-correlation of the fields at two successive heights was used. With this method, a suitable upward continuation height for determining the regional magnetic anomaly can be obtained without comparing several maps at various heights and without subjective interference; by subtracting the regional values from the observed total anomaly, a residual anomaly map is estimated that is better evidence of the local anomaly in the region. The cross-correlation for upward continuations at heights from 30 to 40 meters, in steps of 2 meters, was calculated, and a height of 39 meters was selected as the optimum height for the investigated data. The upward continuation map calculated at this height for the magnetic data of the Mansoorabad region showed the best fit with the regional anomaly of the data, and the residual map was obtained accordingly. Geologically, the iron mineralization formed in this region is of the sedimentary oolitic iron type, created during the Paleozoic.
Considering the location of the determined anomaly and the geological map of the region, it is observed that this iron mineralization was created in limestones with an oolitic pattern. After investigating and matching the magnetic anomaly in the study area, it was determined that iron bodies are the cause of the anomaly. The depth extents of these anomalies vary and continue to a depth of 80 meters.
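The two ingredients of the height-selection procedure described above, FFT-based upward continuation and the cross-correlation of fields continued to two successive heights, can be sketched as follows. This is a hedged illustration on synthetic random data; the grid spacing and heights are arbitrary, not the Mansoorabad survey values:

```python
# Sketch (illustrative, not the paper's code): upward continuation in the
# wavenumber domain and the normalized cross-correlation of two grids.
import numpy as np

def upward_continue(field, dx, dy, h):
    """Continue a gridded field upward by height h (same units as dx, dy)."""
    ny, nx = field.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dy)
    k = np.sqrt(kx[None, :] ** 2 + ky[:, None] ** 2)  # radial wavenumber
    return np.real(np.fft.ifft2(np.fft.fft2(field) * np.exp(-h * k)))

def cross_correlation(f1, f2):
    """Normalized cross-correlation of two demeaned grids."""
    a, b = f1 - f1.mean(), f2 - f2.mean()
    return (a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum())

# usage: the correlation between successive heights approaches 1 as the
# continued field converges toward the smooth regional component
rng = np.random.default_rng(0)
field = rng.normal(size=(64, 64))
r_low = cross_correlation(upward_continue(field, 10, 10, 10),
                          upward_continue(field, 10, 10, 12))
r_high = cross_correlation(upward_continue(field, 10, 10, 38),
                           upward_continue(field, 10, 10, 40))
```

The optimum-height criterion in the text picks the height at which this correlation curve flattens, so that further continuation no longer changes the regional field appreciably.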
https://jesphys.ut.ac.ir/article_53702_62d1e0183d18dda37bdf17b4c77d6639.pdfInstitute of Geophysics, University of TehranJournal of the Earth and Space Physics2538-371X412201507232D interpretation of the magnetotelluric data on Mighan plain of ARAK2D interpretation of the magnetotelluric data on Mighan plain of ARAK2392485281510.22059/jesphys.2015.52815FABehrozOskooiAssociate Professor, Earth Physics Department, Institute of Geophysics, University of Tehran, Iran0000-0003-3065-194XHosseinParnianM.Sc. in Geophysics, Earth Physics Department, Institute of Geophysics, University of TehranMahmoodMirzaeiAssociate Professor, Physics Department of Geophysics, Faculty of Science, University of Arak , IranBehnamMohammadiM.Sc. in Geophysics, Earth Physics Department, Institute of Geophysics, University of TehranJournal Article20141021The magnetotelluric (MT) method is a passive electromagnetic technique that uses the natural, time-varying electric and magnetic field components, measured at right angles at the surface of the Earth, to make inferences about the Earth’s electrical structure, which in turn can be related to the geology, tectonics and subsurface conditions. Reflection and refraction of electromagnetic (EM) signals occur at both horizontal and vertical interfaces separating media of different electrical parameters. Electromagnetic methods have been developed and employed to recognize geological features, and particularly fault zones, in many regions. Offering both high lateral resolution and great depth of penetration, the MT method is one of the most effective electromagnetic techniques for imaging subsurface structures electrically.
In 2011, wide-frequency-range magnetotelluric measurements were carried out on the Mighan plain in the southern part of Markazi province, Iran, to understand the crustal electrical conductivity of the region, with emphasis on locating the geological structures and recognizing the bedrock and a probable fault. The electric and magnetic field components were acquired along a profile at 6 stations, with a 1000-meter distance between stations, using GMS05 (Metronix, Germany) systems. Three magnetometers and two pairs of non-polarizable electrodes were connected to this five-channel data logger. The experimental setup included four electrodes distributed at a distance of 100 m in the north-south (Ex) and east-west (Ey) directions.
Measurements of the horizontal components of the natural electromagnetic field were used to construct the full complex impedance tensor, Z, as a function of frequency. Using the effective impedance, determinant apparent resistivities and phases were computed and used for the inversion. The MT data were processed using a code from Smirnov (2003), aiming at robust single-site estimates of the electromagnetic transfer functions. As the study area is populated and close to noise sources, the recorded data are of poor quality, which explains the low coherency between the electric and magnetic channels. We performed 1D inversion of the determinant data for all sites using a code from Pedersen (2004). 2D modeling was applied to explain the data, provided the model responses fitted the measured data within their errors; generally, the better the fit between measured and predicted data, the more reliable the model. The 2D inversions of the TE- and DET-mode data were performed using a code from Siripunvaraporn and Egbert (2000). The data were expressed as apparent resistivities and phases, which exhibited fairly different characteristics in the TE and DET modes. We used the model obtained from the TE-mode data as the interpretation model. The resistivity model obtained from the TE mode is consistent with the geological model of the Mighan region down to five kilometers.
The 2D models clearly show two conductive blocks and a fault structure, and resolve layers with sharp resistivity contrasts. Combined with geological information on the presence of the Tabarteh fault, the conductive features can be attributed to the fault; besides, a probable hidden fault is also recognizable. The bedrock was also detected by the two-dimensional model as a high-resistivity feature.
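The quantities this abstract inverts, apparent resistivities and phases from the impedance tensor and its determinant ("effective") form, follow standard MT relations. A minimal sketch, using the textbook formulas rather than any of the cited codes:

```python
# Sketch: apparent resistivity and phase from an impedance element, and the
# determinant (effective) impedance used for DET-mode inversion.
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (H/m)

def apparent_resistivity_phase(Z_elem, freq):
    """rho_a = |Z|^2 / (mu0 * omega); phase in degrees."""
    omega = 2 * np.pi * freq
    rho_a = np.abs(Z_elem) ** 2 / (MU0 * omega)
    phase = np.degrees(np.angle(Z_elem))
    return rho_a, phase

def determinant_impedance(Z):
    """Effective impedance: sqrt(det Z) of the 2x2 impedance tensor."""
    return np.sqrt(Z[0, 0] * Z[1, 1] - Z[0, 1] * Z[1, 0])

# usage: a uniform half-space of resistivity rho gives |Z| = sqrt(mu0*omega*rho)
# with a 45-degree phase, so the apparent resistivity recovers rho
freq = 1.0
rho_true = 100.0
Zxy = np.sqrt(MU0 * 2 * np.pi * freq * rho_true) * np.exp(1j * np.pi / 4)
rho_a, phi = apparent_resistivity_phase(Zxy, freq)
Zdet = determinant_impedance(np.array([[0.0, Zxy], [-Zxy, 0.0]]))
```

For the idealized 1D tensor in the usage example, the determinant impedance reduces to the off-diagonal element, which is why the DET mode is a convenient rotationally invariant quantity for inversion.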
https://jesphys.ut.ac.ir/article_52815_bc3b24720c8e76fe88b6d01ff7d510d2.pdfInstitute of Geophysics, University of TehranJournal of the Earth and Space Physics2538-371X41220150723Moho depth and lithospheric thickness of the Arabian and Eurasian collision zone from potential field dataMoho depth and lithospheric thickness of the Arabian and Eurasian collision zone from potential field data2492565280710.22059/jesphys.2015.52807FASeyed HaniMotavalli AnbaranAssistant Professor, Department of Earth Physics, Institute of Geophysics, University of Tehran, IranVahidEbrahimzade ArdestaniProfessor, Department of Earth Physics, Institute of Geophysics, University of Tehran, IranHermanZeyenProfessor, Faculty of Earth Sciences, Paris University, FranceJournal Article20140723The target area of this research includes East Anatolia, the NW Zagros and the Caucasus. These structures form a complex, active area in the early stage of continent-continent collision, which gives us the unique possibility of monitoring such a collision in real time. It is therefore very important to study this active area to gain better knowledge of its tectonic behavior and lithospheric structure. The key parameters we are looking for in this research are the Moho depth, the lithosphere-asthenosphere boundary (LAB) and the average crustal density.
Several methods can give us information about lithospheric structure, such as seismological methods, (controlled-source) seismics, magnetotellurics, volcanology, etc. The method used here is a direct, linearized, iterative inversion procedure that determines lateral variations in crustal thickness, average crustal density and lithospheric thickness from potential field data. The area of interest is subdivided into rectangular columns of constant size in the E-W (X) and N-S (Y) directions. In depth (Z), each column is subdivided into four layers: seawater if present (with known thickness, i.e. bathymetry, and a density of 1030 kg/m<sup>3</sup>), crust, lithospheric mantle and asthenosphere. In our approach, the LAB is defined as an isotherm, and we calculate the temperature distribution in the lithosphere. During the inversion, a cost function defined as C = E<sub>d</sub> + λE<sub>p</sub> + μE<sub>s</sub> has to be minimized. The factor λ controls the overall importance of parameter variability (E<sub>p</sub>) with respect to data adjustment (E<sub>d</sub>), whereas μ controls the importance of smoothing and can be different for each parameter set.
The method uses potential field data (free-air gravity, geoid and topography), which are globally available from satellite measurements and freely accessible on the internet. The potential field data are sensitive to the lateral density variations that occur across these two boundaries, but at different depths. The free-air gravity data are a 2.5×2.5 arc-minute grid taken from the database of the Bureau Gravimétrique International (BGI). The geoid height variations correspond to the EGM2008 model. To avoid the effects of sublithospheric density variations on the geoid, we removed the long-wavelength geoid signature corresponding to spherical harmonics up to degree and order 10, tapered between degrees 8 and 12. Topography data are taken from the 1-minute TOPEX global data sets. All data were interpolated onto a regular 10×10 km grid.
Inverting potential field and topography data suffers from non-uniqueness, since these data are not sensitive to vertical density variations, which may produce instabilities in the solution. Stabilization of the inversion is obtained through parameter damping and smoothing as well as the use of a priori information such as crustal thicknesses from seismic profiles.
The 3D results show an important crustal root under the Caucasus, a relatively deep Moho for eastern Anatolia and the NW Zagros, and a thin crust under the southern part of the Black Sea that thickens northward. Regarding the LAB, the 3D results show thin lithosphere under East Anatolia, the NW Zagros and the western part of the Caucasus. The LAB deepens northward towards Eurasia and in the western part of Anatolia.
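The stabilization strategy sketched in the abstract, minimizing a data misfit plus parameter damping toward a prior plus a smoothing penalty (the cost function C = E<sub>d</sub> + λE<sub>p</sub> + μE<sub>s</sub> described above), can be illustrated with a toy damped least-squares problem. The operator G, data d and prior below are hypothetical stand-ins, not the paper's lithospheric model:

```python
# Toy sketch of damped, smoothed least squares: minimize
# ||d - G m||^2 + lam*||m - m0||^2 + mu*||L m||^2 via the normal equations.
import numpy as np

def damped_smoothed_lsq(G, d, m0, lam, mu):
    n = G.shape[1]
    L = np.diff(np.eye(n), axis=0)          # first-difference smoothing operator
    A = G.T @ G + lam * np.eye(n) + mu * L.T @ L
    b = G.T @ d + lam * m0
    return np.linalg.solve(A, b)

# usage: with noise-free data and tiny damping/smoothing weights, the
# solution approaches the ordinary least-squares model
rng = np.random.default_rng(1)
G = rng.normal(size=(30, 5))
m_true = np.arange(1.0, 6.0)
d = G @ m_true
m_est = damped_smoothed_lsq(G, d, np.zeros(5), 1e-8, 1e-8)
```

Raising the damping weight pulls the model toward the prior, and raising the smoothing weight suppresses short-wavelength oscillations, which is the trade-off the text invokes to stabilize the non-unique potential field inversion.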
https://jesphys.ut.ac.ir/article_52807_fc5310ae7f8e2999ee5a4bc642fc1d3f.pdfInstitute of Geophysics, University of TehranJournal of the Earth and Space Physics2538-371X41220150723Using new approach ‘ExtR method’ to retrack satellite radar altimetry; case study: Persian GulfUsing new approach ‘ExtR method’ to retrack satellite radar altimetry; case study: Persian Gulf2572715323710.22059/jesphys.2015.53237FAMehdiKhakiM.Sc. Student, Department of Surveying and Geomatics Engineering, University College of Engineering, University of Tehran, IranEhsanForootanAstronomical, Physical and Mathematical Geodesy (APMG) Group, Institute for Geodesy and Geoninformation (IGG), University of Bonn, GermanyMohammad AliSharifiAssistant Professor, Department of Surveying and Geomatics Engineering, University of College Engineering, University of Tehran, Iran0000-0003-0745-4147AbdolrezaSafariAssociate Professor, Department of Surveying and Geomatics Engineering, University of College Engineering, University of Tehran, IranJournal Article20140421Monitoring of water levels within the seas and oceans has been enhanced by satellite radar altimetry missions, which offer vast coverage and better spatial resolution than traditional in-situ tide gauge measurements. Satellite radar altimetry missions such as TOPEX/POSEIDON, JASON and ENVISAT, originally designed to measure global ocean surface height, have nowadays also demonstrated great potential for studies of inland water bodies. Altimetry determines the sea surface height using space, electronic and microwave technology, and basically works by sending and receiving an electromagnetic pulse. The waveform is the curve that shows the power of this pulse reflected back to the altimeter.
The altimeter on board the satellite measures the range by sending and receiving a short pulse and calculating its travel time; the altimeter range is the most important output of this procedure. Due to the topography and heterogeneity of the reflecting surface and to atmospheric propagation, the waveform returned over land differs from the expected waveform over ocean surfaces, and consequently the range is not accurate. As a result, sea surface heights derived from altimetry over ice sheets and inland water bodies (particularly close to coastlines) contain more errors than those from the open water body, and include missing data. We have developed a water-detection algorithm based on statistical analysis of the decadal TOPEX/POSEIDON and JASON-1 height measurement time series and of their ground-pass sea surface heights in the Persian Gulf. The Persian Gulf is certainly one of the most vital bodies of water on the planet, as gas and oil from Middle Eastern countries flow through it, supplying much of the world's energy needs. The algorithm contains a noise elimination process, including outlier detection and elimination of unwanted waveforms, an unsupervised classification of the satellite waveforms and, finally, a retracking procedure. The unsupervised classification algorithm groups the waveforms into consistent classes, for which appropriate retracking algorithms are then performed; the waveforms belonging to the same group mostly correspond to surfaces with common properties. Waveform retracking is mainly used to calculate the offset between the actual middle point of the waveform leading edge and the designed tracking gate, from which the retracked range correction can be computed. Four different methods are implemented for retracking the waveforms.
These four comprise three previously introduced algorithms, namely offset centre of gravity, threshold retracking, and optimized iterative least-squares fitting, each after some improvements, together with a new method based on edge detection and extremum-point extraction, which we call the 'ExtR retracking method'. Finally, two different approaches are used to validate the results: first, the SSH time series before and after retracking are compared with in-situ data; second, the ground-pass track data of the two satellites are retracked and compared with geoid data.
https://jesphys.ut.ac.ir/article_53237_3d5ab2d5d56c01bac6460d22a5511538.pdf

Evaluating the operating forces in formation and development of the Gonu tropical cyclone using Kiue analytical model and numerical models
Pages 273-280, DOI 10.22059/jesphys.2015.52886 (FA)
Majid Mazrae Farahni, Assistant Professor, Department of Space Physics, Institute of Geophysics, University of Tehran, Iran
Marziyeh Ahmadi, Ph.D. Student, School of Marine Science and Technology, Hormozgan University, Iran
Mohammad Ali Saghafi, M.Sc. Expert, Meteorology Section, Institute of Geophysics, University of Tehran, Iran
Journal Article, received 2014-04-28

The Gonu storm formed in the tropical basin of the Indian Ocean in early June 2007. Gonu is the strongest tropical cyclone recorded in the Arabian Sea and North Indian Ocean; it moved toward the Oman Sea and Persian Gulf, striking the surrounding lands. This tropical cyclone caused extensive damage and human and financial casualties in Oman, Iran, Afghanistan, Pakistan, and India.
In this study, an analytical model (Kiue et al., 2010) is applied to tropical cyclone Gonu to examine the predicted relation between surface pressure and tangential wind velocity. The model consists of the momentum equation in polar coordinates under the primary hypothesis that the wind blows in a Rankine vortex regime. The pattern of surface wind speed in a tropical cyclone follows the Rankine function: in the inner core the tangential wind velocity increases linearly with radius, while in the external region the wind speed decreases as the radius increases. To examine the Kiue model, the pressure drop it predicts is first calculated for tropical cyclone Gonu; the resulting minimum sea-level pressure is then compared with the equivalent pressure reported by the Joint Typhoon Warning Center (JTWC). The results show that the model is capable of predicting the magnitude of the pressure fall of tropical cyclone Gonu. To investigate the role of different forces in the formation and development of this tropical cyclone, the equation proposed by Kiue was then applied: the centrifugal, Coriolis and frictional forces were calculated, along with the contribution of each to the pressure fall in the cyclone eye. The centrifugal force is revealed to be dominant.
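The Rankine vortex wind profile described above can be sketched as follows; the v_max and r_max values are illustrative placeholders, not Gonu's fitted parameters:

```python
import numpy as np

def rankine_tangential_wind(r, v_max=68.0, r_max=25.0):
    """Rankine vortex profile: solid-body rotation (linear increase) inside
    the radius of maximum wind, 1/r decay outside. v_max [m/s] and
    r_max [km] are assumed illustrative values."""
    r = np.asarray(r, dtype=float)
    return np.where(r <= r_max, v_max * r / r_max, v_max * r_max / r)

radii = np.array([5.0, 25.0, 100.0])   # km from the cyclone centre
print(rankine_tangential_wind(radii))  # rises linearly, peaks at r_max, then decays
```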
The data used to examine the Kiue analytical model are the Gonu cyclone best track from the JTWC database. The JTWC is the joint committee of the US Air Force and Navy that warns of hurricane formation and development; it measures storm intensity via the Dvorak (1974) method. This study does not describe how that method works, but the Dvorak method is based on satellite imagery and is operational in most storm warning centers.
In this study, we also run the ARPS and WRF models to verify whether their calculated surface wind speeds for cyclone Gonu are comparable to the JTWC database, i.e. whether they can estimate the intensity of this tropical cyclone. The results show that the surface wind speed output of the WRF model does not capture the formation of Gonu; its maximum surface wind (VMAX) indicates that Gonu would not have developed into a tropical cyclone at all. According to the ARPS model, the maximum surface wind (VMAX) is about 82 m/s, a discrepancy of about 9.5 m/s from the JTWC VMAX for Gonu. Nevertheless, this speed shows that cyclone Gonu reached category-five hurricane strength and developed into a tropical cyclone. The ARPS model is therefore more successful.
https://jesphys.ut.ac.ir/article_52886_83d4e8ba733cd0dbdb6a5c00cfe8ed00.pdf

Modeling instantaneous water level changes of Caspian Sea using satellite altimetry observations
Pages 281-299, DOI 10.22059/jesphys.2015.53331 (FA)
Abdolreza Safari, Associate Professor, Department of Surveying and Geomatics Engineering, University College of Engineering, University of Tehran, Iran
Simin Kalantarioun, M.Sc. Student of Geodesy (Hydrography), Department of Surveying and Geomatics Engineering, University College of Engineering, University of Tehran, Iran
Hadi Amin, M.Sc., Department of Surveying and Geomatics Engineering, University College of Engineering, University of Tehran, Iran
Journal Article, received 2014-05-17

Sea level changes are of particular significance because of their influence on industries such as fishery, shipping, marine transport, harbor design, and power construction in marine and coastal regions. Given the possible irreparable environmental and economic damages, studying sea level changes over time is necessary.
Several methods, including experimental approaches and numerical models, have been developed to predict sea-wave behavior. Time series analysis is one approach to detecting sea level variations and to short-term and long-term prediction; proposed methods include the ARMA, MA and AR time series models.
With the advent of satellite altimetry in 1973, it became possible to monitor the sea level with high accuracy (Anzenhofer et al., 1999). Altimetry satellites collect height information from different points on the Earth along predetermined orbits; their main mission is to measure the sea level at different times and locations.
In this study, data from the altimetry satellite Jason-2 from 2008 to 2012 is analyzed using different time series processes.
The time series for the instantaneous water level is influenced by seasonal, periodical and stochastic variations due to environmental factors.
Fourier analysis is a convenient and efficient mathematical tool for modeling the behavior of a periodic phenomenon.
Having the water level and the measurement time, a time series of the variations is derived. The water level model is considered as a linear combination of trigonometric functions:

y(t) = a₀ + Σₖ [ aₖ cos(2πfₖt) + bₖ sin(2πfₖt) ]

The above equation is the Fourier series for the sequence, in which aₖ and bₖ are the Fourier coefficients. Once aₖ and bₖ are determined for each given frequency fₖ, the numerical value of the periodic phenomenon can be computed at each epoch.
Frequency estimation is one of the most important steps in the modeling. Fourier spectral analysis was used to find the frequencies of the time series, and a least-squares method relying on the concept of time-series stationarity was used to refine them. The results show that after removing the main frequencies in two steps, those with periods greater than 19 days and then those greater than 4 hours, the data were completely stationary and ready for time-series modeling.
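The coefficient-estimation step can be sketched as a least-squares fit of the Fourier model at candidate frequencies (a toy example with an assumed annual cycle, not the Caspian data):

```python
import numpy as np

def fit_fourier(t, y, freqs):
    """Least-squares estimate of the Fourier coefficients a_k, b_k for
    known frequencies: build a design matrix of cosine/sine columns and
    solve the normal equations via lstsq."""
    cols = [np.ones_like(t)]
    for f in freqs:
        cols.append(np.cos(2 * np.pi * f * t))
        cols.append(np.sin(2 * np.pi * f * t))
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef   # [a0, a1, b1, a2, b2, ...]

# Synthetic "water level": constant offset plus an annual cycle, daily sampling
t = np.arange(0, 4 * 365.25)                       # days over four years
y = 27.0 + 0.2 * np.cos(2 * np.pi * t / 365.25) + 0.1 * np.sin(2 * np.pi * t / 365.25)
coef = fit_fourier(t, y, freqs=[1 / 365.25])
print(np.round(coef, 3))   # recovers a0 = 27.0, a1 = 0.2, b1 = 0.1
```

In practice the candidate frequencies would come from the spectral-analysis step, and the residuals after removing the fitted harmonics are what get passed to the time-series models.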
The main purpose of this study is to choose the best model for the prediction of the sea level in the region under study based on the prediction error criteria. The trend for water level variations from 2008 to 2012, the improvement in relative accuracy of estimation, and the water level prediction in the long-term interval are also investigated.
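As a sketch of the autoregressive models compared in the study, an AR(p) model can be fitted by ordinary least squares and iterated forward for prediction (toy data, not the Jason-2 series):

```python
import numpy as np

def fit_ar(x, p):
    """Fit an AR(p) model x_t = c + sum(phi_i * x_{t-i}) + e_t by
    ordinary least squares over the lagged design matrix."""
    X = np.column_stack([np.ones(len(x) - p)] +
                        [x[p - i:len(x) - i] for i in range(1, p + 1)])
    beta, *_ = np.linalg.lstsq(X, x[p:], rcond=None)
    return beta                     # [c, phi_1, ..., phi_p]

def forecast_ar(x, beta, steps):
    """Iterated one-step-ahead forecasts from the fitted coefficients."""
    hist, p = list(x), len(beta) - 1
    for _ in range(steps):
        hist.append(beta[0] + sum(beta[i] * hist[-i] for i in range(1, p + 1)))
    return np.array(hist[len(x):])

# AR(2) toy series with known coefficients (0.5, 0.6, 0.3) and small noise
rng = np.random.default_rng(0)
x = np.zeros(2000)
for t in range(2, 2000):
    x[t] = 0.5 + 0.6 * x[t - 1] + 0.3 * x[t - 2] + 0.05 * rng.standard_normal()
beta = fit_ar(x, p=2)
print(np.round(beta, 2))            # close to the generating coefficients
```

Model-order selection among AR(p), MA(q) and ARMA(p,q) then proceeds by comparing the prediction-error criteria named below over held-out data.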
To investigate the performance of the different models in estimating the time-series values over the long-term interval, the absolute mean error, root mean square error, Akaike information criterion (AIC), Bayesian information criterion (BIC) and Schwarz Bayesian criterion (SBC) are used. The results show that the AR(6) time series model is more efficient than the MA(q) and ARMA(p,q) models, predicting the variations with lower errors. The statistical analysis of the instantaneous water level shows an absolute mean error of 3.8 mm and a root mean square error of 1.43 cm/day.
https://jesphys.ut.ac.ir/article_53331_b0bbe6bfc0e7dc0733dd03884da1fe5f.pdf

Investigation and comparison of the mantle convection cells for different assumptions of the heating sources of the inner Earth
Pages 301-312, DOI 10.22059/jesphys.2015.52858 (FA)
Reza Zeynoddini Meymand, Associate Professor, Shahid Bahonar University of Kerman, Iran
Hossein Jalal Kamali, Assistant Professor, Faculty of Physics, Shahid Bahonar University of Kerman, Iran
Journal Article, received 2014-06-08

Earth is the third planet of the solar system, with an approximate radius of 6300 km, and a heat flux of 87 mWm<sup>-2</sup> is transferred from its surface to the surrounding atmosphere. This heat is produced by different sources inside the Earth: an estimated 80% of the present surface heat flow can be attributed to the decay of radioactive isotopes in the mantle and crust, while about 20% comes from the cooling of the Earth. The heat inside the mantle is carried by convection, the most important mechanism of heat transfer to the lower levels of the crust. The form of the convection cells, the flow speed and the overall mantle temperature depend on the thermal sources inside the Earth.
In the present study, the properties of the convection cells (assuming constant viscosity and specific heat capacity) are examined in three models: (A) heating from the lowest layer of the mantle, in contact with the core (according to some researchers, rarely possible on its own); (B) distribution of thermal sources throughout the mantle; (C) distribution of heating sources within a zone extending 150 km above the lower mantle. The simulation results show that the more extensive the distribution of thermal sources in the mantle, the wider the convection cells: the largest number of cells is produced by core heating and the smallest by thermal sources distributed throughout the mantle. For basal heating and for heating distributed within 150 km of the lower mantle, the ascending velocity of the convection cells is nearly equal to their descending velocity, whereas for heating distributed throughout the mantle the region of upwelling is broader than that of downwelling; by mass conservation and steady fluid motion, the upwelling speed is then lower than the downwelling speed. If all thermal sources are concentrated within 150 km of the lower mantle, the temperature in that zone is higher than in the other two models. When the thermal sources are dispersed throughout the mantle, the highest temperature occurs in the middle of the mantle, since the velocity is very low in the middle of the convection cells; because the mantle has a low conduction coefficient and a high heat capacity, heat is transferred mainly by advection, so the heat produced there by radioactive decay is not carried away and the temperature rises in those zones.
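The vigour of convection in such models is commonly summarized by a Rayleigh number, with different definitions for basal and internal heating. The sketch below uses typical textbook parameter values, which are assumptions and not the values of this study:

```python
# Rayleigh numbers for bottom-heated versus internally heated convection.
# All parameter values are typical textbook figures, not the study's inputs.
rho   = 3.3e3    # mantle density [kg/m^3]
g     = 9.8      # gravitational acceleration [m/s^2]
alpha = 3e-5     # thermal expansivity [1/K]
kappa = 1e-6     # thermal diffusivity [m^2/s]
eta   = 1e21     # dynamic viscosity [Pa s]
d     = 2.9e6    # mantle depth [m]
dT    = 3.0e3    # basal temperature contrast [K] (model A)
k     = 4.0      # thermal conductivity [W/m/K]
H     = 6e-12    # internal heating rate per unit mass [W/kg] (models B, C)

Ra_basal    = rho * g * alpha * dT * d**3 / (kappa * eta)          # bottom heating
Ra_internal = rho**2 * g * alpha * H * d**5 / (k * kappa * eta)    # internal heating
print(f"{Ra_basal:.2e}  {Ra_internal:.2e}")
```

Both numbers come out far above the critical value of order 10^3, consistent with vigorous convection in all three heating models.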
Cold fluid close to the crust sinks in the downwellings, and the temperature at depth is thereby reduced. Moreover, the simulations indicate that the convective speed of the second model is higher than in the other two. The dimensions and number of the convection cells in these models, compared with observed measures at the Earth's surface (plate sizes), show that the two internal-heating models correspond better with the observations. In conclusion, the features of the convection cells and the mantle temperature depend strongly on the distribution of thermal sources inside the Earth; identifying the properties of the convection cells may therefore contribute to explaining surface activity, some aspects of which are addressed in the present research.
https://jesphys.ut.ac.ir/article_52858_c5e943186c20cc21c43e1acbad2b4112.pdf

Assessment of meteorological drought in Iran using standardized precipitation and evapotranspiration index (SPEI)
Pages 313-321, DOI 10.22059/jesphys.2015.52888 (FA)
Sahar Tajbakhsh, Islamic Republic of Iran Meteorological Organization (IRIMO)
Nasrin Eisakhani, M.Sc., Expert of National Drought Center, I.R. of Iran Meteorological Organization (IRIMO), Tehran, Iran
Amin Fazl Kazemi, M.Sc. Student, Expert of National Drought Center, I.R. of Iran Meteorological Organization (IRIMO), Tehran, Iran
Journal Article, received 2014-07-01

Drought is one of the main natural causes of damage to agriculture, the economy, and the environment. Drought usually occurs after a long period without precipitation, and determining its start time, end, and extent is very difficult, as is quantifying its severity, magnitude, and duration. Several factors such as rainfall, temperature, evaporation and relative humidity affect the incidence, severity, and duration of droughts. The basic characteristics of drought with respect to available water resources, including groundwater, surface water, snowpack, and water supply, have been discussed by many scientists. In addition, some studies have systematically examined the importance of temperature in determining drought conditions.
Assessments of precipitation and temperature in the Palmer index show that the index varies with both parameters, and that only small temperature fluctuations can be offset by precipitation. Drought indices that include temperature data in their formulas (like the Palmer index) are therefore appropriate, especially for climate-forecast applications. However, the need for several quantitative drought indices across different hydrological systems has not been addressed; only different values of the Palmer index have been used for the various drought types. Drought has therefore been formulated in terms of three variables, precipitation, temperature, and potential evapotranspiration (PET), in a new index called the Standardized Precipitation-Evapotranspiration Index (SPEI). SPEI merges the sensitivity of the Palmer index to evapotranspiration (driven by temperature fluctuation) with the multi-scale nature of the Standardized Precipitation Index (SPI), using a simple computation. The index was first introduced in 2009 by Vicente-Serrano et al. The present study investigates the use of SPEI for drought evaluation in Iran.
Total precipitation and average temperature data are considered for 104 synoptic stations across Iran, obtained from the Islamic Republic of Iran Meteorological Organization (IRIMO). The statistical periods are between 25 and 30 years (25 stations with a 25-year period; the remaining stations with a 30-year period). Interpolation and visualization of the meteorological parameters and indices were performed in the ArcMap 9.3 GIS software. To calculate SPEI, total precipitation is first determined for the considered period (month, quarter, etc.) and year at each station. Potential evapotranspiration is then calculated for each station using the Thornthwaite method and deducted from the total precipitation for each period and year. After computing the skewness, the mean and standard deviation of the data set are determined. Taking 'n' as the number of data values and 'm' as the rank, the empirical probability of each amount is calculated. Using this probability and the inverse gamma function, the corresponding value is determined; next, using the probability and the inverse normal function with the computed mean and standard deviation, the normalized value is obtained. The reported station values, the gamma-transformed values and the normalized values are then available for each station, so the probability density function and its cumulative distribution can be formed, and SPEI is determined after normalization. On this basis, the anomaly patterns of temperature, precipitation, and evapotranspiration can be drawn and analyzed for the long-term average and seasonal SPEI.
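The standardization step can be sketched as follows. This simplified version maps empirical plotting-position probabilities of the climatic water balance (P - PET) through the inverse normal CDF; the full SPEI of Vicente-Serrano et al. fits a parametric (log-logistic) distribution before standardizing:

```python
from statistics import NormalDist

def standardize(values):
    """Standardize a P - PET series into z-scores (a simplified sketch of
    the SPEI normalization): rank the values, assign each a Gringorten
    plotting-position probability, and map it through the inverse normal CDF."""
    n = len(values)
    order = sorted(range(n), key=lambda i: values[i])
    z = [0.0] * n
    for m, i in enumerate(order, start=1):
        p = (m - 0.44) / (n + 0.12)          # Gringorten plotting position
        z[i] = NormalDist().inv_cdf(p)
    return z

# Monthly P - PET (mm), hypothetical values for one station
balance = [-40, -10, 5, 30, -25, 15, 60, -55, 0, 20, -5, 45]
spei = standardize(balance)
print([round(v, 2) for v in spei])   # driest months map to the negative tail
```

Negative z-scores indicate drier-than-normal months; conventional SPEI class boundaries (e.g. below -1.5 for severe drought) can then be applied.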
The results show that owing to the considerable decrease in temperature in winter, the effect of evapotranspiration may not be significant in that season. During spring, summer and autumn, evapotranspiration heavily influences the drought signal alongside precipitation in most provinces, especially the southern provinces of Iran (including Hormozgan, Sistan-Baluchestan, Fars and Khuzestan), and drought has intensified (weakened) while the precipitation anomaly has increased (decreased). Given Iran's arid and semi-arid setting, evapotranspiration, especially during the warm season, affects the determination of droughts in most parts of the country; it is therefore better to consider it, in addition to precipitation, when assessing drought.
However, the need for quantitative drought indices across different hydrological systems had not been addressed, and only different values of the Palmer Index were used for the various drought types. Accordingly, a new index, the Standardized Precipitation Evapotranspiration Index (SPEI), formulates drought from three variables: precipitation, temperature and potential evapotranspiration (PET). SPEI combines the sensitivity of the Palmer Index to evapotranspiration (driven by temperature fluctuations) with the simple computation and multi-scalar nature of the Standardized Precipitation Index (SPI). The index was first introduced by Vicente-Serrano et al. in 2009. The present study investigates the use of SPEI for drought evaluation in Iran.
Total precipitation and average temperature data are considered for 104 synoptic stations across Iran, obtained from the Islamic Republic of Iran Meteorological Organization (IRIMO). The statistical periods range from 25 to 30 years (25 stations cover a 25-year period and the remaining stations a 30-year period). Interpolation and visualization of the meteorological parameters and indices were performed with ArcMap 9.3 GIS software. To calculate SPEI, total precipitation is first determined for the chosen period (month, quarter, etc.) and year at each station. Potential evapotranspiration is then computed for each station with the Thornthwaite method and deducted from total precipitation for the corresponding periods and years. After a skewness calculation, the mean and standard deviation of the data set are determined. Taking n as the number of data values and m as the rank, the empirical probability of each amount is calculated. Using this probability and the inverse gamma function, the corresponding gamma-based value is determined; using the same probability and the inverse normal function with the computed mean and standard deviation, the corresponding normalized value is obtained. With the reported station values, the gamma-based values and the normalized values available, the probability density function and its cumulative distribution can be constructed, and SPEI is determined after normalization. Anomaly patterns of temperature, precipitation and evapotranspiration can then be drawn and analyzed for the long-term average and the seasonal SPEI.
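The standardization steps described above can be sketched in Python. This is a minimal illustration, not the authors' code: it uses the simplified Thornthwaite formulation without the day-length correction, and it standardizes the water balance D = P - PET with empirical plotting positions and the inverse normal CDF (a stand-in for the full gamma-based fit described above); all function names are hypothetical.

```python
from statistics import NormalDist

def thornthwaite_pet(monthly_temp_c):
    """Simplified Thornthwaite PET (mm/month) from 12 monthly mean
    temperatures; the day-length correction factor is omitted for brevity."""
    heat_index = sum((max(t, 0.0) / 5.0) ** 1.514 for t in monthly_temp_c)
    if heat_index <= 0:
        return [0.0] * len(monthly_temp_c)
    a = (6.75e-7 * heat_index**3 - 7.71e-5 * heat_index**2
         + 1.792e-2 * heat_index + 0.49239)
    return [16.0 * (10.0 * max(t, 0.0) / heat_index) ** a
            for t in monthly_temp_c]

def spei(precip, pet):
    """Standardize the climatic water balance D = P - PET using the
    Weibull plotting position m/(n+1) and the inverse normal CDF.
    Negative values indicate drought."""
    d = [p - e for p, e in zip(precip, pet)]
    n = len(d)
    order = sorted(range(n), key=lambda i: d[i])  # rank m of each value
    norm = NormalDist()
    z = [0.0] * n
    for rank, i in enumerate(order, start=1):
        prob = rank / (n + 1)       # empirical non-exceedance probability
        z[i] = norm.inv_cdf(prob)   # map probability to a z-score (SPEI)
    return z
```

With a wet anomaly (large P - PET) the SPEI is strongly positive, and with a dry anomaly it is strongly negative, which is the sign convention used in the results below.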
The results show that, owing to the considerable decrease in temperature in winter, the effect of evapotranspiration may not be significant in that season. During spring, summer and autumn, evapotranspiration heavily influences drought conditions in most provinces, especially the southern provinces of Iran (including Hormozgan, Sistan-Baluchestan, Fars and Khuzestan): drought has intensified (weakened) while the precipitation anomaly has increased (decreased). Given Iran's arid and semi-arid setting, evapotranspiration affects the delineation of droughts in most parts of the country, especially during the warm season, so it is better to consider it alongside precipitation when assessing drought.

https://jesphys.ut.ac.ir/article_52888_b30817a0013927e1f067b745a7b7ed88.pdf

Institute of Geophysics, University of Tehran. Journal of the Earth and Space Physics, ISSN 2538-371X, Vol. 41, No. 2, 2015-07-23. Assessment of wind power in Kurdistan province. Pages 323-335, article 52817, DOI: 10.22059/jesphys.2015.52817. Bakhtiar Mohammadi, Assistant Professor, Department of Climatology, Faculty of Natural Resources, University of Kurdistan, Iran. Journal Article, 2014-07-23.

In recent years, the kinetic energy of wind, as a renewable and inexhaustible energy source, has attracted the attention of many countries. This research evaluates the wind power of Kurdistan Province. Direction and wind-speed data from the synoptic stations in Kurdistan (Sanandaj, Saghez, Marivan, Bane, Bijar, Ghorveh and Zarineh Aobato), and from 21 synoptic stations outside the province, were used from the founding of each station up to 2005. The direction and speed data of the 28 stations were converted to zonal and meridional wind components. From these, the zonal and meridional winds were estimated for all days on 2068 cells (approximately 3.7 × 3.7 km² each) covering Kurdistan Province, using the Kriging interpolation method.
Wind power across Kurdistan Province was then presented as maps. The wind-power estimates (for three turbine types with rotor radii of 10, 15 and 25 m) show that turbines with a 10 m rotor radius can extract up to about 170 kW per cell; however, only limited areas of the province (especially Zarineh Aobato, Ghorveh and Bijar) can produce this amount of energy. For turbines with a 15 m rotor radius, roughly the same areas can produce up to about 370 kW per cell, and with a 25 m rotor radius the harvestable power exceeds 1 MW per cell. Although energy production is possible, wind-energy production in some parts of the province (large parts of Sanandaj, Marivan and Bane) may not be economically viable. According to the estimates, Zarineh Aobato and its surroundings are the most appropriate locations for installing wind turbines: this region shows the maximum estimated wind-power generation in the province. After Zarineh Aobato, some parts of Ghorveh and Bijar also have high potential for wind-energy production.
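The conversion from reported direction and speed to zonal and meridional components, and a first-order turbine-power estimate for the rotor radii above, can be sketched as follows. This is an illustration under stated assumptions, not the paper's method: the air density and power coefficient are assumed values, and the function names are hypothetical.

```python
import math

AIR_DENSITY = 1.225  # kg/m^3, sea-level standard atmosphere (an assumption)

def wind_components(speed, direction_deg):
    """Meteorological convention: direction is where the wind blows FROM,
    hence the minus signs. Returns (u, v) = (zonal/eastward,
    meridional/northward) components."""
    rad = math.radians(direction_deg)
    u = -speed * math.sin(rad)
    v = -speed * math.cos(rad)
    return u, v

def turbine_power(speed, rotor_radius, power_coeff=0.4):
    """Extractable power P = Cp * 0.5 * rho * A * v^3 for a rotor of the
    given radius (m); Cp = 0.4 is an assumed, sub-Betz-limit efficiency."""
    swept_area = math.pi * rotor_radius**2   # A = pi * r^2
    return power_coeff * 0.5 * AIR_DENSITY * swept_area * speed**3
```

Because power scales with the square of the rotor radius and the cube of the wind speed, moving from a 10 m to a 25 m rotor multiplies the extractable power by 6.25 at the same site, consistent with the roughly 170 kW to above 1 MW range reported above.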
https://jesphys.ut.ac.ir/article_52817_13f0641acdce10aeeee368728d7966d1.pdf

Institute of Geophysics, University of Tehran. Journal of the Earth and Space Physics, ISSN 2538-371X, Vol. 41, No. 2, 2015-07-23. Variation of Iran's air temperature from Earth surface to lower stratosphere as an index of climate change during 1979-2014. Pages 337-350, article 52834, DOI: 10.22059/jesphys.2015.52834. Mohammad Darand, Assistant Professor, Department of Climatology, Faculty of Natural Resources, University of Kurdistan, Iran. Journal Article, 2014-08-30.

Radiation input from the Sun is the source of energy for the Earth's climate system (Hartmann, 1994). Most of the solar radiation is absorbed at the surface; the rest is absorbed by the atmosphere. The global temperature profile of the atmosphere reflects a balance between the radiative, convective and dynamical heating and cooling of the surface-atmosphere system. Understanding the climate change of recent decades is important for predicting the future climate, and observed modifications of the vertical temperature structure of the atmosphere have been proposed as a primary indicator of climate change (Marshal, 2002). Radiosonde data are the primary source for monitoring changes in upper-air parameters; the second source is satellite-derived data from the Microwave Sounding Unit (MSU); a third source of "observed" upper-air data is reanalysis projects such as NCEP/NCAR and ECMWF.
In this study, we estimate trends in Iran's surface and upper-atmosphere temperature, as an index of climate change, on the basis of ECMWF data, which offer substantially higher vertical resolution than radiosondes and the Microwave Sounding Unit (MSU) and thus allow a more accurate identification of the upper atmosphere and of possible multiple upper-atmosphere levels. In addition, the spatio-temporal resolution of the applied data is higher than that of other data sources. <br />Monthly surface and upper-atmosphere air-temperature data for Iran from January 1979 to April 2014 were extracted from the European Centre for Medium-Range Weather Forecasts (ECMWF) at a spatial resolution of 0.125 degrees; at this resolution, 9965 pixels fall within Iran's political boundary. The variation of the spatial-mean air temperature over Iran from the surface to 10 hPa was analyzed. Air temperatures recorded by radiosondes at 11 upper-air stations over Iran were compared with the ECMWF data to evaluate the accuracy of the applied data. Two non-parametric tests, the Mann-Kendall test and Sen's slope estimator, were used to assess the significance and the slope of the trend, respectively. <br />The results show that ECMWF data are useful for evaluating the variation of surface and upper-atmosphere air temperature. The spatio-temporal resolution of the applied data is very high both horizontally and vertically, implying that the ECMWF data capture the variability of upper-atmosphere temperature reasonably well and are more adequate than Microwave Sounding Unit (MSU) and radiosonde data. The results also show that the trends in surface and upper-atmosphere air temperature are significant at the 95% confidence level: the observed trend near the Earth's surface and in the lower and upper troposphere is positive, while it is negative in the stratosphere.
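The two non-parametric tests named above can be sketched in a few lines of stdlib Python. This is a minimal illustration, assuming no ties in the series (the tie correction to the Mann-Kendall variance is omitted); the function names are hypothetical, not the authors' code.

```python
import itertools
import math
from statistics import NormalDist

def mann_kendall(series, alpha=0.05):
    """Mann-Kendall trend test without tie correction.
    Returns (S statistic, z score, significant at the given alpha)."""
    n = len(series)
    # S = sum of sign(x_j - x_i) over all pairs i < j
    s = sum((xj > xi) - (xj < xi)
            for xi, xj in itertools.combinations(series, 2))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)   # continuity correction
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)  # 1.96 for alpha = 0.05
    return s, z, abs(z) > z_crit

def sens_slope(series):
    """Sen's slope estimator: the median of all pairwise slopes
    (x_j - x_i) / (j - i), robust against outliers."""
    slopes = sorted((series[j] - series[i]) / (j - i)
                    for i, j in itertools.combinations(range(len(series)), 2))
    m = len(slopes)
    return slopes[m // 2] if m % 2 else 0.5 * (slopes[m // 2 - 1] + slopes[m // 2])
```

For a monthly temperature series, a Sen's slope of 0.0054 °C per month would correspond to roughly the 0.65 °C per decade surface trend reported below.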
Although the trend of Iran's mid-troposphere temperature is not statistically significant at the 95% level, a regression line fitted to the standardized air-temperature time series shows that the trend is positive. The trend slope of Iran's surface temperature is 0.65 °C per decade, higher than at the other levels, and the observed warming rate in the lower troposphere exceeds that of the upper troposphere. The spatial distribution of the near-surface trend slope shows the strongest warming between latitudes 34 and 37; in the southern parts of the Alborz and the eastern parts of the Zagros, the surface-temperature trend reaches 1.3 to 1.6 °C per decade. <br />The observed tropospheric warming and stratospheric cooling agree well with the findings of previous studies. The temperature change near the surface and in the lower troposphere is large in the northern half of the country, where the upper-troposphere trend is not significant. The increase of upper-troposphere temperature in the southern parts changes the tropopause height. Ozone depletion in the stratosphere may be contributing to the cooling of that layer, while increased man-made pollutants, greenhouse gases and ozone in the troposphere are contributing to its warming. In the temporal view, a positive temperature anomaly has been observed near the surface and in the lower troposphere since 1998, with the strongest warming in 2010 and 2001. According to the findings of other researchers, tropospheric warming results in the northward displacement of the Hadley cells and subtropical jet streams and in changes to tropical circulation patterns.
https://jesphys.ut.ac.ir/article_52834_bb0ed3de4fc8ae22bdece1a05976a6d5.pdf