Institute of Geophysics, University of Tehran
Journal of the Earth and Space Physics, ISSN 2538-371X, Vol. 40, No. 4, 22 December 2014

Optimization of earthquake locations in northwest of Iran during the period 2006-2013 using azimuth and ray parameters
pp. 1-14, DOI: 10.22059/jesphys.2014.52403
K. Emami, E. Bayramnejad (ORCID 0000-0002-9683-459X), M. R. Gheitanchi
Journal Article, 2013-12-24

An important aim of seismology is to determine the foci of earthquakes. The more precisely this is done, the better the results achieved in various studies, for example, investigation of the crustal structure and seismogenic zones, or determination of the fault planes responsible for earthquakes. The classic location methods are based on Geiger's (1910) equations. In this approach, the relation between the arrival time and the coordinates of the earthquake is linearized using the first term of the Taylor expansion together with a velocity model available for the study area. The optimum solutions of the linearized equations are obtained by minimizing the differences between the observed and calculated data with an iterative least-squares (LS) method.
A considerable error is usually observed in the determination of the earthquake focus because of poor azimuthal coverage as well as reliance on travel-time data alone. To reduce these errors, researchers have used azimuth and ray-path parameters to improve the performance of location methods.
The study area lies within 44-50°E and 36-40°N in northwestern Iran, forming part of the central Alborz-Azerbaijan tectonic region.
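Geiger's linearized scheme described above can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: the station coordinates, half-space velocity and trial epicentre are all hypothetical, and in practice a 1-D layered model such as Bayramnejad (2008) would replace the constant velocity.

```python
import numpy as np

# Hypothetical station coordinates (km) and a constant half-space velocity.
stations = np.array([[0.0, 40.0], [35.0, 5.0], [-30.0, 25.0], [10.0, -35.0]])
v = 6.0  # km/s

def travel_times(src, t0):
    d = np.linalg.norm(stations - src, axis=1)
    return t0 + d / v

# Synthetic "observed" arrivals from a known epicentre (demo only).
true_src, true_t0 = np.array([12.0, 8.0]), 2.0
t_obs = travel_times(true_src, true_t0)

# Geiger's method: iterate linearized least-squares steps from a crude guess.
src, t0 = np.array([0.0, 0.0]), 0.0
for _ in range(10):
    d = np.linalg.norm(stations - src, axis=1)
    r = t_obs - travel_times(src, t0)  # travel-time residuals
    # Jacobian: dt/dt0 = 1, dt/dx = (x - x_i)/(v d_i), and likewise for y
    G = np.column_stack([np.ones(len(stations)),
                         (src - stations) / (v * d[:, None])])
    dm, *_ = np.linalg.lstsq(G, r, rcond=None)
    t0 += dm[0]
    src = src + dm[1:]

print(np.round(src, 2), round(t0, 2))
```

With exact synthetic data the Gauss-Newton iteration recovers the true epicentre and origin time to high precision within a few steps.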
In this research, 11,200 earthquakes of body-wave magnitude 1.4 or greater that occurred in the area between 2006 and 2013 were first located using only P-wave travel times, and the process was then repeated after adding ray-path and P-wave azimuth data. To this end, the earthquakes were classified into two general classes: those of magnitudes 1.4-3.4 and those of magnitudes exceeding 3.4. For the first class, processing used only the data obtained from the stations of the local seismic network of the Institute of Geophysics, University of Tehran. The velocity model used in this part of the work was the local velocity model of Bayramnejad (2008), which is specific to the study area. Because of the large number of earthquakes, each class was subdivided into two subclasses: earthquakes with azimuthal gaps less than 180 degrees and those with azimuthal gaps greater than 180 degrees. The location results are presented as histograms, which indicate that using the ray parameter and the azimuthal parameter considerably reduces the horizontal error in relocating earthquakes compared with using travel-time data alone, especially for earthquakes with azimuthal gaps exceeding 180 degrees. For the second class of earthquakes, we used all the data from stations across the whole seismic network of Iran; evidently, earthquake location is more precise where more data are available. The velocity model used in this part was similar to the one used for the whole region. The improvement in relocation is shown by histograms for the magnitude 3.4-4.6 earthquakes; the results indicate a reduction in the horizontal error when the azimuthal and ray-path parameters are used.
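One way azimuth data enter such a scheme is as an extra observable per station. A common estimate of the back-azimuth uses the P-wave particle motion on the three components; the sketch below is illustrative only, with made-up onset amplitudes and a simplified sign convention.

```python
import numpy as np

# Hypothetical P-onset amplitudes on vertical (z), north (n), east (e).
z, n, e = 0.8, -0.35, 0.61

# P particle motion is radial, so the horizontal polarization lies along the
# source-receiver azimuth; for an upward (compressional) first motion the
# horizontal motion points away from the source, hence the 180-degree flip.
baz = np.degrees(np.arctan2(e, n))
if z > 0:
    baz += 180.0
baz %= 360.0
print(round(baz, 1))
```

Such single-station back-azimuths (with appropriate weights) can then be appended as extra rows of the location inverse problem.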
For the magnitude 4.6-6.2 earthquakes, it is shown, by drawing confidence ellipses, that the use of azimuthal and ray parameters can improve earthquake locations and reduce the dimensions of the confidence ellipse.
https://jesphys.ut.ac.ir/article_52403_63db94279a98a7a43825259310128b74.pdf

Institute of Geophysics, University of Tehran
Journal of the Earth and Space Physics, ISSN 2538-371X, Vol. 40, No. 4, 22 December 2014

Seismic hazard zoning in Iran and estimating peak ground acceleration in provincial capitals
pp. 15-38, DOI: 10.22059/jesphys.2014.52404
H. Mousavi, N. Mirzaei, E. Shabani, M. Eskandari Ghadi
Journal Article, 2014-02-22

Growing environmental and social concerns, on the part of both decision makers and public opinion, have brought a new perspective to hazard assessment as a valid long-term alternative, and an effective short- and medium-term complement, to traditional design procedures for a resistant and safe environment. The gradual development of research on probabilistic seismic hazard assessment (PSHA) over the past 40 years provides a framework for estimating the probability of earthquake occurrence at certain return periods for each site. The primary advantage of PSHA over alternative representations of the earthquake threat is that it integrates over all possible earthquake occurrences and ground motions to calculate a combined probability of exceedance that incorporates the relative frequencies of occurrence of different earthquakes and ground-motion characteristics. PSHA allows the ground-motion hazard to be expressed consistently at multiple sites in terms of earthquake sizes, frequencies of occurrence, attenuation, and the associated ground motion. Potential seismic sources, seismicity models, ground-motion prediction equations (GMPEs) and site effects are the most important factors in seismic hazard studies.
In this research, a modified probabilistic seismic hazard assessment developed by Chinese researchers is used to estimate the level of potential seismic ground motion in Iran. A unified catalog of de-clustered earthquakes containing both historical and recent seismicity until late 2012 in the area bounded by 22-42°N and 42-66°E is used. An area-source model containing 238 potential seismic sources within 5 major seismotectonic provinces in the study region has been delineated. Considering the magnitude uncertainty and the incompleteness of the earthquake data, the seismicity parameters of the seismotectonic provinces are determined. A spatial distribution function is used to determine occurrence rates of the potential seismic sources for different magnitude intervals. The background seismicity has also been determined for each province. Seismic hazard assessment of Iran for a grid of over 40,000 points with 10 km spacing is carried out using the OpenQuake software with three different GMPEs and two seismicity models for the potential seismic sources in a logic tree. The peak ground horizontal acceleration (PGA) and spectral accelerations (SA) for a 5% damping ratio at 0.2 and 2 seconds, corresponding to 10% and 63% probabilities of exceedance within 50 years (475- and 50-year mean return periods, respectively), are calculated. The resulting seismic hazard maps display a probabilistic estimate of PGA and 0.2 and 2 s SA for mean return periods of 50 and 475 years. The resulting peak ground horizontal acceleration for the 475-year return period varies from 0.63g in the northeast of Lorestan to 0.1g in central Iran. The resulting PGAs for the 475-year return period in provincial capitals indicate the maximum value (0.35g) in Bandar Abbas and Tabriz, and the minimum (0.11g) in Esfahan and Yazd.
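The correspondence quoted above between exceedance probabilities and mean return periods follows from the Poisson occurrence model, P = 1 - exp(-T/RP); e.g. a 10% probability in 50 years corresponds to roughly a 475-year return period:

```python
import math

def return_period(p_exceed, t_years):
    """Mean return period for exceedance probability p_exceed within t_years,
    assuming Poissonian earthquake occurrence."""
    return -t_years / math.log(1.0 - p_exceed)

print(round(return_period(0.10, 50)))  # ~475 years
print(round(return_period(0.63, 50)))  # ~50 years
```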
Comparison of the results of this study with the latest seismic hazard map in the Iranian code of practice for seismic-resistant design of buildings (the seismic macrozonation hazard map of Iran, Standard 2800) shows significant differences. The seismic hazard levels estimated in this study for southern Iran (Sistan-Baluchestan, Hormozgan and Fars provinces) are significantly higher.
https://jesphys.ut.ac.ir/article_52404_22f11c7610eea91d284e83cf332e7702.pdf

Institute of Geophysics, University of Tehran
Journal of the Earth and Space Physics, ISSN 2538-371X, Vol. 40, No. 4, 22 December 2014

Spectral decomposition of seismic data using modified deconvolutive spectrogram
pp. 39-52, DOI: 10.22059/jesphys.2014.52405
Y. Mohammadi, H. Siahkoohi (ORCID 0000-0002-9227-2972)
Journal Article, 2013-05-29

Spectral decomposition, or time-frequency representation (TFR), is a powerful tool for analyzing the time-varying nature of seismic data. Mapping a one-dimensional seismic time trace to a two-dimensional function of time and frequency reveals characteristics of seismic signals that are not available in time-only or frequency-only representations. This approach has been widely used in seismic exploration applications such as denoising, direct hydrocarbon detection, seismic sequence analysis, reservoir characterization, and resolution enhancement. Obtaining a method with higher TFR resolution and lower computational cost is of fundamental importance in this field. Among the different methods of spectral decomposition, the short-time Fourier transform (STFT), or its squared modulus (the spectrogram), is the fastest. However, it has limited resolution because of the uncertainty principle. The Wigner-Ville distribution (WVD) is another method with superior TFR resolution, but its practical application is limited by undesirable cross terms arising from its bilinear nature. The spectrogram obtained by the STFT method is related to the WVD via a 2-D convolution, so the WVD can in principle be recovered by 2-D deconvolution.
The deconvolutive short-time Fourier transform spectrogram (DSTFTS) is a method introduced recently to increase the TFR resolution of the spectrogram by applying the Lucy-Richardson algorithm as the 2-D deconvolution operation. However, in practice the resolution of the DSTFTS is not as close to that of the WVD as theoretically expected.
In this paper, we first explain why the resolution of the deconvolutive spectrogram introduced by Lu and Zhang (2009) falls short of that of the WVD, and then introduce a modified version that effectively improves the TFR resolution. In the 2-D deconvolution process, the sampling intervals of the input data must agree in both time and frequency. We show that the sampling interval of the WVD of the window function in the frequency direction is not equal to that of the spectrogram of the signal, and we propose a simple technique to overcome this problem. The proposed modified deconvolutive spectrogram provides better results than those of Lu and Zhang (2009): its TFR resolution is very close to that of the WVD because the deconvolution is implemented correctly. Since the deconvolution is performed with the Lucy-Richardson algorithm, which is not an ideal and perfect process, our TFR is not identical to the WVD.
We evaluated the performance of the modified deconvolutive spectrogram in the spectral decomposition of a chirp signal, a synthetic signal consisting of four Morlet wavelets, and a real seismic trace. In comparison with the popular reassigned spectrogram method (Auger and Flandrin, 1995), the modified deconvolutive spectrogram effectively improves the TFR resolution and has no artifacts or undesirable effects on the TFR of any type of input signal, whereas the reassigned spectrogram eliminates some components and distorts the shape of the TFR of complex input signals. Such issues can make a method completely unusable.
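The 2-D deconvolution at the heart of the DSTFTS can be imitated with plain Lucy-Richardson iterations. The sketch below is a generic illustration on a toy image (two point "components" smeared by a Gaussian, analogous to the way the window function smears the WVD into the spectrogram); it is not the authors' code, and the kernel and grid sizes are arbitrary.

```python
import numpy as np

def conv2_circ(a, k):
    # Circular 2-D convolution via FFT (adequate for this toy example).
    return np.real(np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(np.fft.ifftshift(k))))

def corr2_circ(a, k):
    # Circular 2-D correlation (the adjoint of conv2_circ).
    return np.real(np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(np.fft.ifftshift(k)))))

def richardson_lucy(blurred, psf, n_iter=50):
    """Plain Richardson-Lucy: x <- x * corr(blurred / conv(x, psf), psf)."""
    x = np.full_like(blurred, max(blurred.mean(), 1e-12))
    for _ in range(n_iter):
        ratio = blurred / np.maximum(conv2_circ(x, psf), 1e-12)
        x = x * corr2_circ(ratio, psf)
    return x

# Two point "components" smeared by a Gaussian point-spread function.
m = 32
truth = np.zeros((m, m)); truth[8, 8] = 1.0; truth[20, 24] = 1.0
yy, xx = np.mgrid[-m // 2:m // 2, -m // 2:m // 2]
psf = np.exp(-(xx**2 + yy**2) / (2 * 2.0**2)); psf /= psf.sum()
blurred = np.clip(conv2_circ(truth, psf), 0.0, None)
restored = richardson_lucy(blurred, psf)
```

After the iterations, `restored` is substantially sharper than `blurred`, which is the effect the deconvolutive spectrogram exploits on the time-frequency plane.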
We have shown that the reassigned and the modified deconvolutive spectrograms share the same nature. The aim of the reassigned spectrogram is to improve the localization of signal components by reallocating them over the time-frequency plane with a weighted integration; the aim of the modified deconvolutive spectrogram is to directly remove and compensate the damping effect of the window function by applying the 2-D deconvolution operation. From the standpoint of the mathematical tools employed, 2-D deconvolution algorithms are more advanced and more reliable than weighted integration, especially for signals with a complicated TFR such as seismic data.
Finally, we applied the method to detect a low-frequency shadow associated with a possible thin gas reservoir on a seismic section. The low-frequency shadow has been used as a seismic indicator of hydrocarbon accumulation. Several explanations have been proposed for this shadow, such as abnormally high attenuation in a gas-filled reservoir and mechanisms introduced during data processing. By spectral decomposition of all traces, a data cube was obtained from the seismic section, from which single-frequency seismic sections were extracted for interpretation. According to the results, the high-amplitude energy on the 20 Hz single-frequency section disappears on the 50 Hz section. This high-amplitude energy, which lies beneath the reservoir, is a hydrocarbon indicator. The superior resolution of the modified deconvolutive spectrogram resulted in remarkably better localization of the reservoir. The modified deconvolutive spectrogram is therefore a fast and effective method for the spectral decomposition of seismic data, especially for seismic attribute extraction.
https://jesphys.ut.ac.ir/article_52405_1fab6b1214d3fe922f05ba2e664fb0b9.pdf

Institute of Geophysics, University of Tehran
Journal of the Earth and Space Physics, ISSN 2538-371X, Vol. 40, No. 4, 22 December 2014

Detection of southern faults of Sabalan volcano on the resistivity section inferred from the magnetotelluric data inversion
pp. 53-65, DOI: 10.22059/jesphys.2014.52406
M. Takalu, B. Oskooi (ORCID 0000-0003-3065-194X), B. Habibian
Journal Article, 2014-02-26

The magnetotelluric method is a frequency-domain electromagnetic (EM) tool that utilizes natural variations in the Earth's magnetic and electric fields as its source. These natural-field variations supply information that makes it possible to study the electrical substructure of the Earth at great depths.
The large frequency range of the EM signals overcomes the problems of conductive overburden and limited sampling frequencies, thereby allowing deep penetration. Natural magnetotelluric (MT) signals arise from a variety of natural current systems, including thunderstorms and the solar wind. The total frequency range of MT data can extend from 40 kHz to less than 10⁻⁴ Hz. Data are acquired passively using a combination of electric sensors and induction-coil magnetometers, and can detect changes of resistivity at great depths (Simpson and Bahr, 2005).
Cagniard (1953) and Tikhonov (1950) developed the theory underlying the magnetotelluric method independently of each other in the 1950s. They both observed that the electric and magnetic fields associated with telluric currents, which flow in the Earth as a result of variations in the Earth's natural electromagnetic field, should relate to each other in a way that depends on the electrical characteristics of the Earth. The ratio of the horizontal electric field to the orthogonal horizontal magnetic field gives the electromagnetic impedance. A major advantage of the MT method is that it simultaneously measures the electric and magnetic fields in two perpendicular directions. The electric sensors determine the electric field from measurements of the voltage difference between the electrode pairs E_x and E_y. Induction coils measure the magnetic field components in three orthogonal directions. The ratio of the recorded electric and magnetic fields gives an estimate of the apparent resistivity of the Earth at any given depth.
The elements of the 2x2 impedance tensor (Z) are derived from complex ratios of the orthogonal components of the horizontal electric and magnetic fields in the frequency domain.
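The apparent resistivity mentioned above follows from the impedance as ρ_a = |Z|² / (μ₀ω). A minimal numerical check against a homogeneous half-space, for which Z = sqrt(iωμ₀ρ) and the impedance phase is 45°:

```python
import numpy as np

mu0 = 4e-7 * np.pi  # vacuum permeability (H/m)

def apparent_resistivity(Z, freq):
    """Apparent resistivity (ohm-m) and phase (deg) from a scalar impedance."""
    omega = 2 * np.pi * freq
    return np.abs(Z) ** 2 / (mu0 * omega), np.degrees(np.angle(Z))

# Synthetic check: over a 100 ohm-m half-space, Z = sqrt(i*omega*mu0*rho),
# so rho_a should recover 100 ohm-m and the phase should be 45 degrees.
freq = 1.0
Z = np.sqrt(1j * 2 * np.pi * freq * mu0 * 100.0)
rho_a, phase = apparent_resistivity(Z, freq)
print(round(rho_a, 1), round(phase, 1))  # → 100.0 45.0
```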
As all the measurement stations in our case are located along a line, the data only permit a two-dimensional interpretation, which requires identifying the TE and TM modes, corresponding to electric and magnetic fields parallel to the geologic strike, respectively. As the geological strike is not known in advance, the components of the electromagnetic fields are measured in geomagnetic (or arbitrary) directions and the impedance tensor is rotated to its principal axes. In the field the strike direction often changes with depth, so the rotation angle varies at each frequency. For two-dimensional structures there are many conditions, and consequently many possible schemes, for determining the rotation angle (Simpson and Bahr, 2005). Here, we minimized the diagonal elements of the impedance tensor. There are two possible strike directions at a given frequency, and the interpreter identifies the TE and TM modes using geological and geophysical information.
Volcanic activity at Sabalan started in the Eocene and resumed in the Pliocene with the eruption of a trachy-andesitic lava flow through the main caldera. Four major lithostratigraphic units are defined in the study area, in the following order: the Dizu formation (Quaternary alluvium and terraces), the Kasra formation (post-caldera, late Pleistocene), the Taos formation (syn-caldera, early Pleistocene) and the Volhazir formation (pre-caldera, Pliocene). The Sabalan fault complex structurally consists of linear faults trending NE-SW, N-S and WNW-ESE, and arcuate structures forming fractures inside and outside the caldera (Sahabi, 1378). The MT data were obtained using three sets of MTU-5A equipment: two roving sites on Mt. Sabalan and one remote-reference site. The raw time series were processed with the Phoenix Geophysics SSMT 2000 software, and the resulting EDI files were edited, analyzed and modeled with Geosystem's WinGLink software.
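Minimizing the diagonal elements of the impedance tensor can be done by a brute-force scan over rotation angles. The tensor values below are synthetic and the scan resolution is arbitrary; this is only a sketch of the idea, and the result carries the usual 90° TE/TM ambiguity mentioned above.

```python
import numpy as np

def rot(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, s], [-s, c]])

def strike_angle(Z, n=1800):
    """Rotation angle (deg, in [0, 90)) minimizing |Zxx|^2 + |Zyy|^2."""
    thetas = np.linspace(0.0, np.pi / 2, n, endpoint=False)
    def diag_power(t):
        Zr = rot(t) @ Z @ rot(t).T
        return abs(Zr[0, 0]) ** 2 + abs(Zr[1, 1]) ** 2
    return np.degrees(min(thetas, key=diag_power))

# Synthetic 2-D tensor: anti-diagonal in strike coordinates, "measured" in a
# frame rotated 30 degrees away from strike.
Z0 = np.array([[0, 2 + 1j], [-1 - 0.5j, 0]])
Zmeas = rot(np.radians(-30)) @ Z0 @ rot(np.radians(-30)).T
print(round(strike_angle(Zmeas), 1))
```

The scan recovers the 30° rotation that restores the anti-diagonal (2-D) form of the tensor.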
Overall, the MT data were of good quality down to 10⁻² Hz. However, some of the data were of poor quality despite repeated measurements, mainly because of bad weather conditions (snow, wind) or because the site was situated on the steep and rocky flanks of Mt. Sabalan. Along the profile MM (Figure 5), a distinct feature is an almost continuous conductive anomaly (
The electric sensors are used to determine the electric field, which is derived from measurements of the voltage difference between the electrode pairs E<sub>x</sub> and E<sub>y</sub>. Induction coils are used to measure the magnetic field components in three orthogonal directions. The ratio of the recorded electric and magnetic fields gives an estimate of the apparent resistivity of the Earth at any given depth. <br />The elements of the 2x2 impedance tensor (Z) are derived from complex ratios of the orthogonal components of the horizontal electric and magnetic fields in the frequency domain. As all the measurement stations in our case are located along a line, the data only permit a two-dimensional interpretation, which requires identification of the TE and TM modes, corresponding to electric and magnetic fields parallel to the geologic strike, respectively. As the geological strike is not known in advance, the components of the electromagnetic fields are measured in geomagnetic (or arbitrary) directions and the impedance tensor is rotated to its principal axes. In the field, the strike direction often changes with depth; accordingly, the rotation angle varies at each frequency. For two-dimensional structures there are many conditions, and consequently many possible schemes, for determining the rotation angle (Simpson and Bahr, 2005). Here, we minimized the diagonal elements of the impedance tensor. There are two possible strike directions for a given frequency, and the interpreter identifies the TE and TM modes using geological and geophysical information. <br />Volcanic activity at Sabalan started in the Eocene and resumed in the Pliocene with the eruption of trachy-andesitic lava flows through the main caldera. 
Four major lithostratigraphic units were defined in the study area, in the following order: the Dizu formation (Quaternary alluvium and terraces), the Kasra formation (post-caldera, late Pleistocene), the Taos formation (syn-caldera, early Pleistocene) and the Volhazir formation (pre-caldera, Pliocene). The Sabalan fault complex structurally consists of linear faults trending NE-SW, N-S and WNW-ESE, and of arcuate structures forming fractures inside and outside the caldera (Sahabi, 1378). The MT data were obtained using three sets of MTU-5A equipment: two roving sites within Mt. Sabalan and one remote-reference site. The raw time series were processed using the Phoenix Geophysics, Ltd. SSMT 2000 software, and the resulting EDI files were edited, analyzed and modeled using Geosystem’s WinGlink software. Overall, the obtained MT data were of good quality down to 10<sup>-2</sup> Hz. However, some of the MT data were of poor quality despite repeated measurements, mainly because of bad weather conditions (snow, strong wind) or because the site was situated along the steep and rocky flanks of Mt. Sabalan. Along profile MM (Figure 5), a distinct feature is an almost continuous conductive anomaly. <br />https://jesphys.ut.ac.ir/article_52406_6a3b2a99959f4b07cf5f653e4380dce0.pdf <br />Journal of the Earth and Space Physics (Institute of Geophysics, University of Tehran), Vol. 40, No. 4, 2014. <br />Kinematic velocity determination for the low-Earth-orbit satellites using the extended Kalman filter: a case study, the GRACE twin satellites <br />A. Safari, M. A. Sharifi, S. Farzaneh. DOI: 10.22059/jesphys.2014.52412 <br />Global Positioning System (GPS) receivers on board gravimetric satellites continuously measure valuable information about the 3-D satellite position. 
However, the velocity of the satellites, which has important applications in satellite geodesy such as gravity field recovery, cannot be measured directly. The position data are used in the energy integral method, or in other methods based on the equation of motion in the Earth’s gravity field, to determine the velocity or acceleration of the satellite. In this study, the velocity vector is computed using numerical differentiation and Kalman filtering for the Gravity Recovery And Climate Experiment (GRACE) twin satellites. The numerical results show that Kalman filtering yields more accurate results than numerical differentiation when both are compared with the intersatellite range-rate measurements. <br />In the wake of the new gravity-satellite era opened by the launch of the CHAllenging Minisatellite Payload (CHAMP), GRACE and GOCE, processing methods for enormously large orbit data sets have become a focus of geodetic interest. The input data differ from earlier times, as they contain some millions of continuous position data per satellite per year. The huge amount of data arises from continuous observation of the GPS constellation by these satellites. This is possible because of the much higher altitude of the GPS satellites (20,000 km) compared to that of the gravity satellites (between 250 and 500 km); the latter are often referred to as Low Earth Orbiters (LEOs). The GPS-LEO constellation described above is, in technical terms, called High-Low Satellite-to-Satellite Tracking (High-Low SST). Thus some millions of position data of the LEOs are the basis of global gravity field determination techniques. The concept behind the solutions is that the satellites are in free fall in the gravity field of the Earth. After modeling and removing all further force sources (e.g. 
gravitation of the Sun, the Moon and the other planets, direct and indirect tides, and surface forces such as atmospheric drag and solar radiation pressure), the remaining orbit is a trajectory in space governed purely by the gravity field of the Earth. The task is therefore only to determine the force behind the motion. Conservation laws can be applied successfully to satellites. Newton’s equation of motion relates the forces acting on the satellite to its acceleration, so applying it requires information about the acceleration along the orbit. In this article, the velocity vector is derived as part of the unknown state vector in the Kalman filter algorithm. <br />The Kalman filter is a well-known mathematical tool that answers one of the most frequent engineering questions: how can we obtain the best estimate of the system state from noisy measurements? It is a data processing algorithm that estimates the state of a system from noisy measurements in the least-squares sense. It gives the optimal estimate of the system state together with a measure of how precise the estimate is compared to the true state. The Kalman filter yields the optimal solution for a linear process with uncorrelated, white, zero-mean Gaussian process and measurement disturbances. The Kalman filter is a “one-step-back recursive filter”, meaning that there is no need to store past measurements in order to compute the present estimate. <br />We assume that the discrete random kinematic process to be estimated can be modeled with the two main Kalman filter equations: <br />Process equation: x<sub>k</sub> = A x<sub>k-1</sub> + w<sub>k-1</sub> <br />Measurement equation: z<sub>k</sub> = H x<sub>k</sub> + v<sub>k</sub> <br />where x is the system state vector, A is the state transition matrix, w is the process noise, z is the measurement vector, H is the measurement matrix and v is the measurement noise. The discrete-time nonlinear formulation of the Extended Kalman Filter is based on this discrete-time linear formulation of the Kalman filter. 
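The linear predict/update cycle described above can be sketched numerically. This is a minimal illustration in which velocity is part of the estimated state, as in the paper; the constant-velocity toy system and all numbers are hypothetical, not the GRACE configuration:

```python
import numpy as np

def kf_step(x, P, z, A, H, Q, R):
    """One predict/update cycle of the discrete linear Kalman filter.
    x, P : prior state and covariance;  z : new measurement;
    A : state transition, H : measurement matrix, Q/R : noise covariances."""
    # Predict: propagate state and covariance through the process model
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # Update: blend the prediction with the measurement via the Kalman gain
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# 1-D constant-velocity toy: state = [position, velocity], only position observed,
# so the velocity (as in the satellite case) is recovered indirectly by the filter.
dt = 1.0
A = np.array([[1.0, dt], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q = 1e-4 * np.eye(2)
R = np.array([[0.25]])
x, P = np.zeros(2), np.eye(2)
for k in range(50):
    z = np.array([2.0 * (k + 1)])   # track moving at 2 units per step
    x, P = kf_step(x, P, z, A, H, Q, R)
```

After a short transient, the estimated velocity x[1] settles near the true value of 2 even though velocity is never measured, which mirrors how the velocity vector enters the state in the satellite application.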
For the state-space model of the Extended Kalman Filter (EKF), the above linear equations are replaced by nonlinear functions: <br />x<sub>k</sub> = f(x<sub>k-1</sub>) + w<sub>k-1</sub> <br />z<sub>k</sub> = h(x<sub>k</sub>) + v<sub>k</sub> <br />In this study, the velocity vector is computed using numerical differentiation and Kalman filtering for the GRACE twin satellites. The numerical analysis shows that Extended Kalman filtering yields the optimal solution; the comparison is performed based on the intersatellite range-rate measurements. <br />https://jesphys.ut.ac.ir/article_52412_0b0a0561f42221a515683c295ea409c9.pdf <br />Journal of the Earth and Space Physics (Institute of Geophysics, University of Tehran), Vol. 40, No. 4, 2014. <br />Improvement in gravity field modeling using collocation by means of crust density, global geopotential models and combination of heterogeneous observations <br />A. Safari, S. Ramouz, A. A. Jomegi. DOI: 10.22059/jesphys.2014.52417 <br />In this paper we use the Least Squares Collocation (LSC) method for geoid determination and Earth gravity field modeling in the Coastal Pars region of southern Iran. LSC is one of the Earth gravity field modeling methods that does not need regularization, in contrast to solutions of the Geodetic Boundary Value Problem (GBVP) such as the Stokes approach. 
Also, unlike purely statistical methods, LSC can account for systematic effects in the data (trend), predict quantities between the data points (interpolation), and estimate the quantity at the data points (filtering). The main advantage of LSC is its capability of incorporating heterogeneous data: gravimetric as well as geometric data can be used as inputs of the target function. <br />In the first section, we briefly introduce LSC. In Section 2, we describe the fundamentals of LSC in a geometric space and the way it connects the statistical concept of the covariance function with the least-squares error constraint in order to reproduce the kernel function in a Hilbert space, which leads to the "Least Squares Collocation". Furthermore, the Wiener-Kolmogorov formula (Equation 7) is introduced as a solution for LSC. Further in Section 2, we explain our approach of using LSC with random errors to adapt the theory to noisy data (Equation 28). <br />In Section 3, the concept of the "true covariance function" (Equation 12) and the procedure for estimating its "empirical covariance function" (Equation 34), based on the two essential assumptions of stationarity and ergodicity, are described. We divide the covariance function into global and local subclasses and explain their structures individually. We also describe covariance function modeling in LSC by fitting an analytical covariance model (derived from a true covariance function) to an empirical covariance function (obtained from local gravity data) (Equation 63). We demonstrate that an analytical covariance model can be generated by determining the covariance model parameters by least-squares inversion (Equation 65). <br />In Section 4, we use gravity anomaly data for determining the geoid by applying LSC. Tscherning's algorithm (Figure 5) is used to implement the LSC theory. 
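The Wiener-Kolmogorov prediction at the core of LSC, ŝ = C<sub>sl</sub> (C<sub>ll</sub> + C<sub>nn</sub>)<sup>-1</sup> l, can be sketched in a few lines. The Gaussian covariance model and all numerical values below are hypothetical illustrations, not the covariance model or data of this study:

```python
import numpy as np

def lsc_predict(cov, pts_obs, pts_new, l_obs, noise_var):
    """Least-squares collocation prediction (Wiener-Kolmogorov form):
    s_hat = C_sl (C_ll + C_nn)^-1 l, with an isotropic covariance model
    cov(d) of the separation distance d and uncorrelated noise."""
    d_ll = np.linalg.norm(pts_obs[:, None] - pts_obs[None, :], axis=-1)
    d_sl = np.linalg.norm(pts_new[:, None] - pts_obs[None, :], axis=-1)
    C_ll = cov(d_ll) + noise_var * np.eye(len(pts_obs))   # signal + noise cov.
    C_sl = cov(d_sl)                                      # signal-to-obs cov.
    return C_sl @ np.linalg.solve(C_ll, l_obs)

# Hypothetical Gaussian covariance model: variance 25, correlation length 10.
cov = lambda d: 25.0 * np.exp(-(d / 10.0) ** 2)
obs = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
vals = np.array([1.2, 0.8, -0.4])            # e.g. gravity anomalies
new = np.array([[5.0, 5.0]])
pred = lsc_predict(cov, obs, new, vals, noise_var=0.01)
```

With the noise variance near zero, predicting at an observation point reproduces the observed value (filtering degenerates to interpolation), which is a quick sanity check on any covariance model fit.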
As in collocation theory the function used in the Hilbert space must be harmonic, we assume in our observational space (a sphere representing the Earth) that there is no mass above the geoid surface. To guarantee this, the "Remove-Compute-Restore" method is used. Based on the field operations conducted by the Department of Geophysics (Exploration Directorate of the National Iranian Oil Company, 2004) in our case study, the topographic density is estimated at about 2.3 g/cm<sup>3</sup>. <br />Finally, in Section 5, we evaluate the results against 15 GPS/Leveling control points in the region; a root mean square (RMS) value of 0.052544 m is achieved. In another experiment we use LSC for geoid determination with the same data but a topographic density of 2.67 g/cm<sup>3</sup>; the achieved RMS is 0.06695 m. Comparing the two experiments indicates that, in the Coastal Pars region, the topographic density value (2.3 g/cm<sup>3</sup>) determined by the Department of Geophysics provides a better estimate than the global value (2.67 g/cm<sup>3</sup>). The section concludes with a further comparison between the LSC geoid and the geoids derived in the region from the Earth Gravitational Model 1996 (EGM96) and EGM2008 geopotential models. Our analysis demonstrates that the geoids obtained from the EGM models are shifted by about 20 centimeters with respect to those obtained by LSC. <br />https://jesphys.ut.ac.ir/article_52417_04dd13aef85a2da2ecd806ccc24f2d3d.pdf <br />Journal of the Earth and Space Physics (Institute of Geophysics, University of Tehran), Vol. 40, No. 4, 2014. <br />Separation of saltwater and freshwater using sequential Gaussian simulation in resistivity measurements <br />S. Soleimani, O. Asghari, M. K. Hafizi. DOI: 10.22059/jesphys.2014.52421 
Saltwater intrusion into freshwater aquifers in coastal areas has been a serious concern for many countries, and providing fresh water in some regions is crucial; areas prone to saltwater encroachment should therefore be checked meticulously. The preferred tool for such an investigation is a precise 3-D model of the distribution of fresh and salt water. In order to reach such a model, reliable measurements and comprehensive resistivity interpretations are needed. The purpose of this study is to use geostatistical simulation to build a 3-D aquifer model from the results of resistivity surveys, that is, to delineate the boundary between saltwater and freshwater in the aquifer. Geostatistical simulation provides a robust tool for presenting the results obtained from the interpretation of resistivity data. By assessing the risk and uncertainties associated with the measurements at hand, it also provides a basis for a precise economic study and therefore for more detailed financing and planning. Most prediction/estimation methods involve some form of averaging, which smooths the results and reduces the amplitude of local fluctuations. Geostatistical simulation methods, however, reproduce minor and local differences more precisely than other methods; in other words, the simulation does not reduce the variance of the data, so the minimum and maximum values are reproduced. The data for this study were acquired in the Borazjan plain in Boushehr province, southern Iran. 82 Vertical Electrical Soundings (VES) with a Schlumberger array were conducted along 6 profiles in the study area. The distance between two subsequent measurements is 200 m, and the lateral distance between two neighboring profiles is 1000 m. The current-electrode spacing (AB) is increased from 1.5 m to 1000 m, and each logarithmic decade contains 6 measurements. 
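For a Schlumberger array, each measurement is converted to an apparent resistivity via the geometric factor K = π·((AB/2)² − (MN/2)²)/MN, so that ρ<sub>a</sub> = K·ΔV/I. A minimal sketch (the electrode spacings, voltage and current below are illustrative values, not survey data):

```python
import math

def schlumberger_rho_a(ab_m: float, mn_m: float,
                       delta_v: float, current: float) -> float:
    """Apparent resistivity for a Schlumberger array:
    rho_a = K * dV / I, with K = pi * ((AB/2)^2 - (MN/2)^2) / MN."""
    k = math.pi * ((ab_m / 2.0) ** 2 - (mn_m / 2.0) ** 2) / mn_m
    return k * delta_v / current

# AB = 100 m, MN = 10 m, 50 mV measured while injecting 100 mA:
rho = schlumberger_rho_a(100.0, 10.0, 0.05, 0.1)   # ohm-metres
```

Expanding AB from 1.5 m to 1000 m, as in this survey, pushes the current deeper and yields the sounding curve that the 1-D inversion then fits.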
Each profile is oriented northwest-southeast. After data gathering, apparent resistivity sections were produced with resistivity-processing software. The data were then inverted using inversion software and standard curves, yielding the best-fitting layered-earth model. After interpretation of the initial data, the true resistivity values of the aquifer were introduced to the sequential Gaussian simulation algorithm as input data. Given the nature of 1-D resistivity inversion, the most informative maps and sections are those that display a coherent range of resistivity variations; accordingly, only simulations capable of reproducing such a coherent range of resistivity values are used in this study. We therefore use the Sequential Gaussian Simulation method, which has this property by construction, to simulate the aquifer; for this purpose, the data are first transformed to a Gaussian (normal-score) distribution. <br />In order to investigate the anisotropy in the region, directional variography was carried out, and the best variogram model was chosen using cross-validation. Anisotropy shows up as range/sill variations of the variogram in different orientations; the variogram is thus a useful tool for identifying heterogeneities in the investigated area. Using 50 m×50 m×10 m blocks and the sequential Gaussian simulation algorithm, the simulation was run 100 times, yielding 100 realizations. The simulation results (realizations) are only acceptable when they reproduce the reference histogram and variogram, in this case those of the raw data of the aquifer. After the simulation results were validated, the E-type map was derived. This map shows the average simulated value for each block, obtained by averaging the values of the 100 realizations, and constitutes a 3-D model of the true resistivity distribution within the aquifer. 
The increase of the resistivity values can be observed in this map. <br />Among the most important results obtained from the realizations are the probability maps. These maps show the probability of exceeding a defined value and are derived by counting the number of times the resistivity value of a block exceeds a given resistivity threshold across all realizations. The probability map can thus serve as a good criterion for selecting drilling positions for freshwater exploitation: using the probability maps, freshwater positions can be identified with a probability of 1 or very close to 1. In order to compare the estimated model with the data of a drilled well located between two profiles, at the farthest distance from either, a grid was designed from which the aquifer resistivity values at the position of the well could be estimated. The agreement between the resistivity values obtained by geostatistical simulation on this grid and the resistivity values of the aquifer at the well position confirms that the model accurately reflects the real aquifer. 
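The E-type and probability maps described above reduce to simple per-block statistics over the stack of realizations. A minimal sketch with a hypothetical tiny grid and lognormal dummy values standing in for the 100 SGS realizations:

```python
import numpy as np

def etype_and_probability(realizations: np.ndarray, threshold: float):
    """From a stack of simulated fields (shape [n_real, nx, ny, nz]) derive
    the E-type map (per-block mean over realizations) and the probability
    map (fraction of realizations exceeding `threshold` in each block)."""
    e_type = realizations.mean(axis=0)
    p_exceed = (realizations > threshold).mean(axis=0)
    return e_type, p_exceed

# Hypothetical stand-in for 100 realizations on a 4 x 4 x 2 block grid:
rng = np.random.default_rng(0)
reals = rng.lognormal(mean=3.0, sigma=0.5, size=(100, 4, 4, 2))
e_map, p_map = etype_and_probability(reals, threshold=20.0)
```

Blocks where p_map is 1 (or very close to 1) are those flagged as reliable freshwater targets in the study.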
https://jesphys.ut.ac.ir/article_52421_52efa37fd79e53fe4654b571a2fbcd56.pdf <br />Journal of the Earth and Space Physics (Institute of Geophysics, University of Tehran), Vol. 40, No. 4, 2014. <br />Improving the results of singular value decomposition inversion using direct transformation of frequency-domain HEM data <br />A. Asadian, A. Moradzadeh, A. R. Arabamiri, A. Nejati, D. Rajabi. DOI: 10.22059/jesphys.2014.52423 <br />Helicopter-borne electromagnetics (HEM) is a fast, high-resolution airborne electromagnetic (AEM) method frequently used for imaging subsurface resistivity structures. It is a versatile and cost-effective method widely applied in mineral and groundwater exploration and in various environmental problems. Modern frequency-domain HEM systems use small electromagnetic, magnetic, Global Positioning System (GPS) and laser altimeter sensors encapsulated in a "bird", a cigar-shaped tube about 9 m long that is kept at about 30-40 m above ground level. The separation between the rigidly mounted transmitter and receiver coils typically lies between 4 and 8 m. Modern HEM systems are multi-frequency devices operating at 4-6 frequencies, ranging from 200 Hz to 200 kHz. 
In this method, a sinusoidal current flowing through the transmitter coil generates a primary magnetic field at a given frequency that is very close to a dipole field at some distance from the transmitter coil. The primary oscillating magnetic field induces eddy currents in the subsurface of the Earth. These currents, in turn, generate a secondary magnetic field, which is related to the Earth's resistivity distribution. The receiver coils measure the induced secondary magnetic field relative to the primary magnetic field in parts per million (ppm). Due to the induction process of the electromagnetic (EM) field, there is a small phase shift between the primary and secondary fields. In practice, the transmitter coil is horizontal (VMD: vertical magnetic dipole) or vertical (HMD: horizontal magnetic dipole) and the receiver coil is oriented in a maximally coupled position, resulting in horizontal coplanar (HCP), vertical coplanar (VCP), or vertical coaxial (VCA) coil systems. <br />The final results of frequency-domain HEM data are normally presented in the form of resistivity maps at various frequencies or depth levels, or as resistivity depth sections along the survey lines for interpretation. The vertical resistivity sections are constructed by concatenating the resistivity models for every measuring point along a survey line. Many techniques have been developed over the past 35 years to model the measured HEM data and prepare these resistivity maps or depth sections. They fall into two general groups: (1) direct transformation of the data into a generalized model such as a half-space, and (2) inversion of the data to a specific model such as a layered Earth, for which a starting model is used, followed by iterative fitting of the data in the least-squares sense. <br />In the direct transformation approaches (e.g. 
the Sengpiel (1988) and Siemon (2001) centroid depth methods), upon the calculation of the centroid depth and apparent resistivity values for each frequency, a vertical pseudo-section of the resistivity is created by concatenating the resistivity-depth curve (smooth model) for each point of a survey line. In these schemes an approximate resistivity model is quickly acquired without the need for a starting or initial resistivity model. In the iterative inversion methods, however, the EM data are modeled inversely using a starting model to obtain a precise resistivity model. The final outputs of these inversion techniques are highly dependent on the correct selection of the starting model. One of the most effective and accurate methods is the layered-earth inversion using the Levenberg-Marquardt (LM) technique based on the singular value decomposition (SVD). Despite the high capability of this inversion technique, it had not previously been used for modeling HEM data in Iran. For this reason, this paper aims to verify the accuracy of the final inversion results of the HEM data under various choices of starting model. The required inversion computer codes were developed in Matlab. This inversion routine was tested on noise-free and noise-contaminated synthetic data of layered and 3-D models. The obtained results indicate that the final resolved model is in good agreement with the true model in each case. In addition, a set of real HEM data from the southern parts of Damghan city in Semnan Province has been inverted with this program, and the results have been compared with those obtained with the direct transformation methods. The results show that the SVD inversion may converge to a wrong model when a good starting model is not available. 
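The Levenberg-Marquardt step computed through the SVD of the Jacobian can be sketched as follows. This is a minimal illustration, not the authors' Matlab code: a toy exponential forward model stands in for the layered-earth HEM response, and all names and values are illustrative assumptions.

```python
import numpy as np

def lm_svd_step(J, r, lam):
    """One damped (LM) update computed through the SVD of the Jacobian J:
    delta = V diag(s / (s^2 + lam)) U^T r."""
    U, s, Vt = np.linalg.svd(J, full_matrices=False)
    return Vt.T @ ((s / (s**2 + lam)) * (U.T @ r))

def invert(forward, jacobian, data, m0, lam=1e-2, n_iter=50):
    """Iteratively fit model m to data with a fixed damping factor lam."""
    m = m0.copy()
    for _ in range(n_iter):
        r = data - forward(m)                  # residual
        m = m + lm_svd_step(jacobian(m), r, lam)
    return m

# Toy forward model standing in for the HEM response: d_i = m0 * exp(-m1 * t_i)
t = np.linspace(0.0, 2.0, 20)
fwd = lambda m: m[0] * np.exp(-m[1] * t)
jac = lambda m: np.column_stack([np.exp(-m[1] * t),
                                 -m[0] * t * np.exp(-m[1] * t)])

true_m = np.array([2.0, 1.5])
data = fwd(true_m)                             # noise-free synthetic data

good = invert(fwd, jac, data, m0=np.array([1.5, 1.0]))
# With a reasonable starting model the damped iteration recovers the true model;
# a poor starting model can send the same iteration to a wrong local minimum.
```

Truncating or damping the small singular values is what stabilizes the step when the Jacobian is ill-conditioned, which is the typical situation for layered-earth EM inversion.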
The results also indicate that if the Sengpiel or Siemon centroid depth resistivity models are used as the starting model of the SVD inversion, the final resistivity models are superior to those obtained with the starting models yielded by the other direct transformation methods. 
https://jesphys.ut.ac.ir/article_52423_456b4b1c1afa1c0fc5176ae2ce411d07.pdf

Institute of Geophysics, University of Tehran. Journal of the Earth and Space Physics, ISSN 2538-371X, Vol. 40, No. 4, 2014-12-22. "Investigation of the climatological effects of the stratospheric polar vortex in Southwest Asia", pp. 127-138, Article 52424, DOI: 10.22059/jesphys.2014.52424. K. Abbaszadeh, A. R. Mohebalhojeh, F. Ahmadi-Givi. Journal Article, 2013-10-14.

The relationship between stratospheric conditions and evolution and the surface weather has not only revolutionized our understanding of the functions of the different atmospheric layers and their dynamics, but has also brought about potential implications for weather and climate prediction in almost any region of the planet. Obtaining a better understanding of this field, particularly in Southwest Asia, is the motivation for this study. The NCEP/NCAR reanalysis data, including minimum and average daily temperature, geopotential height, precipitation rate, pressure and relative humidity at the surface or at various atmospheric levels, are used. Because between 1948 and 1957 observations of the upper atmosphere were less frequent and were made at synoptic hours different from today's main synoptic times, the reanalysis data for that period are less reliable (Kistler et al., 2001); hence this period has been omitted in the present study and only the data from 1958 to 2011 were used. The study region covers 25-45°N and 35-65°E, which includes Iran and extends westward to the Mediterranean Sea. The grid points are 2.5 degrees apart in both longitude and latitude. In a procedure similar to that of Thompson et al. (2002), the mean daily surface temperature and the frequency of cold events are compared in a 60-day interval following weak and strong vortex conditions. A cold event is defined as a day in which the minimum temperature falls more than one standard deviation beneath the January to March (JFM) climatological mean. The stratospheric polar vortex is called weak or strong when the absolute value of the daily geopotential height anomaly at 10 hPa, averaged over 60-90°N, is more than twice the JFM climatological standard deviation. 
Other variables such as relative humidity at 850 hPa, surface pressure, precipitation rate and temperature anomaly are also compared during these intervals. For each day, the temperature anomaly and its standard deviation are evaluated using the climatological mean value for the month containing that day. Values of a winter severity index (Thompson et al., 2002) are calculated and compared for every winter day using 54 years of temperature data in the two cases of weak and strong polar vortex. The index is proportional to the standardized squared minimum daily temperature and is nonzero if and only if the latter temperature is below a specified threshold value. The winter severity index is averaged over all grid points within our domain of study. A randomization test is used to estimate the significance of the differences observed in the number of cold events after the weak and strong vortex conditions.
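A randomization (permutation) test of this kind can be sketched as follows; the cold-event counts below are illustrative assumptions, not the study's data.

```python
import numpy as np

def randomization_test(weak, strong, n_perm=5000, seed=0):
    """Significance of the difference in mean cold-event counts,
    estimated by randomly reassigning the weak/strong labels.

    weak, strong: 1-D arrays of cold-event counts in the 60-day windows
    following weak and strong vortex conditions respectively.
    Returns the two-sided permutation p-value.
    """
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([weak, strong])
    observed = abs(weak.mean() - strong.mean())
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)          # shuffle the labels
        w, s = perm[:len(weak)], perm[len(weak):]
        if abs(w.mean() - s.mean()) >= observed:
            count += 1
    return count / n_perm

# Illustrative counts (not the study's data): fewer cold events
# follow weak vortex conditions, as reported below.
weak_counts = np.array([3, 2, 4, 3, 2, 3, 4, 2])
strong_counts = np.array([6, 5, 7, 6, 5, 6, 7, 5])
p = randomization_test(weak_counts, strong_counts)
# A small p-value corresponds to a high confidence level (1 - p).
```

Because the test resamples the observed counts themselves, it makes no distributional assumption, which is why it suits count data like cold-event frequencies.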
It is shown that in weak vortex events, daily mean surface temperatures decrease compared to the strong vortex conditions in most parts of the study region; nevertheless, cold events become less frequent. Exceptions are two distinct locations in the east and northwest of the country, which appear to have higher daily mean temperatures following the weak vortex conditions. The entire region also shows a decrease in surface pressure relative to the strong events, which can reach up to 30 hPa in some parts, including a region located in the southwest of Iran. Weak vortex events are followed by higher relative humidity in most parts of the region (the enhancement reaches six percent in certain areas), which may be linked to the aforementioned drop in surface pressure. The difference in precipitation rate following weak and strong vortices varies from one part of the region to another, showing both increases and decreases, with a maximum absolute value of 12.5 millimeters per month. A positive temperature anomaly difference is identified in almost the entire region, in agreement with expectations regarding the frequency and magnitude of positive and negative temperature anomalies. The randomization test reveals a confidence level of 92 to 99 percent for the observed differences between the frequencies of cold events after the weak and strong vortex conditions, depending on the severity of the events. The winter severity index also confirms the previous findings regarding the frequency of cold events following weak and strong vortices, demonstrating a higher value on almost every winter day for the case of a strong vortex compared to its climatological mean value and the weak vortex conditions.
Although the strength of the stratospheric polar vortex has a less dramatic impact on the study region than on regions such as the northern parts of Asia and North America, the current study reveals patterns of weather and climate variability related to the polar vortex conditions in the region, which can have important implications for long-term forecasting purposes. 
https://jesphys.ut.ac.ir/article_52424_0b1b3c85c274d15bbe85f2135c0d4ec2.pdf

Institute of Geophysics, University of Tehran. Journal of the Earth and Space Physics, ISSN 2538-371X, Vol. 40, No. 4, 2014-12-22. "Study of atmospheric total ozone variations due to winter synoptic scale disturbances over Iran", pp. 139-154, Article 52425, DOI: 10.22059/jesphys.2014.52425. Z. Sharie, A. A. Bidokhti. Journal Article, 2013-10-29.

Ozone is an important chemical constituent of the atmosphere. In the stratosphere it protects the Earth's surface from harmful UV radiation, for which it is known as “good ozone”, while in the lower troposphere it is a highly oxidizing pollutant known as “bad ozone”. It is also a greenhouse gas that can affect the climate. Hence, the study of the spatial and temporal variations of ozone in the atmosphere is important. Large-scale atmospheric flows, especially large-scale wave activity, can contribute to stratosphere-troposphere interaction, leading to the vertical exchange of ozone, in particular the transfer of good ozone to the troposphere, where it reacts chemically fast, thereby regionally changing the total atmospheric ozone. These large-scale synoptic systems often occur in winter, when the baroclinic conversion of potential energy to kinetic energy in the middle latitudes is largest. Wave activity at this time is also greatest, leading to meridional and vertical transfer of air. 
<br />In this work the total atmospheric ozone variations over Iran due to upper-tropospheric wave activity in winter for the period 2005-2013 have been investigated. The data are acquired from the Ozone Monitoring Instrument (OMI) satellite, while the ground ozone data are from the Geophysics Station (51°23′E, 35°44′N, 1419 m above sea level) of the University of Tehran. The OMI data have a resolution of 1 degree and have been acquired from the TOMS site. The synoptic data have been obtained from NOAA. <br />Results show that large-scale synoptic troughs and ridges are associated with daily increases (up to 140 DU) and decreases (down to 75 DU) of total ozone, respectively. The patterns of total ozone distribution over this area are well correlated with the 300 hPa geopotential maps. <br />The variations of total ozone cover most of the Iranian Plateau, particularly 30-35°N and 50-60°E. Regions with the maximum variations of total ozone are also found in areas with the largest gradients of ozone concentration. The areas with the largest increase or decrease of total ozone are found at the axes of troughs and ridges, respectively, where large vertical motions are expected and the westerly component of the subtropical jet stream has the least intensity. It is also found that at the axes of troughs, the gradient of ozone with respect to geopotential height is between 0.2 and 0.8 DU/gpm, with an average of 0.5 DU/gpm. In the dominant synoptic patterns associated with variations of total ozone, vertical motions of 0.2 Pa/s and typical meridional velocities of 30 m.s<sup>-1</sup> are found, indicating large wave activity in the region. Also, regions of maximum ozone appear as bands with their axes often in the northeast-southwest direction, corresponding to the final stage of the development of large-scale mid-latitude baroclinic disturbances. 
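A DU/gpm gradient of this kind can be estimated with a simple least-squares fit of total ozone against geopotential height; the sample values below are illustrative assumptions, not observations from the study.

```python
import numpy as np

def ozone_height_gradient(ozone_du, height_gpm):
    """Least-squares slope of total ozone (DU) against 300 hPa
    geopotential height (gpm), i.e. the DU/gpm gradient."""
    slope, _ = np.polyfit(height_gpm, ozone_du, 1)
    return slope

# Illustrative samples across a trough axis (not observed values):
# lower geopotential heights accompany higher total ozone.
height = np.array([8900.0, 8950.0, 9000.0, 9050.0, 9100.0])
ozone = np.array([360.0, 335.0, 310.0, 285.0, 260.0])

grad = ozone_height_gradient(ozone, height)
# Here the slope is -0.5 DU/gpm: a magnitude of 0.5, matching the sign
# convention that troughs (low heights) carry more total ozone.
```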
These regions with such activities also cover the whole area of Iran usually from the Persian Gulf to the Caspian Sea. 
https://jesphys.ut.ac.ir/article_52425_9bfe9a36af2cce3feefed64435c5eebc.pdf

Institute of Geophysics, University of Tehran. Journal of the Earth and Space Physics, ISSN 2538-371X, Vol. 40, No. 4, 2014-12-22. "Regionalization of Iran based on extreme warm temperatures", pp. 155-168, Article 52426, DOI: 10.22059/jesphys.2014.52426. A. Asadi, A. Masoodian. Journal Article, 2013-11-18.

Temperature is one of the essential elements forming a climate and plays a crucial role in the lives of flora and fauna and in human activities. Extreme temperature is one of the thermal indices used in meteorological and climatological studies, and it is divided into two types: extreme warm and extreme cold. 
Extreme warm refers to temperatures much above the normal value, and extreme cold to temperatures much below it. Studying extreme warm events is of prominent importance because of their social and economic effects and their impact on human health. In order to regionalize the extreme warm events of Iran, we used the Sphezari dataset. This dataset provides the average temperature based on daily data from 663 synoptic and climatological stations from 1 January 1961 to 31 December 2004, gridded into 15 × 15 km<sup>2</sup> pixels by the kriging method. The day-to-day temperature matrix of Iran in the Sphezari dataset therefore has dimensions 15992 × 7187, where the rows (15992 days) represent time and the columns (7187 pixels) represent place. We have used the normalized temperature departure index, introduced by Fujibi et al. (2007), to identify extreme warm events in this survey. To obtain this index, the long-term average temperature of each calendar day must first be calculated. <br />The temperature values of the 44 years are averaged to calculate the long-term mean temperature of each calendar day. To suppress the noise in the daily mean temperature, a nine-day running average was applied three times in order to filter out day-to-day irregularities. After these steps, the temperature departure (Δ<em>T</em>) of each of the 15992 days from the long-term mean of the same calendar day is computed. The absolute temperature departure must then be standardized so that departures at different times for one geographical point, and at different locations at one particular time, can be compared with each other. 
As an index of day-to-day variability, the variance of Δ<em>T</em> in the 31 days centered on each calendar day was calculated, and a nine-day running mean was again applied three times to diminish the noise. The normalized temperature departure (NTD), denoted x*, was then calculated for each of the 7187 pixels on each of the 15992 days. The spatial distribution of x* over Iran was investigated and the percentage of the area of Iran with x* ≥ 2 was determined. In this way, a 15992 × 2 matrix was obtained, ranking the warmest days of Iran for the period 1 January 1961 to 31 December 2004. This matrix was sorted according to the mean NTD and the affected area, and the first 264 days were selected as the sample: on these days the temperature over a large part of Iran was at least two standard deviations above its long-term mean (x* ≥ 2). The NTD of the 7187 pixels on the selected 264 days was classified using agglomerative hierarchical cluster analysis. The results of this research showed that, according to the extreme warm events, Iran can be classified into five distinctive regions. The most important characteristics of the extreme warm events in Iran are as follows. Most of the extreme warm events of Iran have occurred on winter and autumn days. The maximum of warm events has occurred in the west and southwest of Iran, especially in recent years, where the NTD is about one degree above that of the other areas. The position of this region, with the maximum NTD values, shows that the systems creating the extreme warm events enter from the west and southwest of Iran; these regions are therefore influenced earlier and more strongly than the others. The highest spatial standard deviation also belongs to these regions, meaning that they have little spatial similarity from the viewpoint of the NTD index: the systems creating the extreme warm events do not affect this region uniformly. 
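The NTD procedure for a single pixel (nine-day running mean applied three times, departure from the calendar-day climatology, and standardization by the local day-to-day variability, here taken as the square root of the 31-day variance) can be sketched as follows. The synthetic temperature series and all array shapes are illustrative assumptions, not the Sphezari data.

```python
import numpy as np

def running_mean9(x):
    """Nine-day running mean of a 1-D calendar-day series (edge-padded)."""
    pad = np.pad(x, (4, 4), mode='edge')
    return np.convolve(pad, np.ones(9) / 9.0, mode='valid')

def ntd(temps):
    """Normalized temperature departure x* for one pixel.

    temps: daily mean temperatures reshaped to (n_years, 365).
    """
    clim = temps.mean(axis=0)                  # calendar-day climatology
    for _ in range(3):                         # smooth three times
        clim = running_mean9(clim)
    dT = temps - clim                          # temperature departure
    # Day-to-day variability: std of dT in the 31 days centred on each day
    padded = np.pad(dT, ((0, 0), (15, 15)), mode='wrap')
    sigma = np.array([padded[:, d:d + 31].std() for d in range(365)])
    for _ in range(3):                         # smooth the variability too
        sigma = running_mean9(sigma)
    return dT / sigma                          # x*

# Illustrative synthetic pixel: an annual cycle plus noise over 44 years.
rng = np.random.default_rng(1)
days = np.arange(365)
temps = (15 + 10 * np.sin(2 * np.pi * days / 365)
         + rng.normal(0, 2, size=(44, 365)))

x_star = ntd(temps)
hot_days = x_star >= 2         # candidate extreme warm events
```

Standardizing by the locally smoothed variability is what makes x* comparable across seasons and across pixels with different noise levels.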
Some regions are influenced more and some less than others by these systems. <br />The maximum temporal standard deviation belongs to the northern and western regions, meaning that extreme warm events in these regions are concentrated in certain months; the systems creating the extreme warm events there are therefore active only in part of the year. The least temporal standard deviation belongs to the northeastern region, and the least spatial standard deviation belongs to the south and southeast regions. 
https://jesphys.ut.ac.ir/article_52426_82ffbacd6f81c005e893f594cd725afa.pdfInstitute of Geophysics, University of TehranJournal of the Earth and Space Physics2538-371X40420141222Dispersion of suspended aerosols in a turbulent flowDispersion of suspended aerosols in a turbulent flow1691805242710.22059/jesphys.2014.52427FAO.GhaffarpasandF.HosseiniE.HassanzadehJournal Article20131217In this study, the dispersion mechanisms of aerosols suspended in a turbulent plane channel flow are investigated using a novel numerical approach. 
A turbulent channel flow is simulated with a Direct Numerical Simulation (DNS) method, for which no-slip boundary conditions are imposed at the top and bottom walls, while periodic conditions are applied on the other sides. DNS, in particular, allows a detailed analysis of the near-wall region, where most of the particle transfer mechanisms take place; it is therefore the method best suited to a detailed analysis of the dispersion mechanisms among the available approaches. The turbulent flow simulation is run for 14,000 time units, long enough for a fully developed turbulent state to be reached. <br /> Aerosols with two Stokes numbers, 15 and 25, are then introduced into the simulated channel flow and tracked with a Lagrangian approach. Because of the aerosol size, the drag force dominates over the effect of Brownian motion. The initial concentration of suspended aerosols is assumed to be low enough that the simulations can be conducted under the one-way coupling condition, and collisions of aerosols with the walls are assumed to be elastic. Particle tracking is continued throughout the fluid simulation time to obtain reliable statistics. <br />Comparison of the particle flux intensities indicates that the turbophoretic and turbulent diffusion fluxes are the dominant dispersion mechanisms; in other words, the free-flight flux can be neglected in comparison with the other fluxes in the wall region. The steady-state concentration distribution is not uniform across the channel, primarily because of the opposing actions of the turbophoretic and turbulent diffusion fluxes. <br />The turbulent diffusion flux moves aerosols out of the core and gathers them in the near-wall region, while the turbophoretic flux drives the particles from the near-wall region toward the wall. It was also observed that the turbophoretic flux is more efficient for smaller aerosols than for larger ones. 
However, the opposite was observed for the turbulent diffusion flux: the smaller particles accumulated less in the near-wall region owing to their stronger turbulent diffusion flux, and migrated more to the wall region owing to their stronger turbophoretic flux. We also investigated the cross-channel fluid and particle velocity profiles. The aerosol velocity components lag the fluid velocity in the near-wall region but lead it in the core region, owing to the transverse migration of aerosols across the channel.https://jesphys.ut.ac.ir/article_52427_3ea44ccda7107fc74a5fdaeda37ee613.pdf
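The Lagrangian point-particle tracking summarized above (Stokes drag as the dominant force, one-way coupling, elastic wall collisions) can be sketched in a few lines. This is a toy illustration rather than the study's DNS: the time step, the particle response time, and the zero mean wall-normal fluid velocity are all assumptions made for the sketch.

```python
# Toy sketch of one-way-coupled Lagrangian tracking of a single aerosol in a
# channel with walls at y = -1 and y = +1. The particle relaxes toward the
# local fluid velocity through Stokes drag; wall collisions are elastic.
# tau_p, the particle response time that sets the Stokes number, and the
# assumption u_f = 0 for the wall-normal fluid velocity are illustrative.

def track_particle(y0, v0, tau_p, dt, n_steps):
    """Integrate dv/dt = (u_f - v)/tau_p and dy/dt = v with explicit Euler."""
    y, v = y0, v0
    for _ in range(n_steps):
        u_f = 0.0                      # mean wall-normal fluid velocity (assumed)
        v += dt * (u_f - v) / tau_p    # Stokes drag relaxes v toward u_f
        y += dt * v
        if abs(y) > 1.0:               # elastic collision with the nearer wall
            y = (2.0 - abs(y)) if y > 0 else -(2.0 - abs(y))
            v = -v
    return y, v
```

In the actual DNS, `u_f` would be sampled from the resolved velocity field at the particle position; a particle with larger `tau_p` (higher Stokes number) responds more sluggishly to the fluid fluctuations, which is what makes the turbophoretic drift Stokes-number dependent.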