Institute of Geophysics, University of Tehran. Journal of the Earth and Space Physics, ISSN 2538-371X, Vol. 42, No. 2, 2016-08-22.
Two-dimensional Rayleigh wave group velocity tomography in the northwestern part of the Iranian Plateau. Pages 221-232, article 55699, DOI 10.22059/jesphys.2016.55699. Hoshmand Zandi. Language: FA. Journal article, 2015-01-06.
Summary<br />Describing the elastic, or velocity, structure of the Earth has long been a goal of seismologists. For the first few decades of seismological research, the investigation of velocity structure was restricted to one-dimensional models of the solid Earth and of various regions within it. Seismologists now obtain three-dimensional velocity models and are working to resolve ever finer features. Knowledge of the seismic velocity structure of the crust and upper mantle is important for several reasons: accurate location of earthquakes, determination of the composition and origin of the Earth's outer layers, improved discrimination of nuclear explosions from earthquakes, interpretation of large-scale tectonics, and reliable assessment of earthquake hazard. The Iranian part of the Alpine-Himalayan collision zone consists of an assemblage of lithospheric blocks and features a complex tectonic setting resulting from the collision and convergence of the Arabian plate with Eurasia; investigation of the structure of the lithosphere and asthenosphere of the Iranian plateau is therefore of great interest. <br />Northwestern Iran is affected by significant seismic activity concentrated along the North Tabriz Fault, where more than five successive destructive earthquakes have occurred in recent centuries. The region is particularly rich in geological features, and most of it lies in a volcanic arc zone of Cenozoic age, including the Quaternary.
Some of the main geological features of the northwestern Iranian plateau include the Sahand volcano, Lake Urmia, salt deposits, travertine deposits, springs, limestone caves, tectonic structures and Cenozoic vertebrate fossils. The Sahand and Sabalan peaks are the most prominent geological and topographic features of the region. <br />The aim of this study is to obtain two-dimensional tomographic maps of Rayleigh wave group velocity for the northwestern part of the Iranian plateau. To this end, local earthquake data recorded during 2006-2013 at 10 broadband stations of the International Institute of Earthquake Engineering and Seismology (IIEES) were used. First, fundamental-mode Rayleigh wave dispersion curves were estimated with the single-station method: after preliminary corrections, the Rayleigh wave group velocity for each source-station path was estimated using frequency-time analysis (FTAN). The group velocity dispersion curves were then inverted with a 2D linear inversion procedure to obtain tomographic maps for periods of 2-50 s, each discretized on a grid of 0.5° latitude by 0.5° longitude. At a period of 5 s, the results show a low-velocity anomaly beneath the Sabalan volcano, whereas a high-velocity anomaly is observed beneath the Sahand volcano. At 10 s the pattern differs: a high-velocity anomaly appears beneath Sabalan, whereas low-velocity anomalies are observed beneath Sahand and along the Urumieh-Dokhtar Magmatic Arc. At 20 s a low-velocity anomaly is observed over most of the study area. At 40 s the results change again, with a low-velocity anomaly in the southern part and a high-velocity anomaly in the northern part.
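The single-station FTAN measurement described in this abstract, narrow-band Gaussian filtering of the seismogram followed by picking the envelope peak to get the group arrival time, can be sketched roughly as follows; the filter width `alpha` and the synthetic test values are illustrative assumptions, not parameters from the paper:

```python
import numpy as np

def envelope(x):
    """Envelope via the analytic signal (FFT-based Hilbert transform)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.abs(np.fft.ifft(X * h))

def ftan_group_velocity(trace, dt, distance_km, periods, alpha=20.0):
    """Group velocity vs. period by multiple-filter (FTAN-style) analysis:
    zero-phase narrow-band Gaussian filtering, then envelope-peak picking."""
    n = len(trace)
    f = np.abs(np.fft.fftfreq(n, dt))
    X = np.fft.fft(trace)
    t = np.arange(n) * dt
    out = []
    for T in periods:
        fc = 1.0 / T
        g = np.exp(-alpha * ((f - fc) / fc) ** 2)    # Gaussian centred on fc
        filtered = np.fft.ifft(X * g).real           # zero-phase filtered trace
        t_group = t[np.argmax(envelope(filtered))]   # envelope peak = group arrival
        out.append(distance_km / t_group)            # U = path length / group time
    return np.array(out)
```

For a synthetic nondispersive pulse travelling 100 km at 3 km/s, the sketch recovers a group velocity close to 3 km/s at the carrier period.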
https://jesphys.ut.ac.ir/article_55699_338640f5ad3f429489b25abb11d69a04.pdf
Institute of Geophysics, University of Tehran. Journal of the Earth and Space Physics, ISSN 2538-371X, Vol. 42, No. 2, 2016-08-22.
Determination of the slippage field of strike-slip faults using inverse solution methods. Pages 233-245, article 55908, DOI 10.22059/jesphys.2016.55908. Language: FA. Journal article, 2015-02-09.
In this paper, a theoretical approach is proposed in which the spatial distribution of the strength of interplate coupling between the two faces of a strike-slip fault is investigated in detail through inverse analysis of synthesized geodetic data.
Synthesized (or available) geodetic data representing free-surface movements are used to solve the inverse problem that yields the slippage vector rates. Analytical approaches to treating faults in crustal deformation analysis involve some limitations. One important limitation is the idealization of a uniform dislocation on a rectangular fault plane in a uniform medium or half-space. In reality, the source is more complex than these models suppose, so only first-order aspects of the source characteristics can be evaluated from a uniform dislocation model. Isotropic and homogeneous material properties are further core assumptions of these methods. The Finite Element Method (FEM), on the other hand, allows easy treatment of complex boundary shapes (the interface zone) and internal variations of material properties. The FEM can simulate the source geometry flexibly and can also account for geological regimes and various layered structures. The standard equations of inverse problems offer a straightforward way of finding the slippage vectors on the two faces of the considered fault. One of the new aspects of the current study is the evaluation of the Green's Operator Matrix (GOM) by means of the FEM, which enables us to overcome the limitations of traditional inverse methods. In other words, the Green's functions are functions not only of the interface geometry but also of other parameters, such as far-field boundary conditions and geological structures (varying material properties), that are not considered in traditional analytical inversion analysis. To implement fault sliding in a continuum-based FEM program, the Split Node Technique (SNT) is applied as a simple and efficient method. Its major advantage is that it does not increase the number of degrees of freedom (DOF), and the global stiffness matrix of the system remains unchanged.
Furthermore, no net forces or moments are induced on the finite element mesh. The method is direct and requires no iteration, unlike other approaches (e.g., contact problem techniques or interface/joint elements). The original SNT formulation for a simple one-dimensional element is extended to 2D and 3D domains in the present research. How are the Green's functions found with the FEM? By applying a unit slippage vector at each DOF of the interface nodes, the corresponding component of the GOM is determined. As in other inversion problems, singularity of the coefficient matrix is the main difficulty, and it emerges particularly when the number of DOFs is large. The numerical procedure does not fail algorithmically, but it returns a set of slippage vectors that is wrong, even though direct substitution back into the original equations yields acceptable free-surface deformations. Singular Value Decomposition (SVD) diagnoses precisely what the problem is, and in some cases it not only diagnoses the problem but also solves it. The approach in the current research is based on kinematic modeling of the seismological problem: we investigate only the fault movement, not the causes of that movement. Both forward and inverse steps are considered to solve the problem completely. The forward step applies slip along the fault faces and determines the displacement at the ground surface; it is performed with the FEM, whose results are compared with analytical ones for verification. In the inverse step, the goal is to recover the fault slip field using the surface displacements obtained in the first step as input data. Using this technique, 2D and 3D models of different types of strike-slip faults are presented in the elastic regime using node splitting.
The final step verifies the inverse solution for all models, from which the coupled zones of the considered faults are determined with acceptable accuracy.
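The SVD remedy mentioned in the abstract, stabilizing a near-singular system G m = d by discarding tiny singular values, can be illustrated with a generic truncated-SVD solver; the cutoff `rcond` is an assumed illustrative choice, not a value from the study:

```python
import numpy as np

def tsvd_solve(G, d, rcond=1e-3):
    """Solve G m = d by truncated SVD: singular values below
    rcond * s_max are discarded, suppressing the unstable model
    components that make a naive inverse return wrong slip vectors."""
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    keep = s > rcond * s[0]   # s is sorted descending; s[0] is the largest
    # Pseudo-inverse built only from the well-resolved singular directions.
    return Vt[keep].T @ ((U[:, keep].T @ d) / s[keep])
```

On a well-conditioned system this reduces to ordinary least squares; on a nearly rank-deficient one it returns the bounded minimum-norm solution instead of an exploding one.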
https://jesphys.ut.ac.ir/article_55908_6e8f3ff4d48127bd59da15754cd888e6.pdf
Institute of Geophysics, University of Tehran. Journal of the Earth and Space Physics, ISSN 2538-371X, Vol. 42, No. 2, 2016-08-22.
Empirical relationships between magnitude and some faulting parameters for earthquakes in Iran and adjacent regions. Pages 247-262, article 57736, DOI 10.22059/jesphys.2016.57736. Forough Mohammad Amini, Elham Shabani, Noorbakhsh Mirzaei. Language: FA. Journal article, 2015-08-08.
To assess seismic hazard, it is essential to estimate the potential seismicity, particularly the size of the largest earthquake that can be associated with a distinct fault or earthquake source. One method of estimating earthquake magnitude is the use of empirical relationships between magnitude and fault parameters. Fault parameters for earthquakes from the early instrumental period to 2014 are compiled to develop a series of empirical relationships between M_w and M_s and surface rupture length, horizontal displacement and vertical displacement.
The objective of this study is to provide accurate information about past earthquakes, together with a comprehensive and thorough catalog of recent earthquakes, in order to derive precise empirical relationships between magnitude and earthquake fault parameters. Earthquakes with magnitudes M_w or M_s ≥ 5.5 were selected. First, relationships between magnitudes and fault parameters were obtained for the whole database; the database was then separated by slip type (strike-slip, normal and thrust) and the relations were derived again. Three regression models were employed: standard least-squares regression (SR), inverted standard least-squares regression (ISR) and orthogonal regression (OR). In the SR and ISR methods no error is assumed for the independent variable, whereas this error is taken into account in the OR method. OR minimizes the squares of the orthogonal distances to the best-fit line, SR minimizes the squares of the vertical offsets, and ISR minimizes the squares of the horizontal offsets; roughly equal uncertainties in the two variables are assumed in the OR method. According to the results, the best relationships of M_w and M_s with surface rupture length were established, with correlation coefficients of 0.87 and 0.86, respectively. The relationships of M_s with maximum horizontal and vertical displacement (correlation coefficients 0.63 and 0.62, respectively) are considerably better than those of M_w with maximum horizontal and vertical displacement (correlation coefficients 0.59 and 0.59).
In addition, the relationships between magnitude (M_w or M_s) and maximum horizontal displacement fit better than those between magnitude and maximum vertical displacement. When the database is separated by slip type, the best fits between M_w or M_s and surface rupture length are again obtained, with correlation coefficients of 0.87 and 0.86, respectively, and the observations made above about horizontal and vertical displacements remain valid. For the direct relations, the SR and OR regression methods perform considerably better, with smaller errors, than the ISR method; for the inverse relations, the ISR method estimates the coefficients with the lowest error. Overall, the relationships established in this study fit Iran and its adjacent regions better than the global relationships. For example, for the relationship between M_s and surface rupture length, the global relation first overestimates and then underestimates up to M = 7.2 with respect to the OR regression. The obtained relations were also compared simultaneously with two global relations for the M_s-surface rupture length relationship; both global relations first overestimate and then underestimate up to M = 6.5 in comparison with the other regression methods.
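The three regression models compared in the abstract differ only in which offsets they minimize. A minimal sketch for a line y = a + b x, using the closed-form orthogonal-regression slope for equal error variances, might look like this; the variable names are illustrative:

```python
import numpy as np

def regressions(x, y):
    """Fit y = a + b*x three ways: standard least squares (SR, vertical
    offsets), inverted least squares (ISR, horizontal offsets), and
    orthogonal regression (OR, perpendicular offsets, equal errors)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xm, ym = x.mean(), y.mean()
    sxx = ((x - xm) ** 2).sum()
    syy = ((y - ym) ** 2).sum()
    sxy = ((x - xm) * (y - ym)).sum()
    b_sr = sxy / sxx     # minimise squared vertical offsets
    b_isr = syy / sxy    # minimise squared horizontal offsets
    # OR slope: positive root of the orthogonal-distance normal equation
    b_or = (syy - sxx + np.sqrt((syy - sxx) ** 2 + 4 * sxy ** 2)) / (2 * sxy)
    # Each line passes through the centroid, fixing the intercept.
    return {name: (ym - b * xm, b)
            for name, b in (("SR", b_sr), ("ISR", b_isr), ("OR", b_or))}
```

On noisy data with errors in both variables the SR slope is biased low and the ISR slope high, with OR in between, which is why the choice of model matters for magnitude-rupture relations.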
https://jesphys.ut.ac.ir/article_57736_33da4e628612de97ab2d8ddfc2b6e31d.pdf
Institute of Geophysics, University of Tehran. Journal of the Earth and Space Physics, ISSN 2538-371X, Vol. 42, No. 2, 2016-08-22.
ML velocity tomography in the Iranian Plateau. Pages 263-279, article 57731, DOI 10.22059/jesphys.2016.57731. Mehdi Maheri Peyrov, Madjid Abbasi. Language: FA. Journal article, 2015-09-06.
We use ML shear wave velocities to derive a high-resolution 2D ML shear wave velocity map of the Iranian Plateau. ML amplitudes and arrival times are routinely measured for the calculation of local magnitude. The ML shear wave velocity is very sensitive to lateral changes of crustal thickness and switches between the velocities of Lg and Sn waves: an Lg wave dies out as soon as it encounters a sudden crustal change that favors formation of the mantle Sn wave. The collected database consists of 56152 ML velocities belonging to 2943 precisely relocated events that occurred between 1996 and 2012. The arrival times of ML amplitudes were read from waveforms of permanent and temporary networks in Iran. Using the arrival time of an ML amplitude and its ray length, we calculate the average shear wave velocity for each ray. The selected events form 63 clusters with epicentral location uncertainties of 5 km or less. The cluster approach adopted in this work allows us to easily calculate an empirical velocity error for each summary ray connecting a given station to the corresponding cluster. It also drastically reduces the initial 56152 rays to just 3107 summary rays, significantly reducing the computation time of the seismic tomography. Except for the Makran region, the summary rays provide good coverage of most of Iran.
Using a constrained, direct, damped, weighted least-squares inversion scheme, we inverted the ML velocities for a 2D ML shear wave velocity map of Iran along with its cluster and station correction terms. In the tomography, the velocity of each cell was constrained according to the azimuthal coverage of the hitting rays, and the input average velocity of each ML ray was weighted according to its empirical reading spread. The computed ML shear velocity varies mostly between 2.9 and 3.6 km/s, suggesting that the majority of the rays are indeed Lg rays. The map shows a general similarity to previous Pn velocity maps, indicating that the ML shear wave velocity is strongly affected by lateral changes of crustal thickness and upper mantle velocity. Our results show that the Caspian Basin and the Zagros are Lg-blocking regions. We speculate that the blockage of Lg waves in the Zagros is related to strong lateral changes of crustal thickness caused by orogenic processes. We also note that the shear wave velocity boundary between the Zagros and Central Iran deviates considerably from the Zagros suture line, indicating partial underthrusting of the cold Arabian plate beneath Central Iran. The Lg blockage in the South Caspian Basin is related to its postulated oceanic-type crust and/or a strong lateral change in its crustal thickness. The area east of the Caspian Sea shows high velocities like the sea's interior, implying that this low plain is underlain by either an oceanic-type crust or a transitional crust with large lateral variations of crustal thickness. The ML velocity map also shows velocities in the Lg range for the Lut block, implying a continental nature for this poorly known block. The Alborz, most of Central Iran and especially northwestern Iran show rather low Lg velocities, suggesting a warm continental crust.
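The per-ray velocity measurement and the station-cluster summary-ray reduction described above can be sketched generically as follows; the tuple layout and station codes are assumptions for illustration, not the authors' data format:

```python
import numpy as np
from collections import defaultdict

def summary_rays(picks):
    """Collapse individual ML rays into station-cluster summary rays.
    Each pick is (station, cluster, ray_length_km, travel_time_s); the
    summary ray carries the mean velocity, its empirical spread (the
    per-ray scatter within the group), and the number of rays merged."""
    groups = defaultdict(list)
    for sta, clu, length_km, t_s in picks:
        groups[(sta, clu)].append(length_km / t_s)   # per-ray average velocity
    return {key: (np.mean(v), np.std(v), len(v)) for key, v in groups.items()}
```

Grouping rays this way is what supplies both the data reduction (tens of thousands of rays down to a few thousand summary rays) and an empirical error estimate per summary ray.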
https://jesphys.ut.ac.ir/article_57731_09d93fbdec213f97590fb4bd3090ed46.pdf
Institute of Geophysics, University of Tehran. Journal of the Earth and Space Physics, ISSN 2538-371X, Vol. 42, No. 2, 2016-08-22.
Study of expert systems in predicting earthquakes. Pages 281-292, article 56030, DOI 10.22059/jesphys.2016.56030. Zinat Mehdidoust Jalali; Asadollah Shahbahrami (Associate Professor, Department of Computer Engineering, Faculty of Engineering, University of Guilan, Rasht, Iran). Language: FA. Journal article, 2015-09-18.
A crisis arising from an incident or accident occurs suddenly and unexpectedly, so urgent attention is needed for proper decision making. Despite technological advances, the suffering caused by natural disasters such as earthquakes, floods, avalanches, hurricanes, volcanoes and fires remains considerable. Mining activities, likewise, have always been associated with hazards known as mine risks.
Seismic hazard in underground mines is one of the threats to human life. Among natural hazards, seismic hazard is one of the hardest to detect and predict, and in this respect it is comparable to an earthquake. Advanced seismic and seismoacoustic monitoring systems are used for prediction, but the accuracy of the information they produce is not yet satisfactory. The complexity of seismic processes and the disproportion between the many low-energy events and the few high-energy phenomena (e.g. greater than 10^4 J) make statistical techniques insufficient for predicting seismic hazard, so the search for better prediction methods has become imperative. Clustering techniques and artificial neural networks have been applied to seismic hazard data for prediction, and in most applications the results are reported as either a "dangerous" or a "safe" state. The unbalanced distribution of positive samples (dangerous states) and negative samples (safe states) is a serious problem in seismic hazard forecasting: the methods currently in use cannot achieve both adequate precision and high sensitivity. A number of factors related to the risk of earthquakes have been proposed; among them, tremors with energy greater than 10^4 J deserve mention. Earthquake prediction can be defined with different methods, but the main goal of all of them is seismic hazard assessment. In some cases, from data on time and date, an increase in seismic activity that could lead to rock failure under pressure (rockburst) is predicted. <br />One cannot rely on a single parameter to decide whether or not an earthquake will occur in a given area or within a specified time window. Several parameters should be studied simultaneously, and an acceptable technique for earthquake prediction must be found; one way to do this is to use an expert system. The newer such systems are, the more capable, accurate and intelligent they are in use. 
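The class-imbalance problem described above can be made concrete with a toy calculation (the 5% hazard rate and sample size are invented for illustration): a trivial classifier that always answers "safe" scores high accuracy while detecting no dangerous states at all.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical labels: 1 = dangerous, 0 = safe; hazards are rare (about 5%).
y_true = (rng.random(1000) < 0.05).astype(int)
y_naive = np.zeros_like(y_true)            # trivial model: always predict "safe"

accuracy = (y_naive == y_true).mean()       # looks impressive on imbalanced data
sensitivity = y_naive[y_true == 1].mean()   # fraction of hazards actually caught

print(f"accuracy={accuracy:.2f} sensitivity={sensitivity:.2f}")
```

This is why the abstract stresses sensitivity alongside precision: accuracy alone rewards models that ignore the rare dangerous class.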
For example, a hydrological analysis that considers only changes in water ions ignores other parameters, such as fault behavior and changes in sea level, when predicting earthquakes. On the other hand, not all expert systems for earthquake prediction rely on precursors; some predict future earthquakes from the time, location and depth of previous ones. Studies have shown that decision trees and artificial neural networks predict large earthquakes with low accuracy. In this article, large earthquakes were predicted by a support vector machine based on particle swarm optimization with higher accuracy than by several other expert systems.<br />Expert systems are commonly used in earthquake prediction. These systems use various parameters such as fault behavior, radon concentration, energy, pulses and the number of seismic bumps; the likelihood of an earthquake can be assessed by monitoring these parameters. The accuracy of earthquake prediction by expert systems is relatively higher than that of non-expert-system methods. In this paper, the accuracy and the kinds of data used by different expert systems for earthquake prediction were studied. In addition, expert systems based on support vector machines, decision trees, neural networks, support vector machines optimized by particle swarm optimization, Bayesian classifiers and MLP networks were implemented in RapidMiner. The results show that the expert system based on a support vector machine optimized by the particle swarm algorithm achieves better prediction accuracy than the expert systems based on neural networks, plain support vector machines, Bayesian classifiers, decision trees and MLP networks.https://jesphys.ut.ac.ir/article_56030_e74295b5e39020cc9402faf6165a6d1d.pdfInstitute of Geophysics, University of TehranJournal of the Earth and Space Physics2538-371X42220160822Seismic imaging of complex structures by integrating pre-stack time migration and surface stacking methodsSeismic imaging of complex structures by integrating pre-stack time migration and surface stacking methods2933085773710.22059/jesphys.2016.57737FAMehrdadSoleimani MonfaredAliKhalilzadehPost graduate of M.Sc. in Petroleum Exploration/Shahrood, Shahrood.Journal Article20150105The conventional approach to seismic data analysis consists of two main steps: estimating seismic velocities (the subsurface macro-model) and seismic imaging (mapping the reflected seismic energy to the reflector positions). The aim, and the major challenge, of seismic data analysis is the construction of the best undistorted image. This challenge grows as geometrical complexity and lateral heterogeneity increase. Conventional reflection seismic processing methods cannot solve the imaging problem in complex geological structures, because most of them depend strongly on the seismic velocity model, and obtaining a velocity model as accurate as possible is always a controversial task. For this reason, imaging methods that do not rely on explicit knowledge of the subsurface velocity model are employed, and much recent research on seismic imaging is oriented toward velocity-independent algorithms. 
The first idea of velocity-independent time-domain seismic imaging belongs to authors who considered decomposing seismic data into a range of local slopes. Later, methods based on inversion of the full waveform from the data were introduced. However, these methods are not the ultimate solution, because a velocity model is still needed for the final depth imaging. Thus, some methods have been introduced that take advantage of pre-stack migration with iterative velocity-model updating while using imaging methods that do not fully rely on the velocity model. These integration methods aim to combine a new updating formula for the first part of the processing chain (estimating seismic velocities) with new velocity-independent methods for the second part (seismic imaging). <br />Time migration is a common, fast and robust process for obtaining a seismic image. It is considered adequate for areas with mild lateral velocity variation, and it produces images in specific time-migration coordinates (x0, t0). However, even mild lateral velocity variations can significantly distort subsurface structures on time-migrated images, mainly because reflected and diffracted energies are placed in wrong positions. If this displacement could be largely removed by some operator, e.g. a Kirchhoff migration operator, in each offset section, another surface operator could be used to stack the offset sections and enhance the final migrated section. In this study, we combined pre-stack time imaging with the Kirchhoff migration algorithm and the common reflection surface (CRS) stack method. The CRS stack is among the velocity-independent methods used for imaging complex structures. The CRS operator gathers any reflected and/or diffracted energy that cannot be gathered by the conventional Kirchhoff summation operator. 
Thus, if the geometrical distortions are corrected by the Kirchhoff operator and the reflected energies are placed in their true locations, the CRS operator collects all the related diffracted energy from a depth point and coherently stacks these energies to image that point. The following equation is the Kirchhoff operator: <br /> <br />which defines the wave-field parameter ΔP at each (x, t) point. To integrate the two methods, the CRS operator is created for each (x, t) point and inserted into the above equation. These equations are: <br /> and <br /> . <br />where the angles are the take-off and emergence angles of the central ray, the Ki are wave-front curvatures, and aCO and bCO are related to x and t, respectively. A diffraction curve is obtained for each point, and the CRS surface is created for that point. <br />To investigate the efficiency of this method, the algorithm was applied to a 2D seismic dataset from western Iran containing complex geometry with mild to strong lateral velocity changes. After the pre-processing steps, a smooth initial velocity model was derived for ray tracing, and kinematic ray tracing was used to define the common reflection surface operator. The data were then processed by Kirchhoff migration. In the next step, the velocity model was corrected for residual moveout, and finally the pre-stack data were migrated again with the corrected velocity model. This section should be compared with the result of the PSTM and CRS integration method. The new migrated section shows the faulting and bending of the reflectors better. The large thickness of the Gachsaran formation in the region and the strong lateral velocity changes in different parts of the section cause poor illumination of the structures beneath the Gachsaran. The new algorithm, however, gathers as much of the reflected and/or diffracted energy from those structures as possible. 
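As a rough illustration of the Kirchhoff summation idea (not the paper's actual operator, whose equations appear only in the full text), a constant-velocity diffraction stack on a synthetic zero-offset section collapses a point diffractor's hyperbola back to its apex; the velocity, grid, and traveltime formula t(x) = sqrt(t0^2 + (2(x - x0)/v)^2) are assumptions of this sketch:

```python
import numpy as np

nx, nt, dt, dx, v = 101, 300, 0.004, 10.0, 2000.0
x = np.arange(nx) * dx
x0, t0 = 500.0, 0.4            # diffractor position (m) and zero-offset time (s)

# Synthetic zero-offset section: one point diffractor -> hyperbolic event.
data = np.zeros((nx, nt))
t_hyp = np.sqrt(t0**2 + (2.0 * (x - x0) / v) ** 2)
idx = np.round(t_hyp / dt).astype(int)
ok = idx < nt
data[np.arange(nx)[ok], idx[ok]] = 1.0

# Diffraction-stack migration: for each image point, sum along its hyperbola.
image = np.zeros((nx, nt))
for ix in range(nx):
    for iz in range(nt):
        th = np.sqrt((iz * dt) ** 2 + (2.0 * (x - x[ix]) / v) ** 2)
        k = np.round(th / dt).astype(int)
        good = k < nt
        image[ix, iz] = data[np.arange(nx)[good], k[good]].sum()

ix_max, iz_max = np.unravel_index(np.argmax(image), image.shape)
print(x[ix_max], iz_max * dt)   # the stack should peak near (500.0, 0.4)
```

Only the apex hyperbola passes through all recorded samples, so the energy stacks coherently there and smears elsewhere, which is the essence of the summation operator discussed above.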
Therefore, clearer structures and reflectors are observed in the section and the general quality of the data is enhanced. Finally, it can be concluded that applying the proposed integration method gives a high-quality image by increasing the signal-to-noise ratio and solving the problem of conflicting dips.https://jesphys.ut.ac.ir/article_57737_95e14c4d6646e39ff75b3be5691b12d8.pdfInstitute of Geophysics, University of TehranJournal of the Earth and Space Physics2538-371X42220160822Estimation of Vs profiling by joint inversion of Refraction Microtremor and Seismic Refraction data using GA multi-objective optimization approachEstimation of Vs profiling by joint inversion of Refraction Microtremor and Seismic Refraction data using GA multi-objective optimization approach3093235501210.22059/jesphys.2016.55012FARashedPoormirzaeeTabriz-IranAhmadZareanRasoulHamidzade MoghadamJournal Article20150124It is very important to study and simulate the near-surface S-wave structure (alluvial deposits in particular) owing to its direct relevance to urban facilities in geotechnical and earthquake engineering studies. Thus, in seismic microzonation the first step is to identify the S-wave pattern in the alluvium in order to categorize different parts of a city according to S-wave velocity. Current techniques for estimating shallow shear velocities for earthquake site-response assessment are too costly for use at most construction sites: they require large sources to be effective in noisy urban settings, or specialized independent recorders laid out in an extensive array. Recently, refraction microtremor (ReMi) data have frequently been used for estimating dispersion curves and modeling S-wave velocities, because the ReMi method is fast and cheap. However, inversion is the main problem in processing ReMi data for estimating S-wave velocity. 
With the development of computer science, the emergence of single- and multi-objective optimization techniques, and the inspiration science has drawn from nature, an opportunity has arisen to decrease the non-uniqueness of inversion and find the best possible solution. In this study, joint inversion of microtremor and seismic refraction data using multi-objective genetic algorithm optimization and the Pareto front concept is proposed for estimating S-wave velocity. After programming the multi-objective genetic algorithm in Matlab, its efficiency was investigated on synthetic models and real datasets. The real dataset was obtained from one station in the southern part of Tabriz (near Elgoli Road), an area with Miocene-Pliocene and pyroclastic bedrock. For the real dataset we used refraction microtremor (ReMi) as a passive method to obtain Rayleigh waves, and seismic refraction as an active method to obtain travel times. For ReMi and seismic refraction acquisition, the same layout can easily provide both P-wave travel times and surface-wave dispersion curves if the sampling parameters are designed to satisfy the requirements of both techniques. In the current study, the ReMi and seismic refraction surveys were performed using an OYO 24-channel seismograph and 4.5 Hz and 28 Hz geophones with a receiver spacing of 5 m; for ReMi, unfiltered 17-second records were collected at the study site. Resistivity data were also used as auxiliary information, since the resistivity method can provide information about the bedrock and the water table in the study area. For this purpose, resistivity measurements were carried out with a high-resolution RESECS resistivity-meter system, using a Wenner array with 32 electrodes (2 m unit electrode spacing) to measure a 1-D resistivity imaging profile. To evaluate the proposed joint inversion algorithm, the results were compared with a single inversion of the ReMi data by the particle swarm optimization (PSO) algorithm. 
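The Pareto-front concept used in the multi-objective inversion can be sketched as follows; the two misfit columns (ReMi dispersion misfit and refraction travel-time misfit) hold made-up numbers, and a model is non-dominated when no other model is at least as good on both objectives and strictly better on one:

```python
import numpy as np

def pareto_front(costs):
    """Return a boolean mask of non-dominated rows (both objectives minimized)."""
    n = costs.shape[0]
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        for j in range(n):
            if i != j and np.all(costs[j] <= costs[i]) and np.any(costs[j] < costs[i]):
                keep[i] = False   # model i is dominated by model j
                break
    return keep

# Columns: (ReMi dispersion misfit, refraction travel-time misfit) per model.
costs = np.array([[0.9, 0.2],
                  [0.5, 0.5],
                  [0.2, 0.9],
                  [0.8, 0.8],    # dominated by (0.5, 0.5)
                  [0.6, 0.3]])
mask = pareto_front(costs)
print(mask)                      # [ True  True  True False  True]
```

A genetic algorithm keeps and evolves the non-dominated set; a well-balanced front, as the abstract notes, is itself a quality check on the joint inversion.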
Using the joint inversion algorithm, a three-layer subsurface model was found, in which the first layer has a velocity of 321 m/s and a thickness of 5.8 m, the second layer has a velocity of 365 m/s and a thickness of 4.6 m, and the last layer has a velocity of 547 m/s. The inversion results for both the synthetic and the real dataset proved the reliability of the proposed method as a powerful joint inversion technique in comparison to current methods. Moreover, the quality of the inversion procedure can easily be assessed through the Pareto concept, because the symmetry of the Pareto front depends strongly on the accuracy of the estimates. By using the joint inversion algorithm, a more accurate Vs structure can be achieved and the non-uniqueness of Rayleigh wave inversion reduced.https://jesphys.ut.ac.ir/article_55012_71c0a03eadd9008299f175960b5e87d4.pdfInstitute of Geophysics, University of TehranJournal of the Earth and Space Physics2538-371X42220160822Modeling of Micro-gravity data with Linear Regression Method for Inclined Cylinder
Case study: A shaft in siahbishe regionModeling of Micro-gravity data with Linear Regression Method for Inclined Cylinder
Case study: A shaft in siahbishe region3253355772910.22059/jesphys.2016.57729FAMortezaErfanian Norozzadehuniversity of tehranVahidEbrahim Zadeh ArdestaniJournal Article20150207In this paper, a new formula is applied for the calculation of gravity anomalies from a cylinder model representing a geological body. Compared to conventional methods, this new development allows the cylinder to be freely oriented in space. Thirty-eight gravity forward models are produced, and anomaly attributes are calculated for each of them. A linear relationship between the attributes and the source parameters is then established as a new formula. These attributes are quantities calculated from the coordinates of particular parts of the residual gravity anomaly curve. Using this linear relationship, source parameters can then be estimated from the gravity anomaly attributes. The linear relationship is obtained by the least-squares method, minimizing the differences. Consequently, compared to previous methods, this development accounts for multiple factors that affect the geophysical observations (some neglected in previous studies), considering more variables such as the dip angle, strike direction, size, depth to the top and density of the cylinder. These parameters are important for determining the geometry of the subsurface geological body. Based on a series of forward models computed with the new formula, a multiple linear regression system has been developed, which relates the variations of the residual gravity anomaly to variations of the source parameters. Based on previous studies, we suppose that both the shape and the amplitude of the gravity anomaly change with the attitude of the cylinder. We use multiple linear regression to examine whether there is a linear relationship between each parameter of the cylinder and a series of attributes of the gravity anomaly. 
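The multiple-linear-regression step, relating anomaly attributes to a source parameter, can be sketched with synthetic stand-in data (the 38 models match the count above, but the attribute values, coefficients, and intercept are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for the forward-model catalogue: each row holds seven
# anomaly attributes; the target is one source parameter (e.g. dip angle).
n_models = 38
X = rng.random((n_models, 7))
true_coef = np.array([1.5, -0.7, 0.3, 2.0, -1.2, 0.5, 0.9])
y = X @ true_coef + 10.0           # exact linear relation with intercept 10

# Least-squares fit of parameter = attributes @ c + c0.
A = np.hstack([X, np.ones((n_models, 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print(np.round(coef, 3))           # recovers true_coef and the intercept
```

With one such fitted relation per source parameter, a new observed anomaly's attributes can be plugged in to estimate all cylinder parameters at once, which is the role the regression system plays in the survey workflow.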
In this algorithm, we in fact assess the variability of a residual gravity anomaly as a function of the source attitude. In a previous study, the seven most significant parameters of a cylinder-like body influencing the shape and intensity of the gravity anomaly were identified. We build a linear regression system containing six linear relationships between the six most significant parameters of a cylinder-like body and seven attributes of the gravity anomaly. These equations allow the user to estimate the multiple parameters of an elongated geological body simultaneously under the constraint of the gravity observations, and integrating such calculations into gravity surveys helps in making drilling decisions. In this article, a shaft at the Siah Bisheh dam is considered as a case study to verify the proposed formula. The case study was part of the Siah Bishe pumped storage project, located in the Alborz mountain range, 125 km north of Tehran; the site can be reached via the main Chalus road connecting Tehran with the Caspian Sea. The project area lies in the southern part of the Paleozoic-Mesozoic Central Range of the alpine Alborz mountain chain. The rock sequences in the area consist of massive limestones, detrital series (sandstones, shales) and volcanic rocks of Permian formations, Triassic dolomites, and Jurassic formations with black shales and sandstones. Several tectonic faults cross the project alignment. In this area, the main purpose of the gravity survey was the exploration of collapse zones. The gravity data used in this study come from the gravity department of the Institute of Geophysics in Iran. The spatial resolution of the original gravity observations was 15 m between stations. 
Moreover, its engineering and geological parameters have been calculated. Collapsed zone in abovementioned shaft has been determined and illustrated using microgravity data. We applied the new method to residual gravity anomaly curve of collapsed zone. Finally, this zone has also been mathematically modeled using inclined cylinder.In this paper, a new formula is applied for the calculation of gravity anomalies from a cylinder model representing a geological body. Compared to conventional methods, this new development allows the cylinder to be freely oriented in space. 38 gravity forward models are produced each of which anomaly attributes are calculated. Then a linear relationship is established between attributes and source parameters as a new formula. These attributes are relations which calculated from coordinates of special parts of residual gravity anomaly curve. Using this linear relationship, source parameters can be then estimated by gravity anomaly attributes. Linear relationship is obtained by least-square method and minimizing differences. Consequently, Compared to previous methods, this new development considers multiple factors that have impact on geophysical observations (some neglected in previous studies) and more variables are considered such as dip angle, strike direction, size, depth to the top, and density of the cylinder. These parameters are important for determination of the geometric of subsurface geological body. Based on a series of forward modeling using the new formula, a multiple linear regression system has been developed. The multiple linear regression method relates the variations of residual gravity anomaly which is changed by variations source parameters. Based on previous studies, we suppose that the shape as well the amplitude of the gravity anomaly will change with the changes of cylinder occurrence. 
https://jesphys.ut.ac.ir/article_57729_8834659da613a2e6476bb455293f558d.pdfInstitute of Geophysics, University of TehranJournal of the Earth and Space Physics2538-371X42220160822Gravity data inversion using L1-norm stabilizerGravity data inversion using L1-norm stabilizer3373485481610.22059/jesphys.2016.54816FASaeedVatankhahJournal Article20150218In this paper the inversion of gravity data using the L1-norm stabilizer is considered. Inversion is an important step in the interpretation of data. In gravity data inversion, the goal is to estimate the density and geometry of the unknown subsurface model from a set of observations measured on the surface. Commonly, rectangular prisms are used to model the subsurface under the survey area. The unknown density contrasts within each prism are the parameters to be estimated. The inversion of gravity data is an example of an underdetermined and ill-posed problem, i.e. the solution can be non-unique and unstable. Thus, in order to find an acceptable solution, regularization should be imposed. The solution is usually obtained by minimizing a global objective function consisting of two terms: the data misfit and the regularization term. The data misfit measures how well an obtained model can reproduce the observed data. Usually, the noise in gravity data is assumed to be Gaussian, therefore an L2-norm measure of the error between observed and predicted data is well suited for the data misfit. 
There are several choices for a stabilizer, depending on the type of features one wants to see in the inverted model. A typical choice is an L2-norm of a low-order differential operator applied to the model, into which a priori information and depth weighting can also be incorporated (Li and Oldenburg, 1996). In this case the objective function is quadratic, so minimization of the function results in a linear system to be solved. However, the models recovered in this way are characterized by smooth features, which are not always consistent with the real geological structures. There are situations in which the sources are localized and separated by sharp, distinct interfaces. To deal with this problem, during the last decades researchers have proposed several types of stabilizer. Last and Kubik (1983) presented a compactness criterion for gravity inversion that seeks to minimize the area (or volume in 3D) of the causative body. Based on this stabilizer, named the minimum support (MS), Portniaguine and Zhdanov (1999) developed the minimum gradient support (MGS) stabilizer. For both constraints, the regularization term can be written as a weighted L2-type norm of the model. Therefore, the minimization of the objective function can be treated in the same way as the conventional Tikhonov functional. The only difference is that an a priori variable weighting matrix for the model parameters is incorporated in the regularization term. Thus the Iteratively Reweighted Least Squares (IRLS) algorithm is required to solve the problem. Another possibility for the stabilizer is the minimization of the L1-norm of the model or of the gradient of the model; the latter corresponds to total variation regularization. The L1-norm stabilizer allows the occurrence of large elements in the inverted model among mostly small values. Therefore, it can be used to obtain sharp boundaries and blocky features. 
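The quadratic (L2) stabilizer described above reduces to a single linear solve. A minimal sketch on a toy underdetermined system, with a random kernel standing in for the prism sensitivity matrix and no depth weighting:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy underdetermined problem: 10 observations, 40 model cells.
G = rng.normal(size=(10, 40))          # stand-in for the prism kernel
m_true = np.zeros(40)
m_true[15:20] = 1.0                    # compact density-contrast anomaly
d = G @ m_true                         # noise-free synthetic data

# Minimize ||G m - d||^2 + lam^2 ||m||^2  ->  (G^T G + lam^2 I) m = G^T d
lam = 0.1
m_l2 = np.linalg.solve(G.T @ G + lam**2 * np.eye(40), G.T @ d)
```

The recovered model fits the data but is smooth and smeared over many cells, which is exactly the behavior that motivates sharper, L1-type stabilizers.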
Although the L1-norm stabilizer has favorable properties for the reconstruction of sparse models, its numerical implementation in a minimization problem can be difficult because its derivative with respect to an element is not defined at zero. To overcome this difficulty, in this paper the L1-norm stabilizer is approximated by a reweighted L2-norm term. The algorithm is extended to the gravity inverse problem, which needs depth weighting and other a priori information to be included in the objective function. For estimating the regularization parameter, which balances the two terms of the objective function, the Unbiased Predictive Risk Estimator (UPRE) method is used. The solution of the resulting objective functional is found using the Generalized Singular Value Decomposition (GSVD), which also provides for efficient determination of the regularization parameter at each iteration. Simulation using synthetic data of a dipping dike demonstrates that the method is capable of reconstructing a focused image; the boundaries and slope of the reconstructed model are close to those of the original model. The method is applied to gravity data acquired over the Gotvand dam site, in the south-west of Iran. The results show rather good agreement with those obtained from the boreholes.
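The reweighted-L2 approximation of the L1 stabilizer can be sketched with a simple IRLS loop. This is an illustrative scheme in the spirit described above, not the paper's GSVD/UPRE implementation: each |m_i| is approximated by m_i^2 / sqrt(m_i^2 + eps), so every iteration is an ordinary weighted Tikhonov solve.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy sparse problem: 15 observations, 40 cells, 3 nonzero cells.
G = rng.normal(size=(15, 40))
m_true = np.zeros(40)
m_true[18:21] = 1.0
d = G @ m_true

lam, eps = 1e-3, 1e-8
m = np.zeros(40)
for _ in range(50):
    # Diagonal weights 1/sqrt(m_i^2 + eps) rewrite the L1 norm as a
    # weighted L2 norm; eps removes the singularity of |m_i| at zero.
    W = np.diag(1.0 / np.sqrt(m**2 + eps))
    m = np.linalg.solve(G.T @ G + lam * W, G.T @ d)
```

Unlike the plain L2 solution, the iterates concentrate the anomaly into a few cells, giving the blocky, sharp-boundary character the L1 stabilizer is used for.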
https://jesphys.ut.ac.ir/article_54816_7cef661ac7a645cd13f9ab36c325e804.pdfInstitute of Geophysics, University of TehranJournal of the Earth and Space Physics2538-371X42220160822Edge Detection of Gravity Anomalies with Normalized Total Horizontal Derivative (NTHD)Edge Detection of Gravity Anomalies with Normalized Total Horizontal Derivative (NTHD)3493565491710.22059/jesphys.2016.54917FAVahidEntezar SaadatSeyed HaniMotavalli AnbaranJournal Article20150419There are several varieties of edge detection methods. Edge detection and edge enhancement techniques play an essential role in interpreting potential field data. Some of the common methods are: the tilt angle filter (TA), the total horizontal derivative of the tilt angle filter (THDT), the theta filter, the TDX filter, and the hyperbolic tilt angle (HTA). These filters attain their maximum values at the edges of an anomalous mass and their minimum values above the anomalous mass, except for the tilt angle filter, which is positive over the source and passes through zero at or near the edges. Local phase filters (edge enhancement methods) are based on the phase variation of the derivative quantities.<br />The mentioned filters have different advantages, such as flexibility in building new filters, but a universal disadvantage of these methods is that they cannot display large- and small-amplitude edges simultaneously. In this paper the ability of the normalized total horizontal derivative (NTHD) method is shown and compared with the other methods. The NTHD method is based on the ratio of the horizontal derivative to the maximum of the nearby values within an arbitrary window. The maxima of the NTHD method are located on the edges of the causative sources.<br />To determine the ability of the NTHD method, it is applied to an artificial rectangular prism created with Matlab software. 
In order to examine the stability of the method when facing noisy data, Gaussian noise was generated with the randn command in Matlab and added to the anomaly of the artificial rectangular prism, and the NTHD method was then applied to it. To compare the capability of this method with prevalent edge detection methods, a Matlab code was written and the aforementioned edge detection filters were applied to several artificial rectangular bodies at shallow and deep depths. The results showed that almost all filters delineated the edges of shallow anomalies successfully, but when facing deeper anomalies their performance degrades and they cannot detect the edges precisely. The strength of the NTHD method in recognizing source edges is due to the fact that it can make strong- and weak-amplitude edges visible simultaneously and can bring out more details. The edge detection technique (NTHD) was further applied to the Mobrun gravity anomaly, digitized from Grant and West (1965). The gravity data of the Mobrun ore body consist of thirteen profiles. The distance between profiles is 60 meters and, within each profile, the data have a spacing of 30 meters. In order to reduce the noise in these data, we upward-continued the data to a height of 10 meters and then applied the NTHD filter with a 1×1 window. The result of applying the NTHD method to the Mobrun ore body agrees with the established results of exploration borehole data and precisely detects the edges of the anomaly. In order to examine the results of the edge detection, we use the data of boreholes dug along the AB profile. Among the boreholes, BH2 is near the edge of the Mobrun ore body and agrees with the edge detection results of the NTHD method.
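The NTHD idea, the total horizontal derivative divided by its maximum within a moving window, can be sketched as follows. This is a simplified reading of the method; the window size and the synthetic Gaussian anomaly are arbitrary choices for illustration:

```python
import numpy as np

def nthd(g, dx=1.0, dy=1.0, w=5):
    """Normalized total horizontal derivative of a gridded anomaly g.

    THDR = sqrt((dg/dx)^2 + (dg/dy)^2); each value is divided by the
    maximum THDR inside the (2w+1) x (2w+1) window centred on it.
    """
    gy, gx = np.gradient(g, dy, dx)
    thdr = np.hypot(gx, gy)
    out = np.zeros_like(thdr)
    ny, nx = thdr.shape
    for i in range(ny):
        for j in range(nx):
            win = thdr[max(i - w, 0):i + w + 1, max(j - w, 0):j + w + 1]
            m = win.max()
            out[i, j] = thdr[i, j] / m if m > 0 else 0.0
    return out

# Synthetic smooth anomaly standing in for a buried prism response.
y, x = np.mgrid[0:64, 0:64]
g = np.exp(-((x - 32) ** 2 + (y - 32) ** 2) / 120.0)
img = nthd(g)
```

Because each value is normalized by its own neighbourhood rather than by the global maximum, weak edges reach values comparable to strong ones, which is how the filter displays both simultaneously.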
https://jesphys.ut.ac.ir/article_54917_2cb28f362e93b1dc65c77781c5a52066.pdfInstitute of Geophysics, University of TehranJournal of the Earth and Space Physics2538-371X42220160822Interpretation of gravity data of ancient sub-structures in Tepe Hissar, DamghanInterpretation of gravity data of ancient sub-structures in Tepe Hissar, Damghan3573685509810.22059/jesphys.2016.55098FABehzadSarlakAliNejati KalatehJournal Article20150611ABSTRACT <br />The use of geophysical methods before digging can be effective in archaeological explorations. Among them, gravimetry is one of the most widely used methods, owing to its lack of harmful environmental effects. The gravity method is based on the density contrast between the anomalous body and the surrounding country rocks. In archaeological studies, the target is the detection of subsurface structures built in ancient times. 
These structures may now be covered by overburden such as alluvium. A variety of density contrasts can be detected using gravity data; therefore, the density contrast between the walls and chambers can be studied by gravimetry. In this article, to investigate the subsurface structures of walls in the ancient area of Tepe Hissar in Damghan, gravity data and fuzzy filters were used. In geophysical prospecting there are noisy data that must be removed. First, all of the required corrections, for example the instrument drift, free-air and Bouguer slab, latitude and terrain corrections, were applied to the gravity data. <br />In this way the Bouguer gravity anomaly was obtained. Images of the gravity field of the Earth are used worldwide as part of exploration programs for mineral, hydrocarbon, archaeological and other resources. When the data quality permits, a range of high-pass filters, such as downward continuation or vertical derivatives, can be applied to bring out fine detail. Also, in order to separate the residual anomaly from the regional gravity, we used the trend surface method. Local phase filters provide an alternative approach, but conventional phase functions need to be unwrapped to remove phase ambiguity (Fitzgerald et al., 1997). Therefore, detection of the boundaries of chambers or walls and the horizontal location of sources can be obtained from derivative-based filters such as the horizontal gradient magnitude, tilt angle, theta map, Laplacian and hyperbolic tangent; however, these methods typically fail for archaeological purposes due to the high noise content of these datasets. In this paper, first a synthetic model resembling the prospecting area was prepared, combining several chambers and walls, with the chambers or rooms filled with alluvium and soil. Based on these filters, we can detect edges where the density changes sharply and the density contrast is high or very low. 
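The regional/residual separation by a trend surface mentioned above can be sketched as a first-order polynomial fit. The grid, the trend coefficients and the local bump below are made up purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

# Gridded Bouguer anomaly = regional (first-order trend) + residual bump.
ny, nx = 50, 50
y, x = np.mgrid[0:ny, 0:nx]
regional = 0.03 * x - 0.02 * y + 5.0
residual_true = 0.5 * np.exp(-((x - 25) ** 2 + (y - 25) ** 2) / 40.0)
bouguer = regional + residual_true

# Fit a first-order polynomial surface a + b*x + c*y by least squares
# and subtract it to obtain the residual anomaly.
Acols = np.column_stack([np.ones(x.size), x.ravel(), y.ravel()])
coef, *_ = np.linalg.lstsq(Acols, bouguer.ravel(), rcond=None)
residual = bouguer - (Acols @ coef).reshape(ny, nx)
```

Removing only a plane leaves the short-wavelength part of the field intact, so the position of the local anomaly is preserved in the residual.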
<br />One of the conventional phase filters used for edge detection is the tilt angle (Miller and Singh, 1994). The gradient tilt angle has some interesting properties. As a dimensionless ratio it responds equally well to shallow and deep sources and to a large dynamic range of amplitudes for sources at the same level. Because the tilt angle is based on a ratio of derivatives, it enhances large- and small-amplitude anomalies well. The results show the tilt angle of the synthetic and real data. The tilt angle is effective in balancing the amplitudes of the different anomalies, but it is not primarily an edge-detection filter. The theta map uses the analytic signal amplitude to normalize the total horizontal derivative (Wijns et al., 2005). The amplitude of the response of this filter from deeper and shallow source bodies is similar, although the response from the deeper bodies is rather diffuse. The hyperbolic tilt angle (HTA) filter uses the real part of the hyperbolic tangent function in the tilt angle calculation and achieves better delineation of the edges of the anomalous body than the other filters we use here. The maximum value of the HTA gives the location of the body edges (Cooper and Cowan, 2006). <br />Edge enhancement in potential-field data helps geological and archaeological interpretation. There are many methods for enhancing edges, most of which are high-pass filters based on the horizontal or vertical derivatives of the field. The Normalized Derivatives Ratio (NDR), a new edge-detection filter, is based on ratios of the derivatives of the field orthogonal to the horizontal. The NDR is demonstrated using synthetic and real gravity data from an archaeological site, Tepe Hissar. 
Compared with other filters, the NDR filter produces more detailed results; the separation and detection of walls and chambers show high agreement with the results of the excavations carried out.
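The tilt angle discussed above, arctan of the vertical derivative over the total horizontal derivative, can be sketched with the vertical derivative obtained spectrally (multiplication by |k| in the wavenumber domain, a standard potential-field identity). The Gaussian input is an arbitrary stand-in for a real anomaly:

```python
import numpy as np

def tilt_angle(g, dx=1.0):
    """Tilt angle arctan2(dg/dz, |grad_h g|) of a gridded anomaly g.

    The vertical derivative is computed in the wavenumber domain,
    where d/dz corresponds to multiplication by |k|.
    """
    ny, nx = g.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, dx)
    KX, KY = np.meshgrid(kx, ky)
    dz = np.real(np.fft.ifft2(np.fft.fft2(g) * np.hypot(KX, KY)))
    gy, gx = np.gradient(g, dx, dx)
    return np.arctan2(dz, np.hypot(gx, gy))

# Smooth positive anomaly: tilt is near +pi/2 over the source,
# negative far from it, and crosses zero near the edges.
y, x = np.mgrid[0:64, 0:64]
g = np.exp(-((x - 32) ** 2 + (y - 32) ** 2) / 120.0)
ta = tilt_angle(g)
```

Being an arctangent of a ratio, the output is confined to [-pi/2, pi/2] regardless of anomaly amplitude, which is what balances responses from shallow and deep sources.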
https://jesphys.ut.ac.ir/article_55098_a2e50e8cee738f87d66ebfbd80a19356.pdfInstitute of Geophysics, University of TehranJournal of the Earth and Space Physics2538-371X42220160822Comparison of different methods for the estimation of depth, location and source-type of magnetic and gravity fieldsComparison of different methods for the estimation of depth, location and source-type of magnetic and gravity fields3693915608210.22059/jesphys.2016.56082FAJamaledinBaniamerianPhD Student Of Geophysics-Electromagnetism,
Institute of Geophysics,
University of Tehran,BehroozOskooiInstitute of Geophysics- University of Tehran0000-0003-3065-194XMaurizioFediUniversity of Naples Federico II, Department of Earth Sciences, Environment and ResourcesJournal Article20151003The main goal of the interpretation of potential field (gravity and magnetic) data is the determination of the buried source parameters, including the depth, horizontal position, structural index and physical properties, the latter being the magnetic susceptibility in the magnetic case and the density in the gravity case. For large datasets, modelling the data using inversion algorithms is very expensive. So, in the last decades a variety of algorithms have been developed for source parameter estimation (Nabighian et al., 2005a, b). The depth is often the most important parameter to be determined. Early depth-to-source methods were mainly graphical. Automatic methods began to appear as computers and digital data in 2D and 3D became available, most of them being field-derivative based methods. Many of these approaches have restricted use because of limiting assumptions in their theory. <br />Since no single method is the best, it is wise to use a variety of methods. In this paper, firstly, some of the widely used methods are described, such as Euler deconvolution (Thompson, 1982), the analytic signal (Nabighian, 1972, 1974), source parameter imaging (SPI) (Thurston and Smith, 1997), improved source parameter imaging (iSPI) (Smith et al., 1998), the enhanced local wavenumber (ELW) (Salem et al., 2005) and ANEUL (Salem and Ravat, 2003). These are applied to synthetic data produced by a fairly complex model simulating a magnetic basement, in order to estimate the position and structural index of its structures. <br />The SPI method requires second-order derivatives of the field and uses the local wavenumber concept. 
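Euler deconvolution, mentioned above, rests on the homogeneity equation (x - x0) df/dx + (z - z0) df/dz = -N f. A noise-free 2D-profile sketch with analytic derivatives; the source model, its position and the units are arbitrary illustration choices:

```python
import numpy as np

# Synthetic profile over a 2D source at x0 = 30, depth z0 = 8 (arbitrary units).
x = np.linspace(0.0, 60.0, 121)
x0_true, z0_true, A = 30.0, 8.0, 100.0
r2 = (x - x0_true) ** 2 + z0_true ** 2
f = A / r2                             # field homogeneous of degree -2 -> N = 2
fx = -2 * A * (x - x0_true) / r2 ** 2  # horizontal derivative (analytic)
fz = 2 * A * z0_true / r2 ** 2         # vertical derivative at z = 0

# Euler's homogeneity equation on the profile (z = 0), with N assumed known,
# rearranges to the linear system  fx*x0 + fz*z0 = x*fx + N*f.
N = 2.0
M = np.column_stack([fx, fz])
b = x * fx + N * f
(x0_est, z0_est), *_ = np.linalg.lstsq(M, b, rcond=None)
```

In practice the same least-squares solve is repeated in a sliding window along the profile; here one window covering the whole noise-free profile recovers the source position exactly.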
The SPI is a fast and automatic method, assuming as source model either a 2D sloping contact or a 2D dipping thin sheet; it provides estimates of depth, dip, edge location and physical property contrast. iSPI uses the same concept as SPI and is applied to 2D data. ELW combines the local wavenumber with the Euler equation to determine the source location and structural index. In this method a window is selected around the local-wavenumber peaks, and the source position is estimated by solving an overdetermined problem in each window. Once the source location is estimated, the structural index is obtained by using the main equations of the method. ANEUL is a fully automatic method, whose main equation is obtained by using the analytic signal up to order 2 in the Euler homogeneity equation. <br />Since all the above-mentioned methods make use of field derivatives of different orders, high-quality datasets are needed; otherwise the final results may be affected by even severe errors. <br />Multiscale methods (Fedi and Pilkington, 2012) are a different class of interpretative methods, based on the behavior of the potential fields at different altitudes. They allow determination of depth, horizontal location and source type. Taking advantage of a combined use of upward continuation and field differentiation, these methods are very stable and not as sensitive to noise as the other methods are. <br />In this paper, among the several multiscale methods we use the DEXP transformation (Fedi, 2007), the automatic DEXP (Abbas et al., 2014) and the geometric method (Fedi et al., 2009, 2012). In the DEXP transformation the field is calculated at several altitudes and scaled by a power law of the altitude. The depth can be obtained by finding the extreme points of the DEXP field. The automatic DEXP is based on computing the local wavenumber (of any order) at several altitudes and then scaling it with a proper function to estimate the structural index and the source position. 
This new version of the DEXP transform is fully automatic and does not need any a priori information. In the geometric approach, the maxima of the field at various scales are located along lines called ridges. Extrapolating the true ridges below the measurement surface is enough to find the source position at the ridge intersection.<br />In spite of using a noise-free dataset in the synthetic case, it is shown that the classical methods do not provide results as accurate as those of the multiscale methods. Comparison among several methods and evaluation of their consistency is really important and practical in real cases, in order to evaluate the final results and to decide which are the best ones. <br />We conclude the paper by applying all the methods to a magnetic profile over a 2D structure, in order to estimate its parameters. None of these methods is restricted to magnetic fields; they can also be applied to gravity fields or their derivatives.
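The DEXP scaling described above can be illustrated in one dimension: for a field decaying as 1/(h + z0)^n along the vertical above a point-like source, multiplying by h^(n/2) produces a maximum exactly at altitude h = z0, so the depth is read off the scaled field's extreme point. This is a toy 1D version of the transform; z0, n and A below are arbitrary:

```python
import numpy as np

# Field along the vertical above a point-like source buried at depth z0:
# f(h) = A / (h + z0)**n at altitude h (n plays the role of the scaling degree).
z0_true, n, A = 10.0, 3.0, 1.0
h = np.linspace(0.1, 40.0, 4000)
f = A / (h + z0_true) ** n

# DEXP scaling: multiply by h**(n/2); the maximum of the scaled field
# occurs at h = z0, mirroring the source depth above the surface.
W = h ** (n / 2) * f
z0_est = h[np.argmax(W)]
```

Setting the derivative of h^(n/2) (h + z0)^(-n) to zero gives (n/2)(h + z0) = n h, i.e. h = z0, which is why the extreme point of the scaled field images the depth directly.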
https://jesphys.ut.ac.ir/article_56082_ab15f90be71e0f18fbeda6b6919b9834.pdfInstitute of Geophysics, University of TehranJournal of the Earth and Space Physics2538-371X42220160822Precise edge detection of anomalies in potential field data using balanced horizontal derivative and second-order normalized total horizontal derivativePrecise edge detection of anomalies in potential field data using balanced horizontal derivative and second-order normalized total horizontal derivative3934035855110.22059/jesphys.2016.58551FASafaKhazaeiIHUJournal Article20151016Multiple methods are available to increase the resolution of the edges; they are usually functions composed of the first-order horizontal and vertical derivatives of potential field data. Cooper and Cowan (2006) presented the total horizontal derivative (THD), which performs poorly in delineating the edges of deep sources. Cooper and Cowan (2006) and Wijns et al. (2005) presented the TDX and Theta filters, respectively, which are less sensitive to the depth of the anomaly and detect edges more accurately than the THD filter. The edge-detection filters are all functions based on the horizontal and vertical derivatives of the data. The maxima of the total horizontal derivative and the zeros of the vertical derivative correspond to the edges of the source.<br />We find that the horizontal coordinates of the maxima of the total horizontal derivative and of the zeros of the vertical derivative are both larger than the true edge locations, with the maxima of the total horizontal derivative closer to the true edges.
We prove that the ratio of the total horizontal derivative to the second-order vertical derivative can yield more accurate edge-detection results.<br />We use the balanced horizontal derivative (BHD) edge-detection filter, which uses the ratio of the first-order horizontal derivative to the second-order horizontal derivative to recognize the edges of the source; the edges recognized by the BHD filter are more accurate and less sensitive to noise. We also use the second-order normalized total horizontal derivative (TDX2) to detect anomaly edges; it uses the ratio of the total horizontal derivative of the first-order vertical derivative to the second-order vertical derivative, and gives the same results as the BHD. Ma et al. (2014) presented the BHD and TDX2 filters as follows:<br />BHD = tan⁻¹( √((∂f/∂x)² + (∂f/∂y)²) / ( −k(∂²f/∂x² + ∂²f/∂y²) ) )    (1)<br />where ∂f/∂x and ∂f/∂y are the horizontal derivatives of the data f, −(∂²f/∂x² + ∂²f/∂y²) equals the second-order vertical derivative ∂²f/∂z² by Laplace's equation, and k = mean(∂f/∂z)/mean(∂²f/∂z²); the maxima of the absolute value of the BHD correspond to the edges of the source.<br /><br />TDX2 = tan⁻¹( √((∂f_z/∂x)² + (∂f_z/∂y)²) / |∂²f/∂z²| )    (2)<br /><br />where f_z is the first-order vertical derivative of the potential field f.<br />The results of modeling on synthetic and real data show that the edges recognized by TDX2 and BHD are more precise and clear, are consistent with the true values for the shallow anomaly, and that the BHD and TDX2 filters can display the edges of shallow and deep sources simultaneously. For an anomaly composed of three separate, interfering bodies of different extents, these filters separate the anomaly boundaries better than the previous filters. In this study, the above filters have also been applied to gravity data acquired over the aqueduct at the Institute of Geophysics.
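As an illustration of Eqs. (1) and (2), the sketch below computes BHD and TDX2 by finite differences over a hypothetical buried plate (half-width 20, depth 5 in arbitrary units); the model, grid and differencing scheme are illustrative choices, not those of the paper.

```python
import numpy as np

# Sketch of the BHD (Eq. 1) and TDX2 (Eq. 2) filters of Ma et al. (2014).
# Hypothetical model: a thin horizontal plate built from point sources;
# vertical derivatives are taken by differencing two observation heights.
x = np.linspace(-50.0, 50.0, 101)
X, Y = np.meshgrid(x, x)
sx = np.linspace(-20.0, 20.0, 21)          # source points filling the plate
SX, SY = np.meshgrid(sx, sx)

def g(h, z0=5.0):
    """Vertical gravity of the plate observed at height h (arbitrary units)."""
    out = np.zeros_like(X)
    for px, py in zip(SX.ravel(), SY.ravel()):
        out += (z0 + h) / ((X - px) ** 2 + (Y - py) ** 2 + (z0 + h) ** 2) ** 1.5
    return out

dz = 0.5
f0 = g(0.0)
fz = (g(dz) - g(-dz)) / (2.0 * dz)             # first-order vertical derivative
fzz = (g(dz) - 2.0 * f0 + g(-dz)) / dz ** 2    # second-order vertical derivative
fx = np.gradient(f0, x, axis=1)                # horizontal derivatives
fy = np.gradient(f0, x, axis=0)

k = np.mean(np.abs(fz)) / np.mean(np.abs(fzz))
bhd = np.arctan2(np.hypot(fx, fy), k * np.abs(fzz))       # Eq. (1), via Laplace
fzx = np.gradient(fz, x, axis=1)
fzy = np.gradient(fz, x, axis=0)
tdx2 = np.arctan2(np.hypot(fzx, fzy), np.abs(fzz))        # Eq. (2)

# Along the central row, the total horizontal derivative peaks near the plate
# edges at x = +-20, where BHD is large; over the plate centre BHD ~ 0.
row = 50                                    # y = 0
edge_x = x[np.argmax(np.hypot(fx, fy)[row] * (x > 0.0))]
```

Over the plate centre the horizontal gradient vanishes by symmetry, so BHD is near zero there and rises toward the edges, which is the behaviour the filter exploits.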
Based on the obtained results, the TDX2 and BHD filters show the general trend of the aqueduct well, separating the edge of the aqueduct anomaly from the other anomalies on the residual map. The edge width obtained with these filters is about 1.27 m, whereas it is 2.7 m for the TDX and Theta filters, which is impossible according to the geological information of the study area. These comparisons show that the aqueduct width determined by the BHD and TDX2 filters is more accurate than that of the other filters.
https://jesphys.ut.ac.ir/article_58551_38e1b4a947d2bcd22244f22135cb5920.pdfInstitute of Geophysics, University of TehranJournal of the Earth and Space Physics2538-371X42220160822Identification of spatial-temporal of cyclones changes in Mediterranean using numerical detection algorithmIdentification of spatial-temporal of cyclones changes in Mediterranean using numerical detection algorithm4054175855210.22059/jesphys.2016.58552FATeimourAlizadehJournal Article20150301Today, extratropical cyclones are recognized not only for the important influence they exert on midlatitude weather conditions but also for their integral role in the Earth's climate system (Lackmann, 2012). Extratropical cyclones are fundamental meteorological features and play a key role in a broad range of weather phenomena. They are a central component maintaining the global atmospheric energy, moisture, and momentum budgets. On the one hand they are responsible for an important part of our water supply, and on the other they are intimately linked with many natural hazards affecting the middle and high latitudes (wind damage, precipitation-related flooding, storm surges, and marine storminess). Thus, it is important to provide society with an accurate diagnosis of cyclone activity, which includes a baseline climatology of extratropical storms (e.g., Hoskins and Hodges, 2002).
Identifying and tracking extratropical cyclones might seem, superficially, to be a straightforward activity, but in reality it is very challenging (Neu et al., 2013). The Mediterranean Sea is one of the cyclogenesis regions of the Northern Hemisphere, with positive vorticity and cyclogenesis occurring in both its western and eastern parts. <br />Many numerical studies of cyclogenesis in the Mediterranean Sea exist. Petterssen (1957) first identified the Mediterranean cyclone frequency in a Northern Hemisphere study. Alpert et al. (1990) studied cyclogenesis in the western and eastern Mediterranean and found that vorticity plays a main role in cyclogenesis in the region, and that lee cyclogenesis is important in the Gulf of Genoa and near Cyprus. One of the first studies of the impact of Mediterranean cyclones on the climate of Iran is that of Alijani (1364), who found that these systems have a major impact on the climate of Iran. Sedaghat et al. (1378) studied cyclone tracks in the Middle East and reported that most cyclones occur in the west and northwest of Iran. A main motivation of the present study is to identify the spatial-temporal changes of Mediterranean cyclones, because climate change and global warming alter the atmospheric general circulation and atmospheric phenomena. <br />For numerical cyclone detection, the ERA-Interim database, the latest global atmospheric reanalysis of ECMWF (Dee et al., 2011), was used; it is available at six-hour intervals with a resolution of 0.5°×0.5° in longitude and latitude for the period 1980 to 2013. The cyclone positions are defined as local minima of the geopotential height of the 1000-hPa surface (z1000) with respect to the eight neighboring grid points. Additionally, in order to retain only intense vortices, a minimum mean vorticity within a 600 km radius of the minimum point is required (Blender et al., 1999). The mean-vorticity threshold is 5×10⁻⁵ s⁻¹, because shallow thermal lows occur in this region and this is the best threshold for excluding them. After detecting the cyclones, their spatial-temporal changes were identified using frequency sums and mean centers for three eleven-year periods. <br />The spatial distribution of the cyclones, comprising 12180 cyclones in the region, is shown in Figure 2. Most cyclogenesis occurs in the western Mediterranean, with the highest concentrations west and east of Italy; moving toward the eastern Mediterranean, the cyclone concentration decreases, and the main cyclogenesis centers there are on the coast of Cyprus and the western coast of Turkey. It is notable that cyclones with a frequency of more than one are also observed over Iran. <br />About 14 percent of Mediterranean cyclones occur in January, mostly formed west and east of Italy; in this month cyclones also concentrate over Turkey and Syria. Cyclogenesis frequency is highest in February and lowest in July. The trend of cyclogenesis in the Mediterranean is positive, with a regression coefficient of 1.04 cyclones per year, but the coefficient of determination of the regression equation is very low and close to zero, showing that the regression cannot establish a meaningful correlation between time and cyclone frequency. The monthly trends of Mediterranean cyclones are negative in May, July, August and October and positive in the other months but, as with the annual trend, the coefficients of determination are low. The results show that the temporal trend of cyclogenesis in the Mediterranean does not change significantly in the study area.
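The detection rule described above (a z1000 local minimum with respect to its eight neighbours, to be screened afterwards by the mean-vorticity threshold) can be sketched on a synthetic low; the field and grid below are hypothetical, not ERA-Interim data.

```python
import numpy as np

# A cyclone candidate is a grid point whose 1000-hPa geopotential height is
# strictly lower than at all eight neighbours; candidates would then be kept
# only if the mean vorticity within 600 km exceeds 5e-5 1/s (not shown here).
lon = np.linspace(0.0, 40.0, 81)          # 0.5-degree spacing, as in the study
lat = np.linspace(30.0, 50.0, 41)
LON, LAT = np.meshgrid(lon, lat)
# Hypothetical synthetic low centred at 40N, 20E:
z1000 = 120.0 - 80.0 * np.exp(-((LON - 20.0) ** 2 + (LAT - 40.0) ** 2) / 30.0)

def local_minima(z):
    """True where z is strictly below all eight neighbours (interior points)."""
    c = z[1:-1, 1:-1]
    is_min = np.ones(c.shape, dtype=bool)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if di == 0 and dj == 0:
                continue
            is_min &= c < z[1 + di:z.shape[0] - 1 + di, 1 + dj:z.shape[1] - 1 + dj]
    out = np.zeros(z.shape, dtype=bool)
    out[1:-1, 1:-1] = is_min
    return out

mask = local_minima(z1000)
ii, jj = np.nonzero(mask)
centres = list(zip(LAT[ii, jj], LON[ii, jj]))   # single centre near 40N, 20E
```

On real six-hourly fields the same test is applied at every time step, and the surviving minima are linked in time to build cyclone tracks.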
The mean centers of cyclogenesis show that, over the three periods, the mean cyclone center has shifted westward and to higher northern latitudes.
https://jesphys.ut.ac.ir/article_58552_01027dddac8eb9765cbbb91548b6252a.pdfInstitute of Geophysics, University of TehranJournal of the Earth and Space Physics2538-371X42220160822Modeling and interpolation of ionosphere total electron content using artificial neural network and GPS observationModeling and interpolation of ionosphere total electron content using artificial neural network and GPS observation4194375550410.22059/jesphys.2016.55504FAMir RezaGhaffari Razin0000-0002-5579-5889BehzadVoosoghiJournal Article20150314Global Positioning System (GPS) signals provide valuable information about the physical structure of the ionosphere. Using these signals, the total electron content (TEC) can be derived for each line of sight between a receiver and a satellite.
For historical and other sparse data sets, the reconstruction of TEC images is often performed using multivariate interpolation techniques. Recently it has become clear that techniques derived from artificial-intelligence research and modern computer science provide a number of tools to analyze and predict the behavior of complex solar-terrestrial dynamic systems. Methods of artificial intelligence have provided tools which make the task of ionospheric modeling feasible. An artificial neural network (ANN) provides an implicit non-linear model that learns the relations between inputs and outputs from training data. <br /> A neural network is an information-processing system formed by a large number of simple processing elements, known as artificial neurons. The input data are multiplied by the corresponding weights and the summation is fed into the neurons. Each neuron has an activation function; the inputs pass through the activation function, which determines the neuron's output. The behavior of the neural network depends on the connections between nodes. Using training data, the designed ANN is adjusted in an iterative procedure to determine its optimal parameters; then, for an unknown input, the corresponding output can be computed with the trained ANN. The numbers of neurons in the input and output layers are determined by the numbers of input and output parameters. The number of neurons in the hidden layer can be determined by trial and error, by minimizing the total error of the ANN. For this minimization, each parameter's share of the total error must be computed, which is achieved with a back-propagation algorithm.<br /> The radial basis function neural network (RBFNN) is known from approximation theory, where it is applied to the real multivariate interpolation problem. The RBFNN was popularized by Moody and Darken (1989), and many researchers have suggested it as an alternative ANN structure to the MLP. The RBFNN is very useful for function approximation and classification problems because of its more compact topology and faster learning speed. The RBFNN is configured with three layers. An input layer of source neurons distributes the input vectors to each neuron in the hidden layer without any multiplicative factors. The single hidden layer has receptive-field units (hidden neurons), each of which represents a nonlinear transfer function called a basis function. The output layer produces a linear weighted sum of the hidden-neuron outputs and supplies the response of the RBFNN.<br /> Due to the nonlinearity of the physical properties of the ionosphere, in this paper multi-layer perceptron artificial neural networks (MLP-ANN) and RBFNN are used to model and predict the spatial and temporal variations of vertical TEC (VTEC) over Iran. The model is able to estimate and predict the VTEC within and also near the network. For this work, observations of 22 GPS stations in northwest of Iran (360
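The three-layer RBFNN structure described above can be sketched as follows; the toy diurnal series, the Gaussian centres and the common width are hypothetical stand-ins for the paper's GPS-derived VTEC data.

```python
import numpy as np

# Minimal RBFNN sketch: inputs pass unchanged to a hidden layer of Gaussian
# basis functions; a linear output layer is fit by least squares (one common
# alternative to gradient training, used here only for brevity).
x = np.linspace(0.0, 24.0, 97)[:, None]            # input: time of day (hours)
y = 10.0 + 8.0 * np.sin(np.pi * x[:, 0] / 12.0)    # toy "VTEC" target (TECU)

centres = np.linspace(0.0, 24.0, 12)[None, :]      # hidden-unit centres
width = 2.5                                        # common basis-function width
phi = np.exp(-(x - centres) ** 2 / (2.0 * width ** 2))   # hidden-layer outputs
phi = np.hstack([phi, np.ones((len(x), 1))])             # bias column
w, *_ = np.linalg.lstsq(phi, y, rcond=None)              # output-layer weights

y_hat = phi @ w                                    # RBFNN response
rmse = np.sqrt(np.mean((y - y_hat) ** 2))          # training fit error
```

The compact topology is visible here: the only trainable parameters of the output stage are the linear weights, which is what gives the RBFNN its fast learning relative to an MLP.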
https://jesphys.ut.ac.ir/article_55504_fb562e93a658ef6def5b77a77fd2e94a.pdfInstitute of Geophysics, University of TehranJournal of the Earth and Space Physics2538-371X42220160822Comparative Analysis of Ionospheric VTEC Parameter and Meteorological Precipitation Parameter in Esfahan and ShirazComparative Analysis of Ionospheric VTEC Parameter and Meteorological Precipitation Parameter in Esfahan and Shiraz4394485568110.22059/jesphys.2016.55681FASedigheOmidiMohammad HosseinMemarian0000-0002-1899-8302Journal Article20150316Reduced rainfall in many parts of the world has prompted scientists to try, in different ways, to manipulate nature to cause precipitation from the water accumulated in the atmosphere. Atmospheric ionization is a new method for producing precipitation and has attracted much attention. In this regard, in recent years, the relation between meteorological phenomena, especially tropical cyclones, and the ionosphere has been studied in different ways in some regions of the world. Because of the importance of water and precipitation, the present paper uses three kinds of data, consisting of geomagnetic, ionospheric and meteorological data, in order to detect the presence or absence of a correlation between the ionosphere and precipitation.<br />The meteorological and geomagnetic data are directly measured, whereas the ionospheric data are derived from raw GPS data.
For the analysis, in the first step the geomagnetic conditions were analyzed using two geomagnetic indices, Kp and Dst, for a specific period. In the second step, the ionospheric Vertical Total Electron Content (VTEC) data were analyzed and the geomagnetically disturbed days were omitted from the same period. The daily maximum of VTEC was selected as the factor differentiating the various days. In the third step, the time series of maximum ionospheric VTEC obtained from processing the data of two GPS stations under quiet geomagnetic conditions were analyzed and compared with the variations of precipitation recorded at two synoptic stations collocated with the GPS stations and differing considerably in precipitation. These stations are Shiraz, as the high-rainfall station, and Esfahan, as the low-rainfall station, considered over a 41-day period from 1 January to 10 February 2011. The time series of the variations of the VTEC maxima, meaning the difference between the maximum VTEC on a given day and that of the previous day, were also compared with the precipitation time series at both stations. From these analyses it was observed that there is a systematic relationship between the variations of the maximum ionospheric VTEC and the occurrence of precipitation, such that precipitation peaks occur after intense fluctuations of particular amplitude in the VTEC maxima. Accordingly, mild changes of the ionospheric VTEC maxima in a period accompany stable tropospheric conditions, whereas intense changes of the maxima in the same period accompany unstable tropospheric conditions and the occurrence of precipitation. The time series of VTEC maxima for the same period was also obtained by running the global ionospheric model IRI2012 and compared with the time series obtained from processing the raw GPS data.
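The day-to-day variation of the VTEC maxima used in the comparison can be computed as below; the series and the flagging threshold are hypothetical illustrations, not the processed Shiraz/Esfahan data.

```python
import numpy as np

# Day-to-day change of the daily VTEC maximum: the difference between each
# day's maximum and the previous day's maximum over a 41-day window.
rng = np.random.default_rng(1)
days = np.arange(41)                       # matches the 1 Jan - 10 Feb window
vtec_max = 20.0 + 3.0 * np.sin(days / 5.0) + rng.normal(0.0, 1.0, size=41)

dvtec = np.diff(vtec_max)                  # max(day d) - max(day d-1)
# Days with unusually large |change| would be the candidates compared against
# the precipitation peaks; the 2.0 TECU threshold is purely illustrative.
flagged = days[1:][np.abs(dvtec) > 2.0]
```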
It seems that the best kind of data for studying the troposphere-ionosphere correlation are ionospheric data obtained from processing the raw data of GPS stations; ionospheric data obtained from running global ionospheric models are not suitable for this purpose, owing to their inability to show details. This may be because the VTEC maxima from processing raw GPS data in this paper are point measurements, whereas the VTEC maxima from the global ionospheric model IRI2012 have a global scale and simulate the large-scale variations of the ionospheric parameters.
In the third step, it analyses and compares time series of maximums of ionospheric VTEC from processing of data from two GPS stations, in quiet geomagnetic conditions, with variations of precipitation recorded in two synoptic stations situated in same place of GPS stations, that have considerable difference in precipitation. These stations are Shiraz as the high rainfall station and Esfahan as the low rainfall station that considered in A 41-day period since 1 Jan to 10 February 2011. Also it is compared and analyzed the time series of variations of maximums of VTEC, to mean the difference of maximum of VTEC in any day than maximum of VTEC in days ago, with time series of precipitation in both stations. With these analysis it was observed that there is systematic relationship between variations of maximum of ionospheric VTEC and occurrence of precipitation. So that after occurrence intense fluctuations with special amplitude in maximum of VTEC, occur precipitation peaks. Therefore along with mild changes of maximums of ionospheric VTEC in a period, it observe tropospheric stable conditions and along with intense changes of maximums of ionospheric in the same period, it observed unstable tropospheric conditions and precipitation occurred. Also, the time series of maximums of VTEC in the same period obtained from running the global ionospheric model IRI2012 and compared with the time series of maximums of ionospheric VTEC from processing the raw data of GPS. it seems that the best kind of data for study the troposphere-ionosphere correlation, are ionospheric data from processing the raw data of GPS stations and the ionospheric data from running the ionospheric global models, due to the inability show details, are not suitable for this. 
It may be because of this, that maximums VTEC from processing raw data of GPS in this paper are point measurements and the maximums of VTEC from global ionospheric model IRI2012 has global scale and simulate large scale variations of ionospheric parameters.https://jesphys.ut.ac.ir/article_55681_87fd12073f6c467f04fa56a35d3ebee9.pdfInstitute of Geophysics, University of TehranJournal of the Earth and Space Physics2538-371X42220160822Measurement of the atmospheric visibility distance by imaging a linear grating with sinusoidal amplitude and having variable spatial period through the atmosphereMeasurement of the atmospheric visibility distance by imaging a linear grating with sinusoidal amplitude and having variable spatial period through the atmosphere4494585855310.22059/jesphys.2016.58553FASaifollahRasouliAssociate Professor of Physics,
Department of Physics, Institute for Advanced Studies in Basic Sciences (IASBS)ElahehHaririUniversity of ZanjanSiamakKhademiUniversity of ZanjanJournal Article20150414In meteorology, the daytime atmospheric visibility distance (or visibility) is defined as the greatest distance at which a large dark object against the light sky at the horizon can be seen and clearly recognized by the unaided eye. To determine the maximum visible distance at nighttime, a known, preferably unfocused, moderately intense light source is usually used, the maximum distance being that at which it can still be recognized. These definitions of atmospheric visibility distance were developed from human observation through the atmosphere. The presence of dust, fog, haze, pollution, or smoke in the atmosphere reduces the meteorological visibility. Visibility degradation is one of the manifestations of atmospheric pollution and airborne particles, and is mainly due to the absorption and scattering of light by aerosols in the atmosphere. In this regard, the local air quality is also reflected by the atmospheric visibility distance. Measurement of the atmospheric visibility distance is an important issue in transportation, and low atmospheric visibility is chiefly a traffic-safety problem. A reliable measure of atmospheric visibility is therefore necessary at roads for driving, at airports for the takeoff and landing of airplanes, at ports for the movement of ships, and so on. <br />Based on these definitions, the measurement of atmospheric visibility distance is affected by many factors, such as the size and shape of the target, the ambient light intensity of the observing area, the observer's angle to the target and height, the light intensity of night targets, and so on. In addition, human factors affect the measurements, because the visibility targets must be both detected and recognized by the naked eye.
Many instrumental approaches for measuring the atmospheric visibility distance have been developed. Transmissometers and scatter meters are two types of instruments used to determine the atmospheric visibility distance. A transmissometer operates by sending a narrow collimated laser beam through the atmosphere; it extrapolates the attenuation of the beam over a known path length to estimate the distance over which the emitted light is attenuated by 95%. A scatter meter assesses the dispersion of a light beam at a particular scattering angle. In this work, we introduce a new, essentially optical, method based on measuring the optical visibility, or contrast, of the image of a periodic pattern captured through the atmosphere by a telescope equipped with a digital camera. Compared with other methods of measuring the atmospheric visibility distance, this method is less affected by setup and instrumental factors. <br />In this paper we present a new method for measuring the atmospheric visibility distance by imaging a reflective sinusoidal linear grating with a variable spatial period. In the experiment, a sinusoidal grating with a period varying from 7 mm to 11 cm is printed on a 1 m × 3 m area and pasted on a suitable wooden frame. The frame is installed at a height of 3 m above the ground surface and at a distance of 475 m from an imaging system consisting of a Newtonian telescope and a CCD camera. The CCD is installed at the focal plane of the telescope. Images of the grating are recorded through the atmosphere by the CCD on different days and at different times of day. The local visibilities of the grating images are measured and the cut-off frequency of the patterns is determined. From the cut-off frequency of the image patterns, the atmospheric visibility distance at the recording time is determined.
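The contrast-and-cut-off determination described above can be sketched as follows. This is an illustrative outline under stated assumptions, not the authors' exact image-processing pipeline: the fringe frequencies and contrast values are hypothetical, and the 0.02 threshold is the classical liminal-contrast value.

```python
# Michelson (fringe) contrast of an imaged sinusoidal grating, and the
# cut-off frequency: the highest spatial frequency whose contrast still
# exceeds a liminal threshold after propagation through the atmosphere.

def michelson_contrast(i_max, i_min):
    """Fringe visibility from the max/min image intensities of one period."""
    return (i_max - i_min) / (i_max + i_min)

def cutoff_frequency(contrast_by_freq, threshold=0.02):
    """Highest spatial frequency (cycles/m) with contrast above threshold,
    or None if no fringe is resolved."""
    visible = [f for f, c in contrast_by_freq.items() if c > threshold]
    return max(visible) if visible else None

# Hypothetical contrasts measured at four fringe frequencies of the grating:
contrasts = {9.1: 0.41, 33.3: 0.12, 77.0: 0.03, 143.0: 0.008}
fc = cutoff_frequency(contrasts)   # 77.0 cycles/m
```

The measured cut-off frequency would then be mapped to an atmospheric visibility distance through the calibration of the setup, as the abstract describes.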
Our results for the visibility distance at different times are comparable with the weather reports from the Zanjan airport station, which uses a different method.https://jesphys.ut.ac.ir/article_58553_f7695b76827976dd7180021c37515808.pdfInstitute of Geophysics, University of TehranJournal of the Earth and Space Physics2538-371X42220160822Study of long-term variation of extinction coefficient based on horizontal visibility in busiest airports in IranStudy of long-term variation of extinction coefficient based on horizontal visibility in busiest airports in Iran4594675550310.22059/jesphys.2016.55503FAMasoudKhoshsimaSatellite System Institute, Iranian Space Research CenterFarhangAhmadi-GiviInstitute of Geophysics, University of TehranJournal Article20150601Atmospheric visibility is a key factor in everyday life, mainly in aviation, industry and surface traffic. It is defined as the greatest distance at which an observer can just see a black object viewed against the horizon sky, a distance known as the visual range. The light extinction, which is mostly due to the absorption and scattering effects of aerosols in the atmosphere, can be calculated from the visual range using the Koschmieder formula; visibility and extinction are thus inversely related. In a non-polluted atmosphere, visibility ranges from 145 to 225 km, and in normal atmospheric conditions it ranges from 10 to 100 km; in polluted areas, however, it can be remarkably low. Visibility is regularly measured at synoptic meteorological stations all over the world as a standard meteorological parameter: skilled observers measure the visual range using individual markers at known distances from the meteorological station, viewed against the horizon sky. <br /> In this paper, airport visibility data for the period 1970 to 2010 are examined for the four busiest airports in Iran: Tehran-Mehrabad, Mashhad, Shiraz and Isfahan.
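The Koschmieder formula mentioned above links the visual range V to the extinction coefficient through a liminal contrast threshold, conventionally 2%; a minimal sketch:

```python
# Koschmieder relation: extinction (km^-1) from visual range (km),
#   extinction = ln(1 / C_threshold) / V  ≈ 3.912 / V  for C_threshold = 0.02.
import math

def extinction_from_visual_range(v_km, contrast_threshold=0.02):
    """Extinction coefficient (km^-1) from visual range (km)."""
    return math.log(1.0 / contrast_threshold) / v_km

# e.g. a visual range of 13 km corresponds to an extinction of ~0.30 km^-1:
eps = extinction_from_visual_range(13.0)
```

The inverse relation is visible directly: halving the visual range doubles the computed extinction coefficient.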
All data from the four airport stations have been used to analyze the temporal variations. The analyses are based on daily average measurements, i.e. the average of the 9, 12 and 15 UTC data. Midday values are usually used in studies of this kind because they are more representative of regional visibility levels: early-morning radiation fogs and high relative humidity, which may reflect only the local environment, have usually dispersed by midday. The historical trends of the extinction coefficient derived from the visual range are computed for the four airports. Each trend is determined by a least-squares regression analysis of the midday average extinction. In general, an upward atmospheric extinction trend is seen for all stations; Tehran-Mehrabad airport has the strongest increasing trend, with extinction rising from around 0.3 km⁻¹ in the early 1970s to around 0.4 km⁻¹ at present. <br /> Airport visibility data, which inherently undervalue the true visibility, are most appropriately summarized by cumulative percentiles. The Nth cumulative percentile is the visibility that is equaled or exceeded N percent of the time, and visibility data lend themselves well to this treatment. Daily visibility observations over the last four decades are investigated at the 10th and 90th cumulative percentiles to show the threshold visibility at each airport; the 10th and 90th cumulative percentiles are used to identify 'good' and 'poor' visibility, respectively. The results show no distinctive difference in the 40-year poor or good visibilities among the stations; however, Tehran airport has the lowest visual range of the four, while the good-visibility threshold is largest, around 20 km, at the Shiraz and Mashhad airports. <br /> A monthly comparison of extinction over the 40 years shows a difference of about 2 km between the winter months and the rest of the year.
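The two summary statistics used above can be sketched as follows: the Nth cumulative percentile of a visibility record (the value equaled or exceeded N percent of the time) and a least-squares slope for the extinction trend. The sample visibility series is hypothetical.

```python
# Cumulative percentile of a visibility record, plus a plain least-squares
# trend slope against equally spaced time steps.

def cumulative_percentile(values, n):
    """Visibility equaled or exceeded n percent of the time."""
    ordered = sorted(values, reverse=True)   # best visibility first
    k = max(0, min(len(ordered) - 1, int(round(n / 100.0 * len(ordered))) - 1))
    return ordered[k]

def least_squares_slope(y):
    """Slope of y regressed against 0..len(y)-1 (trend per time step)."""
    n = len(y)
    mx, my = (n - 1) / 2.0, sum(y) / n
    num = sum((x - mx) * (v - my) for x, v in zip(range(n), y))
    den = sum((x - mx) ** 2 for x in range(n))
    return num / den

# Hypothetical daily visibilities (km):
vis = [22, 18, 15, 12, 10, 9, 8, 6, 5, 3]
good = cumulative_percentile(vis, 10)   # exceeded only 10% of the time
poor = cumulative_percentile(vis, 90)   # exceeded 90% of the time
```

A positive `least_squares_slope` over a midday-average extinction series corresponds to the upward extinction trend reported for the four airports.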
This can be due to the effect of weather or of pollutant concentrations in different months; the results of a correlation analysis indicate that the difference may arise from the variation in relative humidity between months. To minimize the effect of humidity, days with relative humidity above 70 percent and cloudiness above 4/8 of the sky are removed from the visibility trend analysis. Detailed analyses show that the trends for the screened days are nearly parallel to the trends of the raw data, with only a slight difference at each airport. An increase in extinction since 1970 is also observed in the absolute values of extinction. The extinction trend is not significantly changed at the Tehran and Isfahan airports, which may emphasize the role of air pollution in atmospheric extinction.https://jesphys.ut.ac.ir/article_55503_a926af78b8ae3df6e55db9366616d71c.pdf