Institute of Geophysics, University of Tehran. Journal of the Earth and Space Physics, ISSN 2538-371X, Vol. 37, No. 1, 21 April 2011.

Edge detection of magnetic body using horizontal gradient of pseudo-gravity anomaly
Kamal Alamdar, Abdolhamid Ansari

Mapping the edges of magnetized bodies is fundamental to the application of magnetic data to geologic mapping. Whether as a guide for subsequent field mapping or as a predictive mapping tool in areas of limited exposure, delineating lateral magnetization changes provides information not only on lithological changes but also on structural regimes and deformation styles and trends. Adding contact locations to maps of the magnetic field, or to enhanced versions of the field (derivatives, transforms, etc.), significantly improves the interpretive power of such products. This has recently become particularly important because of the large volumes of magnetic data being collected for environmental and geological applications. Hence, a variety of semi-automatic methods, based on derivatives of the magnetic field, have been developed to determine magnetic source parameters such as boundary locations and depths. Almost all methods that determine contact locations are based on calculating some function of the magnetic field that produces a maximum over a source body edge. Finding the maxima is then done efficiently with the curve-fitting approach of Blakely and Simpson (1986). Gravity and magnetic data are usually processed and interpreted separately, and fully integrated results are essentially created in the mind of the interpreter. Data interpretation in this manner requires an interpreter experienced in both potential field theory and the geology of the study area.
To simplify the joint interpretation of the data, the automatic production of auxiliary interpretation products, in the form of maps or profiles, is useful to help a less experienced interpreter or when investigating regions with poorly known geology. Fortunately, a suitable theoretical background for the joint interpretation of gravity and magnetic anomalies is well established and can readily serve in generating such products. Because of its mathematical expression, this theory is commonly referred to as the Poisson relation or Poisson theorem. It provides a simple linear relationship connecting the gravity and magnetic potentials and, by extension, the field components commonly derived from geophysical surveys. For the relation to hold, an isolated source must have uniform density and magnetization contrasts. The relationship, however, is independent of the shape and location of the source. Therefore, the magnetic field can be calculated directly from the gravity field without knowing the geometry of the body or how magnetization and density are distributed within it, and vice versa.
Therefore, a magnetic grid may be transformed into a grid of pseudo-gravity. The process requires reduction to the pole, plus a further step that converts the essentially dipolar nature of a magnetic field into its equivalent monopolar form. The result, with suitable scaling, is comparable with the gravity map: it shows the gravity map that would have been observed if density were proportional to magnetization (or susceptibility). Comparing gravity and pseudo-gravity maps can reveal a good deal about the local geology. Where anomalies coincide, the source of the gravity and magnetic disturbances is likely to be the same geological structure. Similarly, a gravity grid can be transformed into a pseudo-magnetic grid, although this is less common. The pseudo-gravity transformation is a linear filter, usually applied to magnetic data in the frequency domain. This filter produces a useful result because interpreting and quantifying a gravity anomaly is easier than a magnetic one.
Filtering (enhancement) techniques separate signals of different wavelengths to isolate, and hence enhance, anomalous features of a given wavelength. One enhancement method in magnetic data filtering is the Total Horizontal Derivative (THDR), in which maxima of the filtered map indicate source edges. It is complementary to the traditional filters and to first-vertical-derivative enhancement. It usually produces a more exact location for faults than the first vertical derivative, but for magnetic data it must be used in conjunction with other transformations such as reduction to the pole (RTP) or pseudo-gravity. Computing the horizontal gradient of the pseudo-gravity anomaly and mapping its maxima delineates the edges of the magnetic causative body. In this paper this method is applied to a synthetic magnetic anomaly and to the magnetic anomaly of the Gol-Gohar area in Sirjan, where it reveals a body about 30 m wide.
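As an illustration of the edge-detection step, the following sketch computes the total horizontal derivative of a gridded field with NumPy. The grid spacing and the rectangular body are invented for the example; in practice the input would be the pseudo-gravity grid rather than a synthetic step function.

```python
import numpy as np

def total_horizontal_derivative(grid, dx, dy):
    """Total horizontal derivative (THDR) of a gridded field.

    Maxima of the THDR map lie approximately over the vertical
    edges of the causative body.
    """
    # np.gradient returns derivatives along axis 0 (y) and axis 1 (x)
    dgdy, dgdx = np.gradient(grid, dy, dx)
    return np.hypot(dgdx, dgdy)

# Toy example: a uniform "pseudo-gravity" high over a rectangular body.
ny, nx = 64, 64
field = np.zeros((ny, nx))
field[24:40, 20:44] = 1.0          # anomaly over the body
thdr = total_horizontal_derivative(field, dx=10.0, dy=10.0)
# The largest gradients sit on the body outline, not over its centre.
```

Contouring `thdr` and tracing its ridge lines (e.g. with the Blakely and Simpson curve-fitting approach mentioned above) then yields the contact map.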
Investigation of Soil Magnetic Characteristics Efficiency for Sediment Sources Differentiation
Asghar Kouhpeima, Sadat Feiznia, Hasan Ahmadi, Mohammad Moazzami

Designing effective strategies for sediment management and control is closely tied to identifying the sediment sources in a drainage basin. One method for investigating sediment sources is the use of magnetic tracers in erosion and sediment-yield studies. Sediment source identification using magnetic characteristics is a simple, inexpensive, quick and non-destructive method applicable in different environments. One parameter used in most sediment-tracing studies is magnetic susceptibility, measured at low frequency (XLf) and high frequency (XHf).
Another magnetic parameter used in sediment tracing is the frequency-dependent susceptibility (XFD). The objective of this study is to investigate the applicability of magnetic characteristics of soil and sediment samples as tracers for identifying and differentiating sediment sources in five small drainage basins in Semnan Province: Amrovan, Atari, Ebrahim-Abad, Ali-Abad and Royan. All the basins are rangeland, with a cool, semi-arid climate and varied lithological units.
Through fieldwork, different lithological units (as surface sources) and gully walls (as sub-surface sources) were identified as sediment sources. Sampling of surface sources was performed at 0-2 cm depth and sampling of sub-surface sources was performed on gully walls. Sediments deposited behind the dam reservoirs at the basin outlets were also sampled. A total of 250 samples were collected. The samples were air-dried and sieved, and particles smaller than 63 microns were separated for further analysis.
In this study, two parameters, XLf and XFD, which are easily measured with a magnetic susceptibility meter, were used as sediment tracers. These parameters were measured using the Bartington MS2 susceptibility meter of the Institute of Geophysics, University of Tehran.
The potential of the magnetic parameters as sediment tracers in the studied drainage basins was assessed using the Kruskal-Wallis test and Discriminant Function Analysis (DFA). In the Kruskal-Wallis test, the p-value for all characteristics was lower than the critical value, so all were entered into the DFA. The percentages of samples correctly classified by the DFA were between 39.9% and 57.5% in the different drainage basins. The highest percentage using the combined XLf and XFD parameters was 65% in the Ali-Abad drainage basin, and the lowest was 48% in the Amrovan drainage basin. No single characteristic could completely differentiate the sediment sources in the different basins. XLf had a lower differentiation potential than XFD in all basins; for example, the differentiation potential of XLf varies from 39.9% in Ebrahim-Abad to 52.5% in Ali-Abad, whereas that of XFD ranges from 43.3% (Ebrahim-Abad) to 57.5% (Ali-Abad). The results show that using composite tracers increases the differentiation potential relative to single tracers.
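The two-stage screening-and-classification procedure described above can be sketched as follows. The susceptibility values, group names and sample counts are invented for the illustration, not taken from the study; the point is the workflow of a Kruskal-Wallis screen followed by a discriminant analysis scored by classification accuracy.

```python
import numpy as np
from scipy.stats import kruskal
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

# Synthetic low-frequency susceptibility (XLf) values for three
# hypothetical source groups (surface units and gully walls).
groups = {
    "unit_A": rng.normal(50, 5, 30),
    "unit_B": rng.normal(70, 5, 30),
    "gully":  rng.normal(90, 5, 30),
}

# Step 1: the Kruskal-Wallis H-test screens each tracer; a small
# p-value means the tracer separates at least one source group.
H, p = kruskal(*groups.values())

# Step 2: tracers that pass the screening enter a discriminant
# function analysis; the score is the fraction correctly classified.
X = np.concatenate(list(groups.values())).reshape(-1, 1)
y = np.repeat(list(groups), 30)
lda = LinearDiscriminantAnalysis().fit(X, y)
accuracy = lda.score(X, y)
```

With several tracers, `X` simply gains more columns, which is the "composite tracer" case that raised the classification percentages in the study.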
Using semblance based coherency to detect micro faults in the Khangiran gas field
Ali Hashemi Gazar, Abdolrahim Javaherian

Seismic attributes are very useful in seismic data interpretation. One of these attributes is coherency. Seismic coherency is a complex-trace, geometrical attribute applied to a 3D cube of seismic data. It is a measure of lateral changes in acoustic impedance caused by variations in structure, stratigraphy, lithology, porosity, and the presence of hydrocarbons.
When coherency attributes are applied to seismic data, they show the continuity between two or more traces within the seismic window. The degree of seismic continuity is an indicator of geological continuity. The 3D seismic coherency cube can be extremely effective in delineating faults. Three algorithms are commonly used to calculate the coherency attribute: semblance, eigenstructure and cross-correlation. The input to these algorithms is 3D seismic data. Similar traces are mapped with high coherence coefficients, and dissimilar traces receive lower coefficients. In this paper, we implemented the semblance-based coherency algorithm in MATLAB and applied it to synthetic data. For this purpose, we generated several 3D synthetic seismic cubes including micro-faulted horizontal, dipping, and cross-dipping layers. We also studied the effect of the dominant frequency, the signal-to-noise ratio and the size of the analysis cube on the calculated coherency attribute. We used a Ricker wavelet with a dominant frequency of 30 Hz for horizontal layers and 35 Hz for dipping layers, and a signal-to-noise ratio of 1. We applied all three coherency approaches to a data set from the Khangiran gas field in NE Iran.
This method is employed using as narrow a temporal analysis window as possible, typically determined by the highest usable frequency in the input seismic data. Near-vertical structural features, such as faults, are better enhanced when a longer temporal analysis window is used. With this algorithm we were able to balance the conflicting requirements of maximizing lateral resolution and increasing the S/N ratio. We studied the applicability of this algorithm to detecting faults with minor displacements and compared its results with those of the eigenstructure and cross-correlation methods over the same data set. The semblance-based coherency algorithm provided better results than the other two methods, yielding a better coherency cube than the eigenstructure-based approach.
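A minimal sketch of the semblance measure itself (here in Python rather than the paper's MATLAB) illustrates why identical traces score 1 and noisy traces score lower. The window size, wavelet parameters and noise level are arbitrary example choices.

```python
import numpy as np

def semblance(window):
    """Semblance of a 2-D analysis window (time samples x traces).

    Identical traces give semblance 1; uncorrelated traces give
    values near 1/N for N traces.
    """
    n_traces = window.shape[1]
    num = np.sum(np.sum(window, axis=1) ** 2)
    den = n_traces * np.sum(window ** 2)
    return num / den

def ricker(t, f0):
    """Ricker wavelet, as used for the synthetic cubes in the text."""
    a = (np.pi * f0 * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

t = np.arange(-0.1, 0.1, 0.002)
trace = ricker(t, 30.0)                      # 30 Hz dominant frequency

coherent = np.column_stack([trace, trace, trace])
rng = np.random.default_rng(1)
noisy = coherent + rng.normal(0, 0.5, coherent.shape)

s_coh = semblance(coherent)    # identical traces: semblance of 1
s_noisy = semblance(noisy)     # drops as the traces decorrelate
```

Sliding such a window through a 3D cube, one semblance value per sample, produces the coherency cube in which low values trace the faults.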
Using the Multiplicative Schwarz Alternating Algorithm (MSAA) for Solving the Large Linear System of Equations Related to Global Gravity Field Recovery up to Degree and Order 120
Abdolreza Safari, Mohammad Ali Sharifi, Babak Amjadiparvar

The GRACE mission has substantiated the Low-Low Satellite-to-Satellite Tracking (LL-SST) concept. The LL-SST configuration can be combined with the high-low SST concept previously realized in the CHAMP mission to provide much higher accuracy. The line-of-sight (LOS) acceleration difference between the GRACE satellite pair is the most frequently used observable for mapping the global gravity field of the Earth in terms of spherical harmonic coefficients.
The following relationship is valid for each evaluation point:
$$\mathbf{e}_{12}\cdot\left(\ddot{\mathbf{x}}_{2}-\ddot{\mathbf{x}}_{1}\right)=\ddot{\rho}-\frac{1}{\rho}\left(\left|\dot{\mathbf{x}}_{2}-\dot{\mathbf{x}}_{1}\right|^{2}-\dot{\rho}^{2}\right)\qquad(1)$$

where $\mathbf{e}_{12}=(\mathbf{x}_{2}-\mathbf{x}_{1})/\rho$ is the LOS unit vector.
The GRACE ranging system provides the inter-satellite range $\rho$ and its first time derivative $\dot{\rho}$ as the LL-SST observations, and the GPS receivers mounted on the GRACE satellites provide the position vectors $\mathbf{x}_{1}$ and $\mathbf{x}_{2}$ as the HL-SST observations. The inter-satellite range acceleration $\ddot{\rho}$ and the satellite accelerations $\ddot{\mathbf{x}}_{i}$ are obtained by numerical differentiation of $\dot{\rho}$ and the velocities $\dot{\mathbf{x}}_{i}$, respectively.
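The numerical differentiation step can be illustrated as follows; the sampling interval and the toy range-rate signal are assumptions for the example, not GRACE values.

```python
import numpy as np

# Hypothetical 5-second sampling of the K-band range rate (m/s).
dt = 5.0
t = np.arange(0.0, 600.0, dt)
range_rate = 1e-3 * np.sin(2 * np.pi * t / 300.0)   # toy signal

# Central-difference differentiation yields the range acceleration
# that enters the left-hand side of the LOS observation equation.
range_accel = np.gradient(range_rate, dt)

# Analytic derivative of the toy signal, for comparison.
expected = 1e-3 * (2 * np.pi / 300.0) * np.cos(2 * np.pi * t / 300.0)
```

For real data, noise amplification by differentiation is the main concern, so smoothing differentiators are usually preferred over plain central differences.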
In the absence of non-gravitational forces, the left-hand side of Eq. (1) can be considered as the LOS gravitational acceleration differences,
$$\mathbf{e}_{12}\cdot\left(\mathbf{g}_{2}-\mathbf{g}_{1}\right)=\ddot{\rho}-\frac{1}{\rho}\left(\left|\dot{\mathbf{x}}_{2}-\dot{\mathbf{x}}_{1}\right|^{2}-\dot{\rho}^{2}\right)\qquad(2)$$
A sequence of observations at $m$ evaluation points sets up a system of $m$ linear equations. In this paper, the corresponding linear system of equations has been set up for spherical harmonics up to degree and order 120. The total number of unknowns, $u$, is
$$u=\sum_{l=2}^{120}\left(2l+1\right)=121^{2}-4=14\,637\qquad(3)$$
Such a linear system can be solved with iterative or direct solvers. However, the runtime of direct methods, or of iterative solvers without a suitable preconditioner, increases tremendously with problem size. This is why a more sophisticated method is needed to solve linear systems with such a large number of unknowns.
The multiplicative variant of the Schwarz alternating algorithm is a domain decomposition method that splits the normal matrix of the system into several smaller, overlapping submatrices. In each iteration step, the multiplicative Schwarz algorithm successively solves the linear systems associated with the matrices obtained from the splitting. It reduces both runtime and memory requirements drastically. An MSAA example with two submatrices is shown in Fig. 1.
Figure 1. MSAA example with two submatrices.
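A minimal sketch of the multiplicative Schwarz iteration on a small stand-in system may make the procedure concrete. The matrix, right-hand side and subdomain split below are invented for the example and are far smaller than the 14,000-unknown normal system of the paper.

```python
import numpy as np

def multiplicative_schwarz(A, b, subdomains, iters=50):
    """Multiplicative Schwarz iteration with overlapping index blocks.

    Each sweep solves the restricted system on every subdomain in
    turn, using the freshest residual (hence 'multiplicative').
    """
    x = np.zeros_like(b)
    for _ in range(iters):
        for idx in subdomains:
            r = b - A @ x                        # current residual
            x[idx] += np.linalg.solve(A[np.ix_(idx, idx)], r[idx])
    return x

# Small SPD test system standing in for the normal matrix.
n = 20
A = 4 * np.eye(n) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
b = np.ones(n)

# Two overlapping subdomains, as in Figure 1.
sub1 = np.arange(0, 12)
sub2 = np.arange(8, 20)
x = multiplicative_schwarz(A, b, [sub1, sub2])
```

The memory saving comes from only ever factorizing the small diagonal blocks `A[sub, sub]`, never the full matrix.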
This method dates back to H. A. Schwarz's work, published in 1870, and has been investigated by many authors since then. In this paper we propose the Multiplicative Schwarz Alternating Algorithm (MSAA) for solving the large linear system of gravity field recovery. The proposed algorithm has been applied in a closed-loop simulation to the International Association of Geodesy (IAG)-simulated data of the GRACE mission. The results indicate the validity and efficiency of the proposed algorithm in terms of both accuracy and runtime.
North Western Qom, Iran, Aquifer Characterization by SNMR
Bahman Abbassi, Mohammad Kazem Hafizi

The objective of this research is to demonstrate the applicability of Surface Nuclear Magnetic Resonance (SNMR) to aquifer characterization, based on SNMR and Electrical Resistivity Tomography data. SNMR is the only geophysical method that directly detects water. Two important parameters, porosity and permeability, are obtained through inversion of the initial amplitude and decay time constant versus pulse moments. Usually, electrical methods are applied concurrently with SNMR surveys, making it easier to constrain the SNMR inversion. A case study is introduced to illustrate the efficiency of SNMR for aquifer characterization.
Introduction: Surface Nuclear Magnetic Resonance (SNMR) is a relatively new geophysical method developed for shallow investigations of aquifers. In contrast to other geophysical methods, SNMR is selective to water. Therefore, hydraulic properties of the medium are obtainable through SNMR investigations. SNMR surveys are still expensive relative to classical electrical methods. To reduce the cost of a survey, it is better to first perform a sufficient electrical tomography of the region, and then a few SNMR soundings to acquire the aquifer properties.
Basically, in SNMR the response of excited protons is measured; its amplitude is linked to the water content (directly related to the saturated effective porosity) and its relaxation times to the pore-size distribution (linked to permeability). Hydrogen nuclei have a magnetic moment and, when undisturbed, precess about the ambient geomagnetic field vector (Legchenko et al., 2002). The resonance frequency $f_{L}$ is known as the Larmor frequency and is proportional to the strength of the Earth's magnetic field $B_{0}$:
$$f_{L}=\frac{\gamma B_{0}}{2\pi}\qquad(1)$$
where $\gamma$ is the gyromagnetic ratio for hydrogen nuclei. An SNMR measurement is made by disturbing the protons with a secondary magnetic field $B_{1}$ transmitted at the resonant frequency, which causes the proton spin magnetization vector $\mathbf{M}$ to tip away from its equilibrium and precess about $\mathbf{B}_{0}$ at the resonant frequency. In SNMR, $B_{1}$ is applied as an alternating magnetic field with a circular or square loop acting as the transmitter at the surface. The loop is energized by a pulse of alternating current $I(t)$:
$$I(t)=I_{0}\cos(2\pi f_{L}t),\qquad 0\le t\le\tau\qquad(2)$$
The degree to which $\mathbf{M}$ is tipped depends on the transmitter pulse moment $q=I_{0}\tau$, the product of the transmitter current amplitude and the duration $\tau$ of the excitation pulse. The voltage induced in the receiver loop after transmitting a certain pulse moment $q$ is fitted with a function of the form:
$$e(t,q)=E_{0}(q)\,e^{-t/T_{2}^{*}(q)}\cos(2\pi f_{L}t+\varphi_{0})\qquad(3)$$
where $E_{0}$ is the initial signal amplitude; $T_{2}^{*}$ the decay time constant; $f_{L}$ the Larmor frequency of Eq. (1); and $\varphi_{0}$ the phase shift between the signal and the excitation current. For a 1D distribution of the subsurface, the initial amplitude $E_{0}$ is a function of $q$:
$$E_{0}(q)=\int_{0}^{\infty}K(q,z)\,w(z)\,dz\qquad(4)$$
The water content distribution $w(z)$ is related to the sounding curve $E_{0}(q)$. The magnetic induction field of the excitation enters the kernel function $K(q,z)$, which is linked to the loop configuration. The excitation field can be varied by changing the pulse moment $q$; an increase of $q$ leads to an SNMR response from deeper regions.
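A discretized sketch of Eq. (4) may help. The kernel matrix, depths and pulse moments below are invented stand-ins, since the true kernel depends on the loop configuration and field geometry; the point is only the linear structure of the inverse problem.

```python
import numpy as np

# Discretized form of Eq. (4): the sounding curve is e0 = K_mat @ w,
# where w holds the water content of each depth layer.  The kernel
# below is a made-up smooth weighting, standing in for K(q, z).
n_q, n_z = 16, 10
pulse_moments = np.linspace(0.1, 10.0, n_q)     # A*s, assumed values
depths = np.linspace(5.0, 100.0, n_z)           # m, assumed layers

K_mat = np.exp(-np.abs(np.log(pulse_moments[:, None])
                       - np.log(depths[None, :] / 10.0)))

w_true = np.zeros(n_z)
w_true[3:6] = 0.25            # one saturated layer, 25% water content
e0 = K_mat @ w_true           # synthetic sounding curve E0(q)

# Least-squares estimate of the water-content profile; real SNMR
# inversions add regularization and positivity constraints.
w_est, *_ = np.linalg.lstsq(K_mat, e0, rcond=None)
```

The growth of kernel weight toward depth with increasing pulse moment is what makes the sounding curve depth-resolving, as the paragraph above notes.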
The inverse problem of obtaining the one-dimensional water content distribution can be solved in different ways, which are discussed in several SNMR publications. The sounding curve $E_{0}(q)$ is obtained by fitting and extrapolating the envelopes of the response signals at the several pulse moments. Commonly, a non-linear least-squares algorithm estimates the signal parameters $E_{0}$, $T_{2}^{*}$ and $\varphi_{0}$. The two parameters obtained from the geophysical inversion are the free water content, which equals the porosity in a saturated aquifer, determined from $E_{0}$, and the permeability (K).
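The envelope-fitting step can be sketched with a standard non-linear least-squares routine; the Larmor frequency and signal parameters below are assumed values chosen only for the illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

F_LARMOR = 2000.0    # Hz, an assumed local Larmor frequency

def snmr_signal(t, e0, t2star, phi0):
    """Damped oscillation of Eq. (3) at the (fixed) Larmor frequency."""
    return e0 * np.exp(-t / t2star) * np.cos(2 * np.pi * F_LARMOR * t + phi0)

# Synthetic noise-free record: E0 = 200 nV, T2* = 150 ms, phase 0.3 rad.
t = np.arange(0.0, 0.4, 1e-4)
record = snmr_signal(t, 200.0, 0.150, 0.3)

# Non-linear least squares recovers the signal parameters E0, T2*, phi0.
popt, _ = curve_fit(snmr_signal, t, record, p0=[150.0, 0.1, 0.0])
e0_fit, t2_fit, phi_fit = popt
```

Repeating this fit for every pulse moment $q$ yields the sounding curve $E_{0}(q)$ that feeds the water-content inversion.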
Discussion and Conclusion: The study area is located north-west of Qom, Iran. Thirteen Vertical Electrical Soundings along three profiles and three Magnetic Resonance Soundings were performed over the recent river deposits of the area. The studies revealed a fractured aquifer comprising a low-permeability shallow reservoir with high porosity and a deep, fractured, permeable reservoir with low porosity.
Fractured aquifers, owing to the poor sorting of their sediments, usually have low porosities and high permeabilities. The electrical tomography also shows a distinct fractured pattern in a low-porosity, highly permeable zone with low resistivity values, an indication of clay accumulation in the crush zone.
Joint application of SNMR and electrical resistivity methods is important for the direct characterization of aquifer parameters, including porosity, permeability and electrical resistivity. It also makes it possible to study the relationship between electrical resistivity and the porosity and permeability of the aquifer. In this study, qualitative interpretation shows an inverse relationship between electrical resistivity and the porosity of the aquifer on the one hand, and its permeability on the other. Another promising approach, currently under study, is to explore these relationships quantitatively, for example with nature-inspired algorithms such as neural network estimators.
Introduction: Surface Nuclear Magnetic Resonance (SNMR) is a relatively new geophysical method developed for shallow investigation of aquifers. Compared with other geophysical methods, SNMR is water-selective; therefore, hydraulic properties of the medium are obtainable through SNMR investigations. Relative to classical electrical methods, SNMR is still expensive. To reduce the cost of a survey, it is preferable to first perform sufficient electrical tomography in the region and then a few SNMR soundings to acquire the aquifer properties.
Basically, in SNMR the response of excited protons is measured; its amplitude is linked to the water content (directly related to the saturated effective porosity) and its relaxation time to the pore size distribution (linked to permeability). Hydrogen nuclei have a magnetic moment and, when undisturbed, precess about the ambient geomagnetic field vector (Legchenko et al., 2002). The resonance frequency f_L, known as the Larmor frequency, is proportional to the strength of the Earth's magnetic field B_0:
f_L = (γ / 2π) B_0                                                    (1)
where γ is the gyromagnetic ratio of the hydrogen nucleus. An SNMR measurement is made by disturbing the protons with a secondary magnetic field B_1 transmitted at the resonance frequency f_L, which causes the proton spin magnetization vector M to tip away from its equilibrium and rotate about B_0 at the resonance frequency. In SNMR, B_1 is applied as an alternating magnetic field using a circular or square loop as a transmitter at the surface. The loop is energized by a pulse of alternating current i(t):
i(t) = I_0 cos(2π f_L t),   0 ≤ t ≤ τ_p                               (2)
The degree to which M is tipped depends on the transmitter pulse moment q = I_0 τ_p, the product of the transmitter current amplitude and the duration of the excitation pulse. The voltage induced in the receiver loop after transmitting a pulse moment q is fitted with a function of the form:
e(t, q) = E_0(q) exp(−t / T_2*(q)) cos(2π f_L t + φ(q))               (3)
where E_0 is the initial signal amplitude, T_2* is the decay time constant, f_L is the Larmor frequency of equation (1), and φ is the phase shift between the signal and the excitation current. For a 1D distribution of the subsurface, the initial amplitude is a function of q:
E_0(q) = ∫ K(q, z) w(z) dz                                            (4)
The water content distribution w(z) is related to the sounding curve E_0(q). The magnetic induction field of the excitation enters the kernel function K(q, z), which depends on the loop configuration. The excitation field can be varied by changing the pulse moment q; increasing q yields an SNMR response from greater depths.
The inverse problem of obtaining the one-dimensional water content distribution can be solved in different ways, which are discussed in several SNMR publications. The sounding curve E_0(q) is obtained after fitting and extrapolating the envelopes of the response signals at the several pulse moments; commonly, a non-linear least squares algorithm estimates the signal parameters E_0, T_2* and φ. The two parameters obtained from the geophysical inversion are the free water content, which equals the porosity in a saturated aquifer, determined from E_0, and the permeability K.
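The envelope fitting described above can be illustrated with a minimal sketch on synthetic data: fit E_0 and T_2* of the demodulated decay envelope of equation (3) by non-linear least squares. The signal values here are invented for illustration; actual SNMR processing codes are more elaborate.

```python
import numpy as np
from scipy.optimize import curve_fit

# Envelope of the SNMR response for one pulse moment q
# (equation 3 after demodulation): e(t) = E0 * exp(-t / T2star).
def envelope(t, e0, t2star):
    return e0 * np.exp(-t / t2star)

rng = np.random.default_rng(0)
t = np.linspace(0.0, 0.4, 200)          # record times, s
e0_true, t2_true = 180.0, 0.150         # synthetic amplitude (nV) and decay (s)
signal = envelope(t, e0_true, t2_true) + rng.normal(0.0, 2.0, t.size)

# Non-linear least squares fit of the noisy envelope.
(e0_fit, t2_fit), _ = curve_fit(envelope, t, signal, p0=(100.0, 0.1))
```

Repeating such a fit for every pulse moment q yields the sounding curve E_0(q) that enters the inversion of equation (4).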
Discussion and Conclusion: The study area is located north-west of Qom, Iran. Thirteen Vertical Electrical Soundings along three profiles and three Magnetic Resonance Soundings were performed over the recent river deposits of the area. The studies revealed a fractured aquifer containing a low-permeability shallow reservoir with high porosity and a deep, fractured, permeable one with low porosity.
Fractured aquifers, due to the poor sorting of their sediments, usually have low porosities and high permeabilities. The electrical tomography also shows a distinct fracture pattern in a low-porosity, highly permeable zone with low resistivity, which is an indication of clay accumulation in the crush zone.
Joint application of SNMR and electrical resistivity methods is important in the direct characterization of aquifer parameters including porosity, permeability and electrical resistivity, and it makes it possible to study the relationship between the electrical resistivity and the porosity/permeability of the aquifer. In this study, qualitative interpretation shows an inverse relationship between the electrical resistivity of the aquifer and its porosity on the one hand, and its permeability on the other. Another approach, currently under study, is to explore these complex relationships quantitatively, for example with nature-inspired algorithms such as neural network estimators.

Institute of Geophysics, University of Tehran. Journal of the Earth and Space Physics, ISSN 2538-371X, Vol. 37, No. 1, 2011-04-21. Multiple suppression in CMP data using parabolic Radon transform (article 22638). Vida Minaeian, Abdolrahim Javaherian, Abolfazl Moslemi. Journal Article.

Reflection seismic data consist of primary reflections and coherent and incoherent noise. One of the objectives of seismic data processing is to enhance the quality of the real signals by attenuating the different kinds of noise. Multiples constitute one of the most troublesome forms of coherent noise in exploration seismology. Multiple reflections often interfere destructively with the desired primary reflections, so identification and interpretation of the primary events become difficult. Hence, the problem of multiple attenuation in reflection seismograms has always been of great importance. The Radon transform, which integrates the data along different curved surfaces, is a robust tool for suppressing multiples in seismic data. Like all transform filter pairs, the Radon transform first forward-transforms the data into a model parameter domain in which the crossing primaries and multiples are better separated.
In the most common multiple attenuation process, multiples are windowed in the transform domain and reconstructed in the original domain using an inverse Radon transform. Then, the modeled multiples are subtracted from the original data to obtain a gather with primaries only.
Based on the form of the integration surface, there are three types of Radon transform: linear, hyperbolic and parabolic. The parabolic Radon transform is a common tool for multiple attenuation based on velocity discrimination. In this method, the first step is to replace the hyperbolic events in a CMP gather with parabolas by applying an NMO correction using the velocities of the primaries. The parabolic Radon domain is then generated by summing the data along a set of parabolic paths, parameterized by a curvature q, which intersect the zero-offset axis at the intercept time τ; this procedure is repeated for each intercept time sample. Ideally, an approximately parabolic event maps into a point in the parabolic Radon domain, so primaries and multiples become separable in the new domain. Because the NMO correction uses the velocities of the primary events, the energy of the primaries maps to events of around 0 ms moveout in the transform domain, while the under-corrected multiples map to larger moveouts. To attenuate the multiples, a model containing only the multiple events is produced by muting the primary energy in the Radon domain and inverse-transforming the remaining part, which contains the multiples, back to the offset domain. In the final step the multiples-only gather is subtracted from the original data.
The parabolic Radon transform has several advantages that make it attractive. It achieves multiple attenuation equally at all offsets. Moreover, it does not require knowing the exact velocities of the multiples and primaries, nor any knowledge of the multiple-generation mechanism. The most important limitation of the method is that the multiples must have sufficient moveout discrimination in order to be attenuated. Experience has shown that while very fine discrimination may be modeled in synthetic data, in real data, with variable amplitudes and waveforms and additive noise, at least 30 ms of moveout is required for the transform to be effective.
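The forward step described above, summing along the paths t = τ + q·x² after NMO correction, can be sketched as a brute-force loop. This is an illustrative toy on a synthetic spike gather, not the paper's MATLAB implementation (practical codes work in the frequency domain with least-squares inversion).

```python
import numpy as np

def forward_parabolic_radon(d, t, x, q):
    """Brute-force forward parabolic Radon: sum d(t, x) along t = tau + q*x^2.

    d: gather of shape (nt, nx); t: times (s); x: offsets (m);
    q: curvatures (s/m^2). Returns the model m(tau, q) of shape (nt, nq).
    """
    nt, nx = d.shape
    dt = t[1] - t[0]
    m = np.zeros((nt, len(q)))
    for iq, qv in enumerate(q):
        for ix, xv in enumerate(x):
            shift = qv * xv * xv                         # parabolic moveout
            idx = np.round((t + shift - t[0]) / dt).astype(int)
            ok = idx < nt                                # stay inside the gather
            m[ok, iq] += d[idx[ok], ix]
    return m

# Synthetic gather: one parabolic event with tau0 = 0.4 s, q0 = 4e-7 s/m^2.
t = np.arange(0.0, 1.0, 0.004)
x = np.arange(0.0, 1000.0, 50.0)
q = np.linspace(0.0, 1e-6, 21)
d = np.zeros((t.size, x.size))
for ix, xv in enumerate(x):
    d[int(round((0.4 + 4e-7 * xv * xv) / 0.004)), ix] = 1.0

m = forward_parabolic_radon(d, t, x, q)
itau, iq = np.unravel_index(np.argmax(m), m.shape)   # energy focuses at (tau0, q0)
```

The event focuses at a single (τ, q) point, which is what makes muting primaries near q = 0 and keeping the higher-moveout multiples possible.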
In this paper, the parabolic Radon transform and its application to multiple suppression have been studied and implemented in MATLAB. The code was successfully applied to different synthetic 2D models containing various multiple reflections, such as water-bottom, simple and interbed multiples, that interfere with primary reflections at near or far offsets. The code was also tested on a 2D real seismic data set. The program was validated by comparing the results obtained with those of the Geocluster software.
Institute of Geophysics, University of Tehran. Journal of the Earth and Space Physics, ISSN 2538-371X, Vol. 37, No. 1, 2011-04-21. Comparison between Airborne Geophysical and ASTER Data for Hydrothermal Alteration Mapping for Exploration of Copper Mineralization (article 22639). Feizollah Masoumi, Hojatollah Ranjbar. Journal Article.

The study area covers the northern part of the Baft geological map (scale 1:100 000). Several porphyry- and vein-type mineralizations are reported from this area. A topic discussed in the mineral exploration community is the use of remote sensing versus airborne geophysics for porphyry-type mineralization: which is more reliable and efficient for hydrothermal alteration mapping? Airborne geophysical data and Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) images of this area were analyzed and compared for hydrothermal alteration mapping. The ASTER data were analyzed using the shortwave infrared (SWIR) bands by applying principal component analysis (PCA) and band ratioing in order to enhance the altered areas. Band ratios such as 4/9, 7/9 and 7/6 were used for hydrothermal alteration enhancement. After applying PCA, principal component 3 could enhance the hydrothermal alteration.
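The band ratioing and PCA enhancement steps can be sketched as follows. The band arrays are random stand-ins (the study's actual ASTER scene is not reproduced), so this only shows the mechanics of the two techniques.

```python
import numpy as np

# Hypothetical ASTER SWIR bands (4, 6, 7, 9) as synthetic reflectance images.
rng = np.random.default_rng(1)
rows, cols = 50, 50
bands = {b: rng.uniform(0.1, 1.0, (rows, cols)) for b in (4, 6, 7, 9)}

# Band ratios used in the study to enhance alteration minerals.
ratio_4_9 = bands[4] / bands[9]
ratio_7_9 = bands[7] / bands[9]
ratio_7_6 = bands[7] / bands[6]

# PCA: treat each pixel as a 4-band vector, eigen-decompose the covariance.
X = np.stack([bands[b].ravel() for b in (4, 6, 7, 9)], axis=1)
Xc = X - X.mean(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(Xc, rowvar=False))  # ascending
order = np.argsort(eigvals)[::-1]                            # descending variance
pcs = Xc @ eigvecs[:, order]                                 # PC images as columns
pc3 = pcs[:, 2].reshape(rows, cols)   # PC3 is the component the study highlighted
```

Which principal component concentrates the alteration signal depends on the sign and magnitude of the band loadings, so in practice the eigenvector matrix is inspected before choosing a component such as PC3.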
The airborne geophysical data were analyzed by applying principal component analysis and ratioing techniques. The higher K radiometric values, as expected, are not entirely associated with hydrothermal alteration; there are anomalous values associated with lithologies rich in K-bearing feldspars. The overall evaluation of the satellite and geophysical data shows that, in this area, the ASTER data are more accurate for hydrothermal alteration mapping than the geophysical data. Nevertheless, it should be taken into consideration that the geophysical data can detect both surface and subsurface anomalies. The combined use of both data sets is recommended for hydrothermal alteration mapping.

Institute of Geophysics, University of Tehran. Journal of the Earth and Space Physics, ISSN 2538-371X, Vol. 37, No. 1, 2011-04-21. Applying optically stimulated luminescence to determine the slip rate of part of the Baatar Hyarhan fault in Mongolia (Central Asia) (article 22640). Hamide Amini, Morteza Fattahi. Journal Article.

Optically Stimulated Luminescence (OSL) is currently one of the most important dating methods; it dates the last exposure of sediment to sunlight. Natural-hazard evidence such as colluvial wedges and alluvial and fluvial sediments can be dated by OSL. Therefore, OSL plays an important role in studies of paleoseismology and tectonic activity, particularly in arid and semi-arid regions. The Altai Mountains in western Mongolia are an arid zone. Baatar Hyarhan, a thrust-bounded massif, is situated in the south-eastern part of the Altai. According to the geomorphology and seismicity of Mongolia, the Altai is active, and its activity is a response to the convergence between the Eurasian and Indian plates. Therefore, slip rate estimation is essential for investigating the activity of this mountain range. There are basins on both margins of Baatar Hyarhan, and geomorphic markers indicate slow propagation of Baatar Hyarhan into these basins. Faulting has uplifted ridges of folded sediment, known locally as forebergs, close to the range front. In this article, the slip rate of Baatar Hyarhan is calculated.
The eastern Zereg Basin, the north-east of Baatar Hyarhan and the south-west Baatar Hyarhan forebergs are the three areas considered for sampling. Scarp heights were estimated using differential GPS (Nissen et al., 2009). Optically Stimulated Luminescence (OSL) dating was used to estimate the deposition ages in these three areas. The equivalent dose (De) was measured with the Analyst program and the histogram method. The age at which the sediment was last exposed to light is determined by dividing the amount of radiation required to produce the natural luminescence (the equivalent dose, De) by the dose rate. The vertical and horizontal displacement rates, determined by dividing the average offset by the age of each sample, are 0.07-0.53 mm/yr and 0.03-0.44 mm/yr, respectively. A slip rate of 0.10-0.69 mm/yr was calculated by employing the shortening rates and the approximate slope of each area.
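The age and rate arithmetic described above is simple enough to show directly. The numbers below are invented for illustration only; they are not the study's measured De, dose-rate or offset values.

```python
# Hypothetical values chosen only to illustrate the OSL rate calculation.
de_gy = 14.2             # equivalent dose De (Gy)
dose_rate = 3.55         # environmental dose rate (Gy/kyr)

age_kyr = de_gy / dose_rate              # burial age = De / dose rate -> 4.0 kyr
vertical_offset_m = 1.2                  # averaged scarp height (m)

# Displacement rate = offset / age, converted to mm/yr.
rate_mm_per_yr = vertical_offset_m * 1000.0 / (age_kyr * 1000.0)   # 0.3 mm/yr
```

With these illustrative inputs the vertical rate falls inside the 0.07-0.53 mm/yr range the study reports for its samples.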
Institute of Geophysics, University of Tehran. Journal of the Earth and Space Physics, ISSN 2538-371X, Vol. 37, No. 1, 2011-04-21. Estimation of reservoir rock porosity using linear ensemble combination of single artificial neural networks based on analytical and genetic algorithm techniques (article 22641). Mahmood Zakeri, Abolghasem Kamkar-Rouhani. Journal Article.

Porosity is one of the most important properties for comprehensive studies of hydrocarbon reservoirs. For determining the porosity of a rock, which is the ratio of the volume of voids to the total volume of the rock, there are two conventional methods: in the first, porosity is measured directly by testing drill cores; in the second, porosity is determined indirectly using well logging data and the relevant mathematical relations.
Both of the above methods have limitations and difficulties. Using artificial neural networks (ANNs) for this purpose can reduce these difficulties remarkably and also gives acceptable results. Solving a problem with ANNs is a three-step procedure: training, generalization and operation. In the training step, the network learns the patterns that exist in the inputs and the relation between the inputs and the outputs of the training set. Generalization is the ability of the network to give acceptable responses for inputs that were not included in the training set (unseen patterns). Operation is the use of the network on the target problem. Obviously the network used in the operation step must be well trained and have good generalization performance. One of the difficulties that may occur after a network is trained is overfitting, which amounts to poor generalization performance. If the network is trained to a desired amount of error reduction on the training patterns, or for a fixed number of epochs, without overfitting occurring, the training is called overtraining. In the ANN approach, a number of networks are trained and evaluated using a suitable performance criterion, for example the mean square error (MSE), and the best network is selected on this basis. Although selecting the best single neural network yields the best individual model, it discards the information contained in the other networks. The drawback is that the generalization performance of the best selected network on unseen patterns is limited and, moreover, estimation error is common. If we accept that complete, 100 % generalization over all possible test patterns is impossible, we have a convincing reason to search for methods of improving ANN performance.
For this purpose, combining trained networks by suitable methods has been proposed, because this can integrate the information of the component networks and thus improve the accuracy of the results and the generalization performance of the combination relative to the best selected network. By combining single neural networks, multiple-network systems, also called committee machines (CM), are generated to achieve better results for problems that a single network cannot solve, or solves less effectively. An ensemble combination of ANNs is a type of CM with a parallel structure, in which each component network independently provides a solution to the target problem and the individual solutions are then combined in a proper manner. In function estimation problems, ensemble combinations can be made linearly or nonlinearly. In this research, linear ensemble combination of single artificial neural networks was applied to estimate the effective porosity of the Kangan gas reservoir rock in the giant Southern Pars hydrocarbon field. From the viewpoint of structural geology, the Kangan gas deposit is an asymmetric anticline with a northwest-southeast trend whose southeastern limb is overturned. The formation consists of dolomite, limestone, dolomitic limestone and thin layers of shale. Well logging data acquired from 4 wells in the area, at the depth interval corresponding to the Kangan formation, were used: 215 selected patterns from wells SP1, SP3 and SP13 were used for training the networks, and 89 selected patterns from well SP6 were used for testing their generalization performance. In each pattern, the acoustic, density, gamma ray and neutron porosity well log data were the inputs of the networks, and the effective porosity was the output.
First, back-propagation single neural networks with different structures (90 structures in total) were trained using the overtraining method. Then the 7 networks with the best results, i.e. the minimum MSE in the test step, were selected for making ensemble combinations. 120 linear ensemble combinations of these 7 networks (21 two-fold, 35 three-fold, 35 four-fold, 21 five-fold, 7 six-fold and 1 seven-fold combination) were constructed using analytical methods, including simple averaging and four of Hashem's optimal linear combination (OLC) methods: unconstrained MSE-OLC with a constant term, constrained MSE-OLC with a constant term, unconstrained MSE-OLC without a constant term and constrained MSE-OLC without a constant term. In Hashem's methods, the coefficients of the networks in the MSE-OLC are computed through a set of matrix operations. The best combination produced by the above 5 analytical methods was then selected from each of the two-fold through seven-fold combination sets (i.e. the combination with the minimum MSE in the test step). For these 6 selected combinations, the MSE-OLC coefficients were also computed using a genetic algorithm (GA). The best analytical ensemble combination, the one with the maximum reduction of the test-step MSE relative to the best single neural network, was a three-fold unconstrained MSE-OLC without a constant term; compared with the best single neural network, it decreased the MSE in the training and test steps by 6.3 % and 4.9 %, respectively. Nevertheless, the best ensemble combination among all the combinations was a six-fold OLC obtained with the GA optimization method.
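The matrix operations behind an unconstrained MSE-OLC without a constant term reduce to a least-squares problem: find the weights that minimize the MSE of the weighted sum of the member predictions. The sketch below uses synthetic member outputs as stand-ins for trained ANNs; it illustrates the technique, not the paper's actual networks.

```python
import numpy as np

# Synthetic targets and three "ensemble member" predictions (truth + bias + noise).
rng = np.random.default_rng(2)
n, k = 200, 3
t = rng.uniform(0.05, 0.25, n)           # effective-porosity-like targets
Y = np.stack([t + b + rng.normal(0.0, s, n)
              for b, s in [(0.01, 0.02), (-0.02, 0.03), (0.00, 0.01)]], axis=1)

# Unconstrained MSE-OLC without constant term: minimize ||Y a - t||^2 over a.
a, *_ = np.linalg.lstsq(Y, t, rcond=None)

combo = Y @ a
mse_combo = np.mean((combo - t) ** 2)
mse_best_single = min(np.mean((Y[:, j] - t) ** 2) for j in range(k))
```

Because the weight vector that picks out any single member is a feasible solution, the fitted combination's MSE on the fitting data can never exceed that of the best single member; whether the gain carries over to the test step is exactly what the study evaluated.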
This best ensemble combination, compared to the best single neural network, reduced the MSE in the training and test steps 14.4% and 12.5%, respectively. Generally, in the all cases that were investigated, OLC using GA yielded better results as it caused more reduction in MSE of the test step compared to analytical combinations. However, OLCs using Hashem's methods compared to other combinations generally contained more reductions in MSE of the training step.Porosity is one of the most important properties for comprehensive studies of hydrocarbon reservoirs. For determination of porosity in a rock, that is the ratio of volume of voids to the total volume of the rock, there are two conventional methods: In the first method, direct measurement of porosity is carried out by testing drilling cores. In the second method, porosity is determined indirectly using well logging data and relevant mathematical relations or equations. There are some limitations and difficulties for determination of porosity using both the above methods. Using the artificial neural networks (ANNs) method for this purpose can reduce these difficulties remarkably, and also, contains acceptable results. Solving any problem using ANNs needs a three-step procedure: training, generalization and operation. In the training step, the network teaches the patterns that exist in the inputs and the relation between the inputs and the outputs of the training set. Generalization is the ability of the network to present acceptable responses for the inputs that have not been included in the training set (unseen patterns). Operation is the use of the network for the objective problem. Obviously the network, which is used in the operation step, must be well trained and have a suitable generalization performance. One of difficulties which may occur for a network after being trained, is overfitting that is the same as poor generalization performance. 
If conditions are so that the network is trained to a favorable amount of error reduction for training patterns or to a distinct number of epochs but overfitting does not occur, in this state the training is called overtraining. In the ANNs method, a number of networks are trained. These networks are evaluated using a suitable performance criterion, for example mean square errors (MSE), and based on this criterion, the best network is selected. Although selecting the best single neural network generates the best obtained pattern, it leads to loss of information existing in the other networks. There is the drawback that the generalization performance of the best selected network for unseen patterns is limited and more over, error in estimation is common. If we accept that for all possible test patterns, complete or 100 % generalization is impossible, we have a convincing reason to search for methods for improving the performance of ANNs. For this purpose, a combination of trained networks using suitable methods has been proposed because this work may lead to integrate the information of the networks of the components in the combination and thus to help the enhancement of the accuracy of the results and the generalization performance of the combination in comparison with the best selected network. Using a combination of single neural networks, multiple network systems which are also called committee machine (CM), are generated to access better results for problems that a network alone cannot solve or may be solved effectively using CM. Ensemble combination of ANNs is a type of CM having parallel structure in which any of its components or networks solely presents a solution for the objective problem, and then the solution results are combined in a proper manner. In function estimation problems, ensemble combinations can be made linearly or nonlinearly. 
In this research work, linear ensemble combination of single artificial neural networks was applied in order to estimate the effective porosity of the Kangan gas reservoir rock in the giant Southern Pars hydrocarbon field. From the view point of structural geology, the Kangan gas deposit is an asymmetric anticline with a northwest-southeast spread whose southeast side is turned. This geologic formation consists of dolomite, limestone, dolomitic lime and thin layers of shale. Well logging data acquired from 4 wells in the area at a depth interval corresponding to the Kangan formation were used. 215 selected patterns from wells SP1, SP3 and SP13 were used for training the networks and 89 selected patterns from well SP6 were used for testing the generalization performance of the networks. In each pattern, acoustic, density, gamma ray and neutron porosity well log data were considered as the inputs of the networks and the effective porosity data were assigned as the outputs of the networks. First, back propagation single neural networks having different structures (totally 90 structures) were trained using the overtraining method. Then, 7 networks which had the best results, i.e. containing minimum MSE in the test step, were selected for making ensemble combinations. 120 Linear ensemble combinations of these 7 networks (i.e. 21 two-fold combinations, 35 three-fold combinations, 35 four-fold combinations, 21 five-fold combinations, 7 six-fold combinations and 1 seven-fold combination) were constructed using analytical methods including simple averaging and four different Hashem’s optimal linear combination (OLC) methods, i.e. unconstrained MSE-OLC with a constant term, constrained MSE-OLC with a constant term, unconstrained MSE-OLC without a constant term and constrained MSE-OLC without a constant term. In Hashem's methods, coefficients of networks in MSE-OLC are computed by performing a set of matrix operations. 
The best combination produced by the five analytical methods above was then selected from each of the two-fold, three-fold, four-fold, five-fold, six-fold and seven-fold combination sets (i.e. the combination with the minimum MSE in the test step). For these 6 selected combinations, the MSE-OLC coefficients were also computed using a genetic algorithm (GA), in addition to the analytical methods. The best analytical ensemble combination, the one with the largest reduction in test-step MSE relative to the best single neural network, was a three-fold unconstrained MSE-OLC without a constant term; compared with the best single neural network, it decreased the MSE in the training and test steps by 6.3% and 4.9%, respectively. Nevertheless, the best ensemble combination overall was a six-fold OLC obtained with the GA optimization method, which reduced the MSE in the training and test steps by 14.4% and 12.5%, respectively, relative to the best single neural network. Generally, in all cases investigated, OLC using the GA yielded better results, producing a larger reduction in test-step MSE than the analytical combinations, whereas OLCs using Hashem's methods generally achieved larger reductions in training-step MSE than the other combinations.

Journal of the Earth and Space Physics (Institute of Geophysics, University of Tehran), ISSN 2538-371X, Vol. 37, No. 1, 2011-04-21.
Title: 1D and 2D interpretation of the Magnetotelluric (MT) data of northeast Gorgan plain
Article 22642 (FA). Authors: Isa Mansoori Kermanshahi, Behroz Oskooi. Journal Article.

A detailed standard Magnetotelluric (MT) study was conducted to recognize brine-bearing layers at depths of less than 2000 m in northeastern Iran, close to the southeastern shore of the Caspian Sea.
Long- and medium-period natural-field MT methods have proved very useful for subsurface mapping purposes by determining the resistivity of the near-surface structure.
MT data were analyzed and modeled using a 1D inversion scheme. The corresponding data on eight profiles were then inverted using 2D inversion schemes.
Down to 2 km, the resistivity model obtained from the MT data is consistent with the geological information from a 1200 m borehole in the area. Analysis of the MT data-set suggests signatures of salt-water reservoirs in the area that are potentially iodine-bearing. Despite the very conductive nature of the sediments, and the difficulties this causes at the interpretation stage owing to the lack of a substantial resistivity contrast, we could recognize the more conductive zones within the less conductive host as layers of saline water.
Conductive structures are ideal targets for the magnetotelluric method when located in a considerably resistive host, because they produce strong variations in underground electrical resistivity. In cases where the electrical resistivity of the target is not substantially different from that of the host, it is quite difficult to reach a promising result. Despite this limitation, we obtained some useful results in our study.
The Dashli-Boroon area is located in Golestan Province in the northeastern part of Iran, right at the border with Turkmenistan. Geologically it is part of the Kopeh-Dagh sedimentary basin, which was formed during the last Alpine orogenic phase and the erosion that followed. The topographic relief is very smooth; the area is basically a flat plain consisting of naturally occurring loess deposits between the Elburz mountain range and the desert of Turkmenistan. Quaternary sediments including clay, evaporites and particularly salt are impermeable.
An MT survey was carried out using GMS05 (Metronix, Germany) and MTU2000 (Uppsala University, Sweden) systems in February 2007. MT data were collected at 60 sites in a network of 2 by 2 km meshes along eight EW profiles.
For data processing, a code developed by Smirnov (2003) was used. 1D and 2D inversions were conducted to resolve the conductive structures: 1D inversion of the determinant (DET) data using the code of Pedersen (2004), and 2D inversion of the TE, TM, TE+TM and DET mode data using the code of Siripunvaraporn and Egbert (2000).
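The depth of investigation implied by these MT periods follows from the standard skin-depth approximation, delta ≈ 503·sqrt(rho/f) metres. The resistivity values below are illustrative only, not the survey's measured values; they simply show why very conductive sediments require long periods to reach the ~2 km target depth.

```python
import math

def skin_depth_m(resistivity_ohm_m, frequency_hz):
    # Standard MT skin-depth approximation: delta ~ 503 * sqrt(rho / f) metres
    return 503.0 * math.sqrt(resistivity_ohm_m / frequency_hz)

# Illustrative values (not the survey's resistivities): in 1 ohm-m sediments
# a 1 Hz signal penetrates roughly half a kilometre...
print(round(skin_depth_m(1.0, 1.0)))      # -> 503
# ...while reaching ~2 km requires periods of tens of seconds.
print(round(skin_depth_m(1.0, 0.0625)))   # -> 2012
```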
A supplementary goal of this work is to evaluate the possibility of using surface MT measurements on the very conductive sediments to monitor the underground salt-water-bearing layers or bodies. Our focus in the current paper, within the framework of one- and two-dimensional (1D and 2D) interpretation, is on the characteristics of the extremely conductive structures that are expected to bear iodine in economic quantities. Based on the MT results, some sites were proposed for detailed exploration by drilling deep exploration boreholes. The resulting resistivity sections show a clear picture of the resistivity changes, both laterally and with depth.

Journal of the Earth and Space Physics (Institute of Geophysics, University of Tehran), ISSN 2538-371X, Vol. 37, No. 1, 2011-04-21.
Title: Seismic wavelet estimation
Article 22643 (FA). Authors: Amin Roshandel Kahoo, Hamid Reza Siahkoohi. Journal Article.

Based on the convolutional model, a seismic trace is the convolution of the seismic source wavelet with the reflection coefficient series of the earth. Seismic source wavelet estimation is one of the most important stages in the processing and interpretation of seismic data. Accurate estimation of the wavelet increases the efficiency of deconvolution and the temporal resolution of seismic data. On the other hand, the most important stage of seismic data interpretation is the inversion of seismic data to seismic impedance. The quality of the inversion depends on the correlation of the synthetic and real seismic traces at the well position; the more accurately the source wavelet is estimated, the higher this correlation.
Different methods have been introduced for estimating the seismic source wavelet, such as homomorphic deconvolution, the least-squares method, the autoregressive method and the Hopfield neural network method.
In this paper, we exploited the frequency behavior of the reflection coefficient series and of the seismic source wavelet to attenuate the effect of the earth's reflection coefficient series on the seismic trace and thus estimate the seismic source wavelet. The amplitude spectrum of the reflection coefficient series behaves as a signal with high-frequency content, whereas the amplitude spectrum of the seismic source wavelet behaves as a signal with low-frequency content.
The amplitude spectrum of the trace is therefore the product of a high-frequency signal (the amplitude spectrum of the reflection coefficient series) and a low-frequency signal (the amplitude spectrum of the seismic wavelet). We can thus treat the amplitude spectrum of the reflection series as noise and the amplitude spectrum of the wavelet as signal.
Most denoising methods attenuate additive noise; in our case, the noise is multiplicative. We therefore applied the logarithm operator to convert the multiplicative noise into additive noise. The seismic wavelet can then be estimated by denoising the logarithm of the amplitude spectrum of the seismic trace.
In this paper, we used three different denoising methods, the discrete wavelet transform (DWT), empirical mode decomposition (EMD) and time-frequency peak filtering (TFPF), to denoise the logarithm of the amplitude spectrum of the seismic trace.
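The log-then-denoise procedure can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: a simple moving average stands in for the DWT/EMD/TFPF denoisers, and the Ricker wavelet and random reflectivity are synthetic.

```python
import numpy as np

def estimate_wavelet_spectrum(trace, smooth_len=31):
    """Estimate the wavelet amplitude spectrum from a single trace.

    The trace spectrum is the product of the wavelet spectrum (smooth)
    and the reflectivity spectrum (rough); taking the logarithm turns
    the product into a sum, and smoothing the log-spectrum keeps the
    slowly varying wavelet part. A moving average stands in here for
    the DWT / EMD / TFPF denoisers used in the paper.
    """
    amp = np.abs(np.fft.rfft(trace))
    log_amp = np.log(amp + 1e-12)          # multiplicative -> additive
    kernel = np.ones(smooth_len) / smooth_len
    smooth = np.convolve(log_amp, kernel, mode="same")
    return np.exp(smooth)                  # back to an amplitude spectrum

def ricker(f_peak, dt, n):
    # Synthetic Ricker source wavelet with peak frequency f_peak (Hz)
    t = (np.arange(n) - n // 2) * dt
    a = (np.pi * f_peak * t) ** 2
    return (1 - 2 * a) * np.exp(-a)

# Synthetic trace: random reflectivity convolved with a 30 Hz Ricker wavelet
dt = 0.002
refl = np.random.default_rng(1).normal(0.0, 1.0, 1024)
w = ricker(30.0, dt, 128)
trace = np.convolve(refl, w, mode="same")
est = estimate_wavelet_spectrum(trace)
```

The estimated spectrum `est` is smooth and positive; inverting it to a time-domain wavelet additionally requires a phase assumption (e.g. minimum or zero phase), which the abstract does not detail.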
The efficiency of the three above-mentioned denoising methods in estimating the seismic source wavelet was tested on both synthetic and real seismic data. The results show that all three methods estimate the seismic source wavelet accurately, and that the wavelets estimated by the EMD and TFPF methods are more accurate than those of the DWT method.

Journal of the Earth and Space Physics (Institute of Geophysics, University of Tehran), ISSN 2538-371X, Vol. 37, No. 1, 2011-04-21.
Title: Trend analysis of minimum, maximum, and mean daily temperature extremes in several climatic regions of Iran
Article 22644 (FA). Authors: Vahid Varshavian, Ali Khalili, Nozar Ghahreman, Sohrab Hajjam. Journal Article.

An increase, even a moderate one, in global temperature is expected to result in a change in the frequency of extreme weather events such as droughts, heavy rainfall and storms. The study of extreme events is difficult because long-term homogeneous data series are hard to find. Moreover, the delimitation of extreme events is not unequivocal, since a parameter value that would be defined as an extreme event in one place might still be considered a normal event in another. In this study, maximum, minimum and mean daily air temperature (Tmax, Tmin and Tmean) data over a 44-year period (1961-2004) were collected for four synoptic stations of Iran, namely Kerman, Kermanshah, Mashhad and Shiraz. These stations represent different climates of Iran based on the Köppen climatic classification. The required data were obtained from the Islamic Republic of Iran Meteorological Organization (IRIMO). The data were used to calculate extreme temperature values, including the magnitudes of the lower (1st, 5th, 10th) and upper (90th, 95th, 99th) percentile threshold values for each year, together with the number of days below the lower threshold values and above the upper threshold values. All time series were checked for normality with the Kolmogorov-Smirnov test. Time trends for all variables were analyzed using parametric and nonparametric techniques (least-squares linear regression and the Pearson, Spearman and Kendall's τ significance tests). Kerman (desert climate) showed a significant positive trend in all minimum temperature percentile threshold values except the 95th percentile, and in a number of the upper percentiles (90th and 95th). For maximum temperature, all percentiles and numbers of days were significant except the 1st percentile and the number of days above the 90th percentile. Results for the mean temperature trend were similar to those for the minimum temperature. Mashhad (temperate humid climate) showed a significant positive trend in all minimum temperature percentile threshold values and in the number of days below the 10th and above the 90th and 99th percentile threshold values.
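The yearly percentile indices described above can be computed as in the following sketch. The helper is hypothetical (not the authors' code) and the daily temperatures are synthetic; it returns, per year, the lower and upper percentile thresholds plus counts of days beyond the outermost lower/upper thresholds.

```python
import numpy as np

def yearly_extreme_indices(years, temps, lower=(1, 5, 10), upper=(90, 95, 99)):
    """Per-year percentile thresholds and exceedance counts.

    years : array of year labels, one per daily observation.
    temps : matching array of daily temperatures.
    """
    out = {}
    for year in np.unique(years):
        t = temps[years == year]
        lo = np.percentile(t, lower)      # 1st, 5th, 10th thresholds
        hi = np.percentile(t, upper)      # 90th, 95th, 99th thresholds
        out[int(year)] = {
            "lower_thresholds": lo,
            "upper_thresholds": hi,
            "days_below_10th": int((t < lo[-1]).sum()),
            "days_above_90th": int((t > hi[0]).sum()),
        }
    return out

# Synthetic daily temperatures for two years (illustrative only)
rng = np.random.default_rng(0)
years = np.repeat([2000, 2001], 365)
temps = rng.normal(20.0, 8.0, years.size)
indices = yearly_extreme_indices(years, temps)
```

The yearly series of thresholds and counts produced this way are then the inputs to the regression and rank-correlation trend tests named in the abstract.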
Maximum temperature showed a significant positive trend only in the upper thresholds and in all numbers of days except the number of days below the 1st and above the 95th percentile threshold values. For mean temperature, the results were again similar to those for minimum temperature.

Journal of the Earth and Space Physics (Institute of Geophysics, University of Tehran), ISSN 2538-371X, Vol. 37, No. 1, 2011-04-21.
Title: Numerical solution of incompressible Boussinesq equations using fourth-order compact scheme: Lock exchange flow
Article 22645 (FA). Authors: Sarmad Ghader (ORCID 0000-0001-9666-5493), Abozar Ghasemi, Mohammad Reza Banazadeh, Darush Mansoury. Journal Article.

In recent years, a growing number of research works have been devoted to applying highly accurate numerical schemes, in particular compact finite difference schemes, to the numerical simulation of complex flow fields with multi-scale structures. Compact finite-difference schemes are a simple and powerful way to reach the objectives of high accuracy and low computational cost. Compared with traditional explicit finite difference schemes of the same order, compact schemes have proved to be significantly more accurate, with the added benefit of smaller stencil sizes, which can be essential in treating non-periodic boundary conditions. Applications of some families of compact schemes to the spatial differencing of idealized models of the atmosphere and oceans show that compact finite difference schemes are promising methods for numerical simulation of atmosphere-ocean dynamics.
This work is devoted to the application of a fourth-order compact finite difference scheme to the numerical solution of gravity currents. The governing equations used to perform the numerical simulation are the two-dimensional incompressible Boussinesq equations. The two-dimensional lock-exchange flow configuration is used for the numerical simulation of the Boussinesq equations. The lock-exchange flow is a prototype problem that has been studied numerically and experimentally by many researchers.
For the spatial differencing of the governing equations, the second-order central and the fourth-order compact finite difference schemes are used. The predictor-corrector leapfrog scheme is used to advance the Boussinesq equations in time. The boundary condition formulation required to generate stable numerical solutions without degrading the global accuracy of the computations is also presented.
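To illustrate the accuracy gap between the two spatial discretizations, the sketch below implements the classic Padé-type fourth-order compact first derivative on a periodic grid and compares it with the explicit second-order central difference. This is a minimal demonstration under simplifying assumptions (periodic boundaries, a dense solve instead of a tridiagonal one); the paper's actual scheme and its non-periodic boundary treatment may differ.

```python
import numpy as np

def compact_dx_periodic(f, h):
    """Fourth-order compact first derivative on a periodic grid.

    Solves the classic Padé relation
        f'_{i-1} + 4 f'_i + f'_{i+1} = 3 (f_{i+1} - f_{i-1}) / h,
    a tridiagonal system (assembled densely here, for brevity) with
    periodic wrap-around.
    """
    n = f.size
    A = np.zeros((n, n))
    i = np.arange(n)
    A[i, i] = 4.0
    A[i, (i - 1) % n] = 1.0
    A[i, (i + 1) % n] = 1.0
    rhs = 3.0 * (np.roll(f, -1) - np.roll(f, 1)) / h
    return np.linalg.solve(A, rhs)

# Accuracy check against the explicit second-order central difference
n = 64
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
h = x[1] - x[0]
f = np.sin(x)
d_compact = compact_dx_periodic(f, h)
d_central = (np.roll(f, -1) - np.roll(f, 1)) / (2.0 * h)
err4 = np.max(np.abs(d_compact - np.cos(x)))   # fourth-order error
err2 = np.max(np.abs(d_central - np.cos(x)))   # second-order error
```

At this resolution the compact derivative of sin(x) is more than two orders of magnitude more accurate than the central difference on the same three-point stencil, which is precisely the advantage the abstract describes.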
The fourth-order compact scheme is compared in detail with the conventional second-order central finite difference method. Qualitative comparison of the results of the present work with published results for the planar lock-exchange flow indicates the validity and accuracy of the fourth-order compact scheme for the numerical solution of the two-dimensional incompressible Boussinesq equations.

Journal of the Earth and Space Physics (Institute of Geophysics, University of Tehran), ISSN 2538-371X, Vol. 37, No. 1, 2011-04-21.
Title: The evaluation of simulated discharge of coupled surface scheme and river routing in numerical weather prediction WRF (Case study Karoon river)
Article 22646 (FA). Authors: Mehran Khodamorad Pour, Parviz Irannejad, Sohrab Hajjam. Journal Article.

Land surface parameterization schemes are among the most important components of GCM and NWP models. These schemes calculate the exchanges of momentum, mass and energy between the land surface and the atmosphere. Runoff is one of the important components of the water cycle in a land surface scheme; its parameterization is difficult because of the complexity of the processes governing runoff generation and their strong dependence on time and space. A coarse-resolution land surface model cannot explicitly model the complexities of runoff generation within a catchment; instead, it aims to represent the major processes via subgrid-scale parameterizations. A popular solution involves the use of probability density functions to represent subgrid variability.
In this paper, the OSU land surface scheme in version 3 of the Weather Research and Forecasting (WRF) model is studied, in which runoff is parameterized through a probability density function of the maximum infiltration. Because river discharge is being studied, the land surface scheme (OSU) must be coupled with a river routing model; the Total Runoff Integrating Pathways (TRIP) model is used for this purpose. Only the treatment of runoff in the model was considered, hence some of the errors in the simulations could be the result of deficiencies in other parameterizations.
In this paper, the Karoon River basin is divided into three subbasins, Farsiat, Harmaleh and Soosan, located in the south, west and east of the Karoon respectively, using the ARCGIS and ARCHYDRO software.
The WRF model was run in one-way nesting mode, which requires two domains. The simulations were conducted for December 2005 with 5×5 km grid spacing over an inner domain of 106×115 grid points in latitude and longitude respectively, and with 15×15 km grid spacing over a parent domain of 67×69 grid points, centered on the Karoon at 50°E and 32°N. The initial and boundary conditions were derived from the GFS data. The modeled discharge (OSU-WRF) in the three subbasins was evaluated against daily observations using the TRIP river routing model coupled to OSU-WRF.
A daily comparison of the simulated discharge between the coupled model and OSU-WRF indicates only a slight difference in all three subbasins. This slight difference is related to the lag time involved in calculating the surface and ground water storages.
The comparison between the coupled-model and observed discharge shows that the coupled model generally underestimates total runoff during December 2005, with high model bias and Mean Absolute Error (MAE) in all three subbasins, especially Farsiat and Harmaleh. This is due to large differences between the monthly mean discharge of the coupled model and the observations. Also, subsurface runoff dominates during most of the studied period, and the coupled model generally underestimates it, which is related to the poor simulation of subsurface soil moisture in the lowest soil layer.
The evaluation of the simulated discharge of the land surface scheme in WRF coupled with the river routing shows negative model efficiency in all three subbasins, especially Farsiat and Harmaleh. This means that the model is not successful in simulating the discharge; it does not reproduce the observed streamflow even as well as simply using the mean of the observations. In the Soosan subbasin, the discharge simulation is better than in the other subbasins, with higher efficiency and lower model bias and mean absolute error.
On the other hand, the coupled model usually underestimates runoff even though it overestimates precipitation. This can be related to errors in the surface runoff parameterization and hence in the calculation of the maximum infiltration.
The correlation coefficients between the simulations and observations are higher for precipitation (0.66 and 0.88) than for runoff (0.50 and 0.54) in the Harmaleh and Soosan subbasins. In Farsiat, by contrast, the correlation coefficient for runoff is relatively high (0.6) while that for precipitation is very small (~0.02). A comparison of the normalized standard deviations of rainfall and runoff in all three subbasins shows that the modeled rainfall has higher variability than the observations, especially in Farsiat, whereas the modeled runoff has lower variability than the observations.
Errors in the WRF rainfall prediction, errors in the rainfall-runoff treatment of the OSU land surface scheme, and errors in the surface parameters used in running the model, especially the parameters of the probability density function of soil infiltration, all contribute to the error in the estimated river discharge.
The comparison between the observed and modeled discharge shows the error in the initial conditions used in this paper, especially initial conditions of surface water and ground water storages, could be another source of the error in the simulated discharge.Land surface parameterization schemes are one of the most important components in GCM and NWP models. These schemes calculate the exchanges of momentum, mass and energy between land surface and the atmosphere. Runoff is one of the important components in the water cycle of the land surface scheme whose parameterization is difficult because of complexity in the processes governing the runoff generation and its strong dependence on time and space. A coarse resolution land surface model cannot explicitly model the complexities of runoff generation within a catchment; instead, it aims to represent the major processes via subgrid scale parameterizations. A popular solution involves the use of probability density functions to represent subgrid variability.
In this paper, the OSU land surface scheme in version 3 of the Weather Research and Forecasting (WRF) model is studied in which runoff is parameterized as a probability density function of the maximum infiltration. Because the river is being studied it needs to couple the land surface scheme (OSU) with a river routing model, then Total Runoff Integrating Pathway (TRIP) is considered. In this paper, only the treatment of runoff in the model was considered, hence some of the errors in simulations could be the result of deficiencies in other parameterizations.
In this paper, the Karoon River is divided into three subbasins including Farsiat, Harmaleh and Soosan located in the south, west and east of the Karoon respectively by using ARCGIS and ARCHYDRO softwares.
The WRF model was run in a one-way method which needs two domains. The simulations are conducted for December 2005 with 5×5 km grid spacing over an internal domain having 106×115 grid points along altitude and longitude respectively and with 15×15 km grid spacing over a parent domain having 67×69 grid points along altitude and longitude respectively and centered on the Karoon at 50?E and 32?N. The initial and boundary conditions are derived from the GFS data. The modeled discharge (OSU-WRF) in the three subbasins was evaluated with the coupled TRIP river routing and OSU-WRF, using daily observation.
The daily study of the simulated discharge between the coupled model and OSU-WRF, indicates slight difference in all the three subbasins. This slight difference is related to the lag time involved in the calculating surface and ground water storages.
The comparison between the coupled modeled and observed discharge shows that the coupled model generally underestimates total runoff during December 2005 and there were high model bias and Mean Absolute Error (MAE) in all of the three subbasins, especially Farsiat and Harmaleh. This is due to great differences between the monthly mean discharge of the coupled modeled and that observed. Also, the subsurface runoff dominates in most of the studied time and the coupled modeled generally underestimates subsurface runoff. This is related to the poor simulation subsurface soil moisture in the lowest soil layer.
The evaluation of the discharge simulated by the WRF land surface scheme coupled with the river routing shows negative model efficiency in all three subbasins, especially Farsiat and Harmaleh. This means the model is unsuccessful in simulating discharge: it does not reproduce the observed streamflow even as well as simply using the mean of the observations. In the Soosan subbasin, the discharge simulation is better than in the other subbasins, with higher efficiency and lower model bias and mean absolute error.
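The skill measures used here can be computed as follows, assuming "model efficiency" means the Nash–Sutcliffe efficiency (the usual convention): NSE is negative exactly when the model performs worse than the observation mean, matching the interpretation above. The discharge series below are hypothetical.

```python
def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is perfect, 0 matches the
    observation mean, negative is worse than using the mean."""
    m = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - m) ** 2 for o in obs)
    return 1.0 - num / den

def bias(obs, sim):
    # Negative bias = systematic underestimation.
    return sum(s - o for o, s in zip(obs, sim)) / len(obs)

def mae(obs, sim):
    return sum(abs(s - o) for o, s in zip(obs, sim)) / len(obs)

obs = [120.0, 150.0, 90.0, 200.0]   # hypothetical daily discharge (m^3/s)
sim = [60.0, 80.0, 50.0, 100.0]     # a model that underestimates

b = bias(obs, sim)        # negative: underestimation
e = nse(obs, sim)         # negative: worse than the observation mean
a = mae(obs, sim)
```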
On the other hand, the coupled model usually underestimates runoff even though it overestimates precipitation. This can be related to errors in the surface runoff parameterization and hence in the calculation of the maximum infiltration.
The correlation coefficients between simulations and observations are higher for precipitation (0.66 and 0.88) than for runoff (0.50 and 0.54) in the Harmaleh and Soosan subbasins. In Farsiat, the correlation coefficient for runoff is relatively high (0.6), while that for precipitation is very small (~0.02). A comparison of the normalized standard deviations of rainfall and runoff in all three subbasins shows that the modeled rainfall has higher variability than observed, especially in Farsiat, whereas the modeled runoff has lower variability than observed.
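The two statistics compared here, Pearson correlation and the standard deviation of the model normalized by that of the observations, can be sketched as below (the rainfall series are hypothetical, chosen so the model is over-variable, as reported for rainfall in Farsiat):

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def normalized_std(obs, sim):
    """Std of the simulation divided by std of the observations:
    values above 1 mean the model is more variable than observed."""
    def std(v):
        m = sum(v) / len(v)
        return math.sqrt(sum((a - m) ** 2 for a in v) / len(v))
    return std(sim) / std(obs)

obs_rain = [0.0, 5.0, 12.0, 3.0, 8.0]    # hypothetical observed rainfall
sim_rain = [1.0, 9.0, 20.0, 2.0, 15.0]   # hypothetical over-variable model

r = pearson_r(obs_rain, sim_rain)
nstd = normalized_std(obs_rain, sim_rain)
```

Note that a model can correlate well with observations (high r) while still having the wrong variability (normalized std far from 1), which is why the two measures are reported separately.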
The error in the estimated river discharge is driven by the error of the WRF model in predicting rainfall, the error of the OSU land surface scheme in converting rainfall to runoff, and errors in the surface parameters used in the model run, especially the parameters of the probability density function of soil infiltration.
The comparison between the observed and modeled discharge shows that errors in the initial conditions used in this paper, especially the initial surface water and ground water storages, could be another source of error in the simulated discharge.

Journal of the Earth and Space Physics, Institute of Geophysics, University of Tehran, Vol. 37, No. 1, 21 April 2011 (ISSN 2538-371X)
Analysis of correlation between total ozone and upper air meteorological parameters in the Middle East
Zahra Shariepour

In this paper, the correlation between total ozone and upper-air meteorological parameters, such as geopotential heights and temperatures, is investigated for three Middle East stations, Tehran, Ankara and Bet Dagan, during the cold period (January, February and March) of 2005, 2007 and 2008. The upper-air data are taken from the University of Wyoming archive, and the ozone data come from satellite measurements.
Daily total ozone amounts at the selected stations are close to one another, and the period average increases with latitude; the largest period average belongs to the Ankara station. During the survey period, daily total ozone ranged between 249 and 447 DU at Ankara, between 253 and 420 DU at Tehran, and between 252 and 429 DU at Bet Dagan.
The correlation coefficients between total ozone and the upper-air meteorological parameters show that, at all examined stations, there is a significant negative correlation between total ozone and the heights of the 500, 300, 200 and 100 hPa geopotential surfaces.
Comparing the correlation coefficients between ozone and the geopotential heights at the different stations shows that, overall, the correlation at the 300 hPa level is stronger than at the 100 and 500 hPa levels. In other words, the correlation between ozone and meteorological parameters in the upper troposphere is stronger than in the middle troposphere and the stratosphere, and the stratospheric correlation (100 hPa) increases with decreasing station latitude. The greatest correlation coefficient is found at the 200 hPa level.
Investigation of the relation between changes of the different geopotential surfaces in the troposphere and stratosphere shows that the pressure changes of the troposphere and stratosphere are coherent and consistent with each other.
There is also good agreement between the changes at 200 hPa and 500 hPa during the cold part of the year, and the range of geopotential height changes in the stratosphere is smaller than in the upper troposphere.
There is a positive correlation between total ozone and stratospheric temperature, and a negative correlation between total ozone and the temperature of the middle and upper troposphere.
The main reason for these correlations between total ozone and geopotential heights and temperature may be the dynamics of the atmosphere and rising and sinking air currents. In other words, in the lower stratosphere, where ozone increases with height, a decrease in geopotential heights produces ozone convergence at high altitudes and increases the total ozone column; conversely, an increase in geopotential heights produces ozone divergence at high altitudes and decreases the total ozone column.
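Station sensitivities of the kind reported in this study (DU of ozone change per 10 m of 200 hPa height change) are least-squares regression slopes. A minimal sketch with synthetic data (the height and ozone values below are invented for illustration, not the Tehran observations):

```python
def ols_slope(x, y):
    """Ordinary least-squares slope of y on x (here: total ozone in DU
    regressed on 200 hPa geopotential height in metres)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

# Synthetic series built with a slope of -0.17 DU per metre,
# i.e. -1.7 DU per 10 m of height increase.
heights = [11800.0, 11820.0, 11850.0, 11900.0, 11940.0]
ozone = [330.0 - 0.17 * (h - 11800.0) for h in heights]

slope_per_10m = ols_slope(heights, ozone) * 10.0   # DU per 10 m
```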
At the Tehran station, an increase of 10 m in the height of the 200 hPa geopotential surface corresponds to a total ozone decrease of 1.7 DU.
Journal of the Earth and Space Physics, Institute of Geophysics, University of Tehran, Vol. 37, No. 1, 21 April 2011 (ISSN 2538-371X)
Analysis of 2005 Dahuieh (Zarand) aftershock sequences in Kerman province, southeast Iran
Majid Nemati, Mohammad Reza Gheitanchi

In this study, the locally recorded aftershock sequence of the 2005 Dahuieh (Zarand) earthquake is analyzed. From the distribution of aftershocks and the source extent, a W-E trending, near-vertical fault about 15-20 km long can be estimated. The rupture causing the powerful Dahuieh earthquake apparently initiated in the relocated epicentral area and propagated unilaterally towards the west. The cross-section of aftershocks perpendicular to the fault suggests a depth range of about 20 km, indicating that the seismic activity took place within the upper crust and that the seismogenic layer in this region is not thicker than 20 km. The focal mechanism of the main shock and the right-lateral motion of the Kuhbanan fault suggest that the earthquake fault must be reverse, with the northern block acting as the hanging wall during the source process of the main shock. The epicentral distribution of aftershocks showed a gap in activity that was interpreted as the relocated location of the main shock. Our results are in agreement with waveform modeling. The temporal decay of the aftershock frequency followed the Kisslinger stretched-exponential formula.
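The Kisslinger stretched-exponential model describes the cumulative aftershock count as N(t) = N_inf * (1 - exp(-(t/t0)^q)), with a stretching exponent q < 1. The sketch below evaluates that form; the parameter values are illustrative assumptions, not the values fitted for the Dahuieh sequence.

```python
import math

def kisslinger_cumulative(t, n_inf, t0, q):
    """Stretched-exponential aftershock model (Kisslinger-type):
    cumulative count N(t) = n_inf * (1 - exp(-(t/t0)**q))."""
    return n_inf * (1.0 - math.exp(-(t / t0) ** q))

def kisslinger_rate(t, n_inf, t0, q, dt=1e-4):
    """Occurrence rate dN/dt, here by central difference for clarity."""
    return (kisslinger_cumulative(t + dt, n_inf, t0, q)
            - kisslinger_cumulative(t - dt, n_inf, t0, q)) / (2.0 * dt)

# Illustrative parameters: ~500 aftershocks in total, characteristic
# time 10 days, q = 0.7 giving the slow "stretched" decay.
r1 = kisslinger_rate(1.0, n_inf=500.0, t0=10.0, q=0.7)    # rate on day 1
r30 = kisslinger_rate(30.0, n_inf=500.0, t0=10.0, q=0.7)  # rate on day 30
```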
Journal of the Earth and Space Physics, Institute of Geophysics, University of Tehran, Vol. 37, No. 1, 21 April 2011 (ISSN 2538-371X)
Analysis and Prediction of GNSS Estimated Total Electron Contents
Jamal Asgari, Ali Reza Amiri-Simkooei

Least-squares harmonic estimation is applied to hourly time series of Total Electron Content (TEC) derived from ionospheric models based on seven years of GPS observations processed with the Bernese software. The frequencies of the dominant spectral components are estimated. We observe significant periodic patterns with a period of 24 h and its fractions 24 h/n, n = 2, …, 11, which form the well-known Fourier-series decomposition of the diurnal periodicity of ionospheric variations. The principal, daily component is due to the day-night variation of TEC values.
The semidiurnal and tri-diurnal components can be explained by substorm signatures in both the auroral electrojet (in the E layer) and ring-current variations (related to the magnetosphere at low latitudes), as well as by tidal effects. The spectrum also shows the well-known 27-day period associated with solar rotation. We observe annual, semi-annual and tri-annual signals in the series. The detected signals are then used to perform an ionospheric prediction. The results indicate that a substantial part (in the absolute sense) of the TEC values can be predicted using these base functions, while an unmodeled part remains as disturbance noise, which can exceed 20 TEC units for a disturbed ionosphere. In comparison with the standard Klobuchar model, the model presented in this contribution significantly improves single-frequency GPS positioning accuracy.
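With the periods fixed (24 h and its fractions), least-squares harmonic estimation reduces to ordinary least squares on sine/cosine base functions. A minimal sketch for an hourly series with 24 h and 12 h components; the series is synthetic, not Bernese output, and only two of the periods are used for brevity.

```python
import math

def fit_harmonics(t_hours, y, periods):
    """Fit y(t) = a0 + sum_k (a_k cos w_k t + b_k sin w_k t) via the
    normal equations, solved by Gaussian elimination (small systems)."""
    # Design matrix columns: a constant plus a cos/sin pair per period.
    cols = [[1.0] * len(t_hours)]
    for p in periods:
        w = 2.0 * math.pi / p
        cols.append([math.cos(w * t) for t in t_hours])
        cols.append([math.sin(w * t) for t in t_hours])
    m = len(cols)
    # Normal equations A x = b with A = C^T C, b = C^T y.
    a = [[sum(ci * cj for ci, cj in zip(cols[i], cols[j])) for j in range(m)]
         for i in range(m)]
    b = [sum(c * v for c, v in zip(cols[i], y)) for i in range(m)]
    for i in range(m):                       # forward elimination
        piv = a[i][i]
        for j in range(i + 1, m):
            f = a[j][i] / piv
            a[j] = [aj - f * ai for aj, ai in zip(a[j], a[i])]
            b[j] -= f * b[i]
    x = [0.0] * m                            # back substitution
    for i in reversed(range(m)):
        x[i] = (b[i] - sum(a[i][j] * x[j] for j in range(i + 1, m))) / a[i][i]
    return x  # [a0, a_24, b_24, a_12, b_12]

# Synthetic TEC-like series: 20 TECU mean, diurnal and semidiurnal terms.
t = list(range(0, 24 * 7))                   # one week of hourly samples
y = [20.0 + 5.0 * math.cos(2 * math.pi * ti / 24.0)
          + 2.0 * math.sin(2 * math.pi * ti / 12.0) for ti in t]

coeffs = fit_harmonics(t, y, periods=[24.0, 12.0])
```

Once estimated, the coefficients define a deterministic base function that can be extrapolated forward in time, which is the prediction step described in the abstract; the residual is the "disturbance noise" part that the harmonic model cannot capture.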