Journal of the Earth and Space Physics (ISSN 2538-371X), Institute of Geophysics, University of Tehran, Vol. 36, No. 3, 2010

1D & 2D Interpretation of Magnetotelluric Data for Detecting Geological Subsurface Structures along an E-W Profile in Arak
Masoud Ansari, Behroz Oskooi, Martin Unsworth

Because EM signals are reflected and refracted at both horizontal and vertical interfaces separating media of different electrical parameters, geoelectromagnetic methods have been developed and employed to recognize geological features, particularly fault zones, in many regions. Combining high lateral resolution with great depth of penetration, the magnetotelluric (MT) method is one of the most effective electromagnetic techniques for imaging subsurface structures electrically.
In 2006, wide-frequency-range magnetotelluric measurements were carried out in the eastern part of the city of Arak, Iran, to investigate the crustal electrical conductivity of the region, with emphasis on locating the fault zones. The electric and magnetic field components were acquired at 15 stations along a profile across the geological trend. Robust single-site processing was followed by one-dimensional and two-dimensional inversion and modeling. The inversion results revealed electrical conductivity structures that correlate with geological features. As a significant result, the true locations of two major faults, the Talkhab and Tabarteh Faults, and a conductive block between them were recognized in the Arak area.
Introduction: The study area is a part of the Arak watershed, straddling the Central Iran and Sanandaj-Sirjan Zones. Two parallel faults, named Talkhab and Tabarteh, pass through the region and divide it into three blocks. The seismicity of the area is controlled by these two parallel faults, especially the Talkhab Fault, which is presumed to be the source of seismic activity in the region (Mirzaei and Ghadimi, 2006). Anomalous crustal conductors are occasionally associated with seismic activity. Fluid is an important factor in fault zones, and many active faults are characterized by a great volume of groundwater (Gudmundsson, 2000).
The magnetotelluric method is a passive electromagnetic technique that uses the natural, time-varying electric and magnetic field components, measured at right angles at the surface of the earth, to make inferences about the earth's electrical structure, which in turn can be related to the geology, tectonics, and subsurface conditions. Measurements of the horizontal components of the natural electromagnetic field are used to construct the full complex impedance tensor, Z, as a function of frequency. From the effective (determinant) impedance, determinant apparent resistivities and phases are computed and used for the inversion. The apparent resistivities for both the TE and TM modes are also computed and used for the 2D inversion.
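The determinant workflow described above can be sketched numerically. The snippet below is a minimal illustration, not the authors' code; the rotation and sign conventions of the impedance tensor are assumed, and amplitude/phase error handling is omitted.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # magnetic permeability of free space (H/m)

def determinant_resistivity_phase(Z, freq):
    """Determinant apparent resistivity (Ohm.m) and phase (deg) from a
    2x2 complex MT impedance tensor Z at frequency freq (Hz)."""
    omega = 2.0 * np.pi * freq
    z_det = np.sqrt(np.linalg.det(Z))            # effective (determinant) impedance
    rho_a = np.abs(z_det) ** 2 / (omega * MU0)   # apparent resistivity
    phase = np.degrees(np.angle(z_det))          # impedance phase
    return rho_a, phase

# Sanity check on a 100 Ohm.m uniform half-space, where |Z| = sqrt(omega*mu0*rho)
# and the impedance phase is 45 degrees:
rho_true, freq = 100.0, 1.0
zmag = np.sqrt(2 * np.pi * freq * MU0 * rho_true)
Z = zmag * np.exp(1j * np.pi / 4) * np.array([[0.0, 1.0], [-1.0, 0.0]])
rho_a, phase = determinant_resistivity_phase(Z, freq)
```

For a half-space the formula recovers the true resistivity exactly, which makes it a convenient self-test before applying it to field impedances.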
Data Processing, Inversion and Conclusions: The MT data were processed using the code of Smirnov (2003), aiming at robust single-site estimates of the electromagnetic transfer functions. As the study area is populated and close to urban noise sources, the recorded data are of poor quality, as reflected in the low coherency between the electric and magnetic channels. We performed 1D inversion of the determinant data for all sites using the code of Pedersen (2004). Since the quality of the determinant data was acceptable, we also performed 2D inversion of the determinant data using the code of Siripunvaraporn and Egbert (2000). In addition, a separate 2D inversion of the TE- and TM-mode data was performed using the code of Rodi and Mackie (2001).
The 2D models clearly image two conductive zones, two resistive blocks, and a large conductive zone hidden beneath the Quaternary alluvium along the profile. Combined with the geological information on the presence of the Talkhab and Tabarteh faults, these conductivity features can be attributed to the faults. In addition, a probable hidden fault is also recognizable.
https://jesphys.ut.ac.ir/article_21975_101f63783eb477e4d83777a2d37a9b20.pdf

A New Tidal Model for the Persian Gulf and Oman Sea Based on Satellite Altimetry and Coastal Tidal Gauge Observations
Ali Reza Ardalan, Mohammad Javad Toorian

Based on 11 years of TOPEX/Poseidon satellite altimetry and coastal tide-gauge sea-level observations, four tidal constituents, namely O1, K1, M2, and S2, are modeled for the Persian Gulf and Oman Sea using a time-wise approach, as detailed below. With cycle 100 selected as the reference, 772 points on 12 paths along the track of the altimetry satellite footprints over the Persian Gulf and Oman Sea were chosen as the centers of 772 circular cells, which were used to collect the repeated MGDR data from cycles 8 through 345 falling within these cells, each with a radius of 0.053°. In this way, 772 time series of sea-level observations located at the centers of the circular cells were developed. These data were further corrected for all effects whose correction parameters are given in the MGDR data files, except the tidal correction, which was retained as the source of information for our tidal modeling.
The gaps in the 772 satellite-derived time series were filled via an inverse solution of the autocorrelation function applied to the existing sea-level variation data. In addition, the hourly time series of sea-level observations at 17 tide gauges along the Iranian coastline of the Persian Gulf and Oman Sea were used both to check the validity of the tidal models developed from the altimetry data at the 772 points over the aforementioned sea areas and to increase the accuracy of the satellite-derived tidal models in shallow waters. The equally spaced 772 satellite-altimetry-derived time series and the 17 coastal tide-gauge time series were subjected to Fourier analysis to obtain the major tidal constituents, which were then used as initial values in a least-squares solution yielding the adjusted tidal frequencies, phase angles, and amplitudes. The result of this step for the satellite-derived time series was a model of all tidal constituents with periods greater than 20 days, as the repetition period of the TOPEX/Poseidon altimetry observations is 9.915 days, except at the crossover points, where the repetition period is halved. Next, via forward modeling, the effects of the modeled tidal constituents were removed from the original 772 time series, leaving residual time series that were re-ordered according to their observation hour, without considering the observation date, in order to develop 2-hourly residual time series from which the other short-period tidal constituents were derived.
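The harmonic-analysis step described above, fitting constituent amplitudes and phases by least squares, can be illustrated with a simplified sketch. This is not the paper's implementation: the constituent set, window lengths, gap filling, and nodal corrections of the actual study are omitted, and a single synthetic M2 record is used.

```python
import numpy as np

def fit_tidal_constituents(t_hours, h, speeds_deg_per_hr):
    """Least-squares fit of amplitude and phase for given tidal speeds.

    t_hours : observation times (hours); h : sea level (m)
    speeds_deg_per_hr : angular speeds, e.g. M2 = 28.984 deg/hr
    Returns (amplitudes, phase lags in degrees), one pair per constituent."""
    cols = [np.ones_like(t_hours)]           # mean sea level term
    for w in speeds_deg_per_hr:
        wt = np.radians(w) * t_hours
        cols += [np.cos(wt), np.sin(wt)]     # in-phase and quadrature terms
    A = np.column_stack(cols)
    x, *_ = np.linalg.lstsq(A, h, rcond=None)
    amps, phases = [], []
    for k in range(len(speeds_deg_per_hr)):
        a, b = x[1 + 2 * k], x[2 + 2 * k]
        amps.append(np.hypot(a, b))                  # amplitude
        phases.append(np.degrees(np.arctan2(b, a)))  # phase lag
    return np.array(amps), np.array(phases)

# Synthetic 30-day hourly record containing only M2 (amplitude 0.5 m, lag 40 deg):
t = np.arange(0.0, 24.0 * 30, 1.0)
h = 1.2 + 0.5 * np.cos(np.radians(28.984) * t - np.radians(40.0))
amp, ph = fit_tidal_constituents(t, h, [28.984])
```

With noise-free data and a known speed, the fit recovers the amplitude and phase exactly; in practice the fitted frequencies themselves are also adjusted, as described above.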
The results of the numerical computations and the comparison of the satellite-derived models with those obtained from tide-gauge observations confirm the success of the method, and a new tidal model for the four constituents O1, K1, M2, and S2 is thus developed for the test area, i.e., the Persian Gulf and Oman Sea.
https://jesphys.ut.ac.ir/article_21976_2b93a0ff32688ffc5f64dd34c4622e2e.pdf

Prestack Time Migration of 2D Seismic Records by the Kirchhoff Method
Pouya Hadian, Abdolrahim Javaherian, Behzad Nazari

In exploration seismology, migration refers to a multi-channel processing step that attempts to spatially reposition events and improve focusing. Before migration, seismic data are usually displayed with traces plotted at the surface locations of the receivers and with a vertical time axis. This means that dipping reflections are systematically mispositioned in the lateral coordinate, and the vertical time axis needs a transformation to depth. Also problematic is the unfocused nature of seismic data before migration. Migration moves dipping reflections to their true subsurface positions and collapses diffractions, thus increasing spatial resolution and yielding a seismic image of the subsurface.
Time migration, which produces a migrated time section, is appropriate as long as lateral velocity variations are mild to moderate. Dipping events on a stacked section call for time migration. Conflicting dips with different stacking velocities are one case in which a conventional stacked section differs from a zero-offset section. Thus, poststack migration, which assumes that the stacked section is equivalent to a zero-offset section, cannot handle the case of conflicting dips. Instead, one needs to perform prestack time migration.
Kirchhoff migration methods are based on the diffraction summation technique, which sums the seismic amplitudes along a diffraction hyperbola whose curvature is governed by the medium velocity and maps the result to the apex of the hyperbola. The Kirchhoff summation technique applies amplitude and phase corrections to the data before summation. These corrections make the summation consistent with the wave equation in that they account for spherical spreading, the obliquity factor (angle dependency of amplitudes), and the phase shift inherent in Huygens' secondary sources. Since Kirchhoff migration is based on summing amplitudes along the hyperbolic trajectory, it can be adapted to any domain as long as the diffraction curve is known. The velocity used in the diffraction curve equation is the rms velocity for prestack migration. Poststack migration uses zero-offset data, whereas prestack time migration applies to unstacked data, i.e., shot-record, common-offset, and equivalent-offset data. Kirchhoff prestack time migration sums through the input space along hyperbolic paths to compute each point in the output space; for variable velocity, the hyperbola is replaced by a more general shape, and amplitudes change under migration. The velocity model is a matrix of interval velocities at each sampling point, generated from reflector velocities. An important factor in Kirchhoff migration is the migration aperture: reducing the aperture reduces the maximum dip that can be migrated, and small apertures eliminate steeply dipping events from the migrated section. The effect of the migration aperture is illustrated using different apertures.
Software written in MATLAB migrates common shot records. The traveltime from the source to a scatterpoint (i.e., the image point) is approximated by a Dix equation using the rms velocity from the model at the lateral position halfway between the source and the receiver and at the vertical traveltime of the scatterpoint. Similarly, from the scatterpoint to a receiver, a Dix equation using the rms velocity halfway between the scatterpoint and the receiver is used. The source and all receivers are assumed to lie on the same horizontal plane.
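A bare-bones version of this shot-record summation can be sketched as follows. This is an illustrative reimplementation, not the authors' MATLAB code: the obliquity, spreading, and phase-shift weights discussed above are omitted, a constant rms velocity is assumed in the test, and nearest-sample rounding replaces interpolation.

```python
import numpy as np

def kirchhoff_shot_migration(data, t, src_x, rec_x, out_x, out_t, v_rms):
    """Kirchhoff prestack time migration of one shot record (sketch).

    data : (n_t, n_rec) shot gather; t : time axis (s)
    src_x : source position (m); rec_x : (n_rec,) receiver positions (m)
    out_x, out_t : image-point axes; v_rms(t0) : rms velocity at zero-offset time
    Each image point sums input amplitudes along its double-square-root
    (source leg + receiver leg) diffraction traveltime surface."""
    dt = t[1] - t[0]
    image = np.zeros((len(out_t), len(out_x)))
    for ix, x0 in enumerate(out_x):
        for it0, t0 in enumerate(out_t):
            v = v_rms(t0)
            t_src = np.sqrt((t0 / 2.0) ** 2 + ((x0 - src_x) / v) ** 2)
            t_rec = np.sqrt((t0 / 2.0) ** 2 + ((x0 - rec_x) / v) ** 2)
            idx = np.rint((t_src + t_rec) / dt).astype(int)
            ok = idx < len(t)
            image[it0, ix] = data[idx[ok], np.nonzero(ok)[0]].sum()
    return image

# Point diffractor at x = 500 m, t0 = 0.4 s in a 2000 m/s medium: unit spikes
# are placed on its diffraction surface, and migration should focus them there.
t = np.arange(0.0, 2.0, 0.004)
rec_x = np.arange(0.0, 1001.0, 50.0)
data = np.zeros((len(t), len(rec_x)))
v, xs, t0s = 2000.0, 500.0, 0.4
ts = np.sqrt((t0s / 2) ** 2 + ((xs - 0.0) / v) ** 2)
tr = np.sqrt((t0s / 2) ** 2 + ((xs - rec_x) / v) ** 2)
data[np.rint((ts + tr) / 0.004).astype(int), np.arange(len(rec_x))] = 1.0
image = kirchhoff_shot_migration(data, t, 0.0, rec_x,
                                 np.arange(0.0, 1001.0, 100.0),
                                 np.arange(0.2, 0.61, 0.05), lambda t0: v)
```

The focused image should peak at the output cell corresponding to (500 m, 0.4 s), where all receiver contributions add coherently.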
In this paper, two models, a two-layered trapezoid model and a three-layered model, are synthesized and input to the migration algorithm. The traveltimes are calculated via ray tracing with respect to a shot at the center of the model. The shot records were migrated with the interval velocity model. Since prestack migration is very sensitive to the velocity model, an rms velocity model was used, which improved the migrated section. The critical parameter of Kirchhoff migration is the migration aperture width, whose effect is more evident at steep dips and great depths.
https://jesphys.ut.ac.ir/article_21977_85f95a5a464e4ca8d1a63b1d085f70df.pdf

NMO Correction with the Nonstretch NMO Method
Baban Mostafa Yousef, Abdolrahim Javaherian, Hessam Shini Kimassi, Abolfazl Moslemi

The application of NMO (normal moveout) correction has been recognized as an effective method of generating quasi-zero-offset traces in traditional common-midpoint processing. The main artifact of the NMO method is the NMO-stretch effect: conventional application of normal-moveout correction to a common-midpoint (CMP) reflection generates a stretch that increases with offset and decreases with zero-offset time. Shatilo and Aminzadeh (2000) introduced a technique that applies a constant normal moveout (CNMO) over a finite time interval of a seismic trace. Perroud and Tygel (2004) introduced an implementation, called nonstretch NMO, that automatically avoids the undesirable stretch effects present in conventional NMO. They applied their method to shallow seismic data, including high-resolution (HR) seismic data and ground-penetrating radar (GPR) measurements.
In this paper, nonstretch NMO (Perroud and Tygel, 2004) is applied to seismic reflection data. NMO correction is usually based on the hyperbolic equation t(x)^2 = t0^2 + x^2 / v_NMO^2, where t is the traveltime at offset x between the source and the receiver, t0 is the two-way zero-offset traveltime, and v_NMO is the NMO velocity, which estimates the root-mean-square (RMS) velocity in the case of a horizontally stratified earth. This equation represents a hyperbola whose asymptote passes through the origin with slope 1/v_NMO. Ideally, the entire pulse width should be shifted to a horizontal line without any distortion. Traditional NMO correction moves the samples in the vicinity of a traveltime onto the zero-offset time; substituting these quantities into the hyperbolic equation and comparing the resulting expressions yields the stretch ratio. To avoid stretching, the traveltime hyperbolae are made parallel: in conventional NMO the traveltime curves converge toward each other, whereas in nonstretch NMO they remain almost parallel. Imposing this condition yields the adjusted-velocity relation, from which it can be seen that, for the set of recorded events on a given trace, stronger effects on the stacking velocity are observed at shorter zero-offset times. The adjusted velocity always decreases as the time shift increases. Therefore, even keeping the NMO velocity constant is not sufficient to avoid stretching, as is done in the constant-velocity-stack (CVS) approach. Moreover, with the conventional increase of NMO velocity with time that results from interpolation, the time-velocity distribution goes the wrong way and further increases the stretching effect of the NMO. With the adjusted-velocity equation, the original time-velocity point picked from conventional velocity analysis is replaced by the adjusted velocity over the corresponding curve segment. The time shift is obtained as the inverse of the bandwidth of the propagating signal.
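Conventional NMO correction along the hyperbola t(x)^2 = t0^2 + x^2/v_NMO^2 can be sketched as below; the stretch then appears as the event wavelet broadening with offset. This is a generic illustration with an assumed constant velocity, not the nonstretch algorithm itself.

```python
import numpy as np

def nmo_correct(trace, t, offset, v_nmo):
    """Conventional NMO correction of one trace by linear interpolation.

    Each output sample at zero-offset time t0 is read from the input trace
    at the hyperbolic traveltime t(x) = sqrt(t0^2 + (offset/v_nmo)^2)."""
    t_hyp = np.sqrt(t ** 2 + (offset / v_nmo) ** 2)
    return np.interp(t_hyp, t, trace, left=0.0, right=0.0)

# A reflection with t0 = 0.8 s recorded at 1500 m offset in a 2500 m/s medium
# arrives at sqrt(0.8^2 + (1500/2500)^2) = 1.0 s on the uncorrected trace:
dt = 0.002
t = np.arange(0.0, 2.0, dt)
t0, offset, v = 0.8, 1500.0, 2500.0
t_event = np.sqrt(t0 ** 2 + (offset / v) ** 2)
trace = np.exp(-0.5 * ((t - t_event) / 0.01) ** 2)  # Gaussian wavelet at 1.0 s
corrected = nmo_correct(trace, t, offset, v)
t_peak = t[np.argmax(corrected)]                    # should land near t0 = 0.8 s
```

Because the mapping t0 -> t(x) dilates more at small t0 and large offset, the flattened wavelet is stretched there, which is exactly the artifact that nonstretch NMO is designed to suppress.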
In this method, the time-velocity distribution is trace-dependent: for each sample in a window around the picked event, the modified velocity is calculated for all traces. Because the velocity decreases with time within this window, the interpolated NMO velocity between events increases faster, and thus the NMO stretch effect is increased between events. A problem arises when events cross each other; it can be solved by processing the reflection events one at a time. First, the traveltime corresponding to each reflection is obtained by traditional velocity analysis. Then, for each event, all samples above the corresponding hyperbola and below those of the next event are muted, and nonstretch NMO is applied. The process is completed by summing all the events.
The nonstretch NMO technique has been tested on synthetic and real data. The synthetic data include CMP gathers of a flat layer, two layers with crossing reflections, and four flat layers with crossing reflections containing multiples and noise. The real CMP data belong to one of the 2-D seismic reflection surveys in Iran. The method improves the resolution of the CMP stack: nonstretch NMO correction reduces the stretch effects of conventional NMO, which results in higher spectral frequencies and smaller spectral distortion of shallow, far-offset reflected events. Following nonstretch NMO correction, less muting may be needed compared with conventional NMO.
https://jesphys.ut.ac.ir/article_21978_c41b3c78d092bb4bfbd1d704fe78f604.pdf

Seismic Attenuation Coefficient Estimation Using the Smoothed Pseudo Wigner-Ville Distribution
Amin Roshandel Kahoo, Hamid Reza SiahKoohi

The elastic energy of a seismic wave is lost during propagation through the earth. Various factors affect seismic energy. Some of them are frequency-independent and are recovered in the processing steps, but other factors, such as the intrinsic absorption of the medium, are frequency-dependent and cannot be recovered by processing methods. The energy loss caused by these factors is called attenuation. There are various methods to study the attenuation of seismic energy.
Because attenuation is frequency dependent, it behaves as a non-stationary quantity. The attenuation coefficient is usually studied in the frequency domain using power-spectrum and statistical methods. Since the Fourier transform does not capture the temporal variation of the frequency content of seismic data, and because attenuation depends on frequency, we used time-frequency tools in this study. Time-frequency transforms such as the short-time Fourier transform, the wavelet transform and the S-transform are common tools in the processing and interpretation of seismic data. In this study, we used the Wigner-Ville distribution as a time-frequency tool to study seismic wave attenuation.
Wigner-Ville Distribution: The Wigner-Ville distribution (WVD) of a signal x(t) is defined as (Boashash, 2003):

W_x(t, f) = ∫ x(t + τ/2) x*(t − τ/2) e^(−j2πfτ) dτ    (1)
The importance of the WVD is due to its marginal properties and its high resolution along both the time and frequency axes, but the presence of cross-terms in the WVD has limited its application in engineering and science. The pseudo WVD (PWVD) and the smoothed pseudo WVD (SPWVD) are two well-known variants with weaker cross-terms than the WVD. The PWVD and SPWVD are defined as (Poularikas, 2000):

PW_x(t, f) = ∫ h(τ) x(t + τ/2) x*(t − τ/2) e^(−j2πfτ) dτ    (2)

SPW_x(t, f) = ∫∫ g(u − t) h(τ) x(u + τ/2) x*(u − τ/2) e^(−j2πfτ) du dτ    (3)
where h(τ) is the smoothing window acting in the frequency direction of the distribution and g(t) is the smoothing window acting in the time direction. These two methods reduce the cross-terms but broaden the auto-terms. In our study, the noiselessness of the distribution matters more than its resolution; therefore, we use the SPWVD.
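To make equation (2) concrete, a discrete pseudo-WVD can be sketched as follows. This is an illustrative Python/NumPy sketch: it uses the common x(t+τ)x*(t−τ) discretization, which doubles the frequency axis, and a Hamming lag window standing in for h(τ); neither choice is prescribed by the paper:

```python
import numpy as np

def pseudo_wvd(x, n_freq=None):
    """Discrete pseudo Wigner-Ville distribution of a signal.

    Returns an (n_freq, len(x)) array: rows are frequency bins,
    columns are time samples. The lag window h(tau) smooths the
    distribution in the frequency direction, as in equation (2).
    """
    x = np.asarray(x, dtype=complex)
    n = len(x)
    n_freq = n if n_freq is None else n_freq
    half = n_freq // 2
    h = np.hamming(2 * half + 1)                 # lag window h(tau)
    W = np.zeros((n_freq, n))
    for t in range(n):
        m = min(t, n - 1 - t, half)              # largest usable lag
        tau = np.arange(-m, m + 1)
        # windowed instantaneous autocorrelation x(t+tau) x*(t-tau)
        r = x[t + tau] * np.conj(x[t - tau]) * h[half + tau]
        kernel = np.zeros(n_freq, dtype=complex)
        kernel[tau % n_freq] = r
        W[:, t] = np.fft.fft(kernel).real        # transform over lag
    return W
```

A complex exponential at bin k then peaks at bin 2k of the distribution; smoothing additionally in time with a window g(t), as in equation (3), would turn this sketch into an SPWVD.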
Attenuation Estimation: The attenuation estimation based on the SPWVD rests on the results of Zhang and Ulrych (2002), who show that the effect of attenuation on the seismic wavelet can be summarized as:
1- The peak frequency shifts toward lower frequencies.
2- Frequencies above the peak frequency are affected by attenuation more strongly than frequencies below the peak frequency.
The slope of the line fitted to the WVD at each time, over the frequency range from the peak frequency to half of the Nyquist frequency, can be used as an attenuation coefficient based on the relation below (Yandong and Xiaodong, 2007):

(4)

where f_c is the centroid frequency, defined as:

f_c(t) = ∫ f W(t, f) df / ∫ W(t, f) df    (5)
Because the centroid frequency is less sensitive to noise than the peak frequency, we used the centroid frequency instead of the peak frequency.
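Equation (5) is straightforward to evaluate on any discretized time-frequency distribution; a minimal sketch (Python/NumPy, function and argument names assumed for illustration):

```python
import numpy as np

def centroid_frequency(tf_power, freqs):
    """Centroid frequency per time sample, as in equation (5):
    f_c(t) = sum_f f * W(t, f) / sum_f W(t, f).

    tf_power : (n_freq, n_times) non-negative distribution W
    freqs    : (n_freq,) frequency axis
    """
    num = (freqs[:, None] * tf_power).sum(axis=0)
    den = tf_power.sum(axis=0)
    return num / den
```

Being an average over the whole spectrum, the centroid moves smoothly as attenuation drains the high frequencies, which is what makes it a more noise-robust reference point than the peak frequency.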
We investigated the efficiency of the method on both synthetic and real seismic data. Comparison of the results with well logs and with inversion of the seismic data showed that this method can properly estimate both the attenuation coefficient and the location of the related anomaly.
https://jesphys.ut.ac.ir/article_21979_7348cfe3495f68535c5e09e7ca1da0ee.pdf

Edge detection of potential field anomalies using vertical derivative of analytic signal
Kamal Alamdar, Abdolhamid Ansari
Journal of the Earth and Space Physics (ISSN 2538-371X), Institute of Geophysics, University of Tehran

The analytic signal for magnetic anomalies was initially defined as a “complex field deriving from a complex potential” (Nabighian, 1972). This function can be computed easily in the frequency domain: its real part is the horizontal derivative of the field and its imaginary part is the vertical derivative. Analytic-signal processing and interpretation require few initial assumptions regarding the source-body geometry and magnetization, and are particularly efficient at an early stage of the interpretation even if constraints are not available.
For 2-D structures (Nabighian, 1972), the method assumes that the causative bodies have a polygonal cross-section with uniform magnetization. Such structures can also be considered as the superimposition of a finite number of magnetic steps. Narrow dikes and thin sheets can also be taken into account using a lower order of derivation; for example, the field itself instead of the horizontal derivatives. Nabighian (1972) demonstrates that the analytic signal has simple poles at each corner of the structures. The amplitude of the analytic signal is a bell-shaped symmetric function maximizing exactly over the top of each contact, with the width of the amplitude curve being related directly to the depth of the contact. This is also true for any of the derivatives of the signal (Nabighian, 1974); these properties can be used to locate the magnetic contacts and to estimate their depths.
Extension of the 2-D analytic signal to three dimensions will allow more general interpretation procedures to be developed, the two-dimensionality assumption being no longer required. The relationship between the horizontal and vertical derivatives for the 3-D case was first derived by Nabighian (1984).
As with the 2-D case, the vertical and horizontal derivatives are Hilbert transforms of each other and so the concept of the analytic signal in 2-D can be extended to three dimensions. The analytic signal amplitude can then be defined as “the square root of the squared sum of the vertical and the two horizontal first derivatives of the magnetic field” (Roest et al., 1992).
Because of interference effects, the use of the simple analytic signal in the 3-D case seems insufficient to detect geologic boundaries. Since the existing interference is usually inevitable, improving resolution becomes a requirement. In the 2-D case, Nabighian (1974) suggested using the following bell-shaped function to enhance the analytic signal from shallow sources:
|A_n(h)| = n! α / (h² + d²)^((n+1)/2)    (1)

where G_h and G_z are the horizontal and vertical gradients of the potential-field anomaly, from which the analytic signal is formed; h is the distance along the horizontal axis perpendicular to the strike of the 2-D structure; n is a positive integer; d is the depth to the top surface of the source, the lower surface being at infinity; α is the amplitude factor, equal to 2kFc sin δ when the step model of magnetic anomaly is applied; k is the susceptibility contrast of the step model; F is the earth's magnetic field magnitude; δ is the dip angle of the step model; c = 1 − cos²i sin²A for total magnetic field anomalies; i is the inclination of the earth's magnetic field; and A is the angle between magnetic north and the positive h-axis. All the above parameters are presented in Fig. 1.
In the 3-D case, the simple analytic signal is defined in Nabighian (1984) as:

A(x, y) = (∂G/∂x) x̂ + (∂G/∂y) ŷ + i (∂G/∂z) ẑ    (2)

and its amplitude as:

|A(x, y)| = [(∂G/∂x)² + (∂G/∂y)² + (∂G/∂z)²]^(1/2)    (3)
To extend equation (1) into the 3-D case, we define the nth-order enhanced analytic signal as:

A_n(x, y) = (∂/∂x)(∂ⁿG/∂zⁿ) x̂ + (∂/∂y)(∂ⁿG/∂zⁿ) ŷ + i (∂/∂z)(∂ⁿG/∂zⁿ) ẑ    (4)

and its amplitude as:

|A_n(x, y)| = [((∂/∂x)(∂ⁿG/∂zⁿ))² + ((∂/∂y)(∂ⁿG/∂zⁿ))² + ((∂/∂z)(∂ⁿG/∂zⁿ))²]^(1/2)    (5)
For n = 2, equation (4) corresponds to the enhanced analytic signal derived from the second vertical derivative, and the amplitude of equation (5) becomes:

|A₂(x, y)| = [(∂³G/∂x∂z²)² + (∂³G/∂y∂z²)² + (∂³G/∂z³)²]^(1/2)    (6)

where G is the potential-field anomaly.
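On gridded data, the derivatives entering equations (3), (5) and (6) are usually evaluated numerically. The sketch below (Python/NumPy) takes the vertical derivative in the wavenumber domain, as multiplication by |k| to the appropriate power, and the horizontal derivatives by finite differences; the function names and grid handling are assumptions of the example, not the paper's code:

```python
import numpy as np

def vertical_derivative(g, dx, dy, order=1):
    """order-th vertical derivative of a gridded anomaly, computed in
    the wavenumber domain as multiplication by |k|**order."""
    ny, nx = g.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, dy)
    k = np.sqrt(kx[None, :] ** 2 + ky[:, None] ** 2)
    return np.fft.ifft2(np.fft.fft2(g) * k ** order).real

def enhanced_as_amplitude(g, dx, dy, n=0):
    """Amplitude of the nth-order enhanced analytic signal, eq. (5);
    n = 0 reduces to the simple analytic-signal amplitude of eq. (3)."""
    gzn = vertical_derivative(g, dx, dy, order=n) if n > 0 else g
    gx = np.gradient(gzn, dx, axis=1)       # horizontal x-derivative
    gy = np.gradient(gzn, dy, axis=0)       # horizontal y-derivative
    gz = vertical_derivative(gzn, dx, dy)   # vertical derivative
    return np.sqrt(gx ** 2 + gy ** 2 + gz ** 2)
```

In practice the grid should be expanded and tapered before the FFT to suppress edge effects, a detail omitted here for brevity.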
Equation (6) is used hereafter as an example to demonstrate the improvement in the detection of geologic boundaries. In this paper the applicability of the method is demonstrated on synthetic gravity data from a vertical cylinder model. The method was also applied successfully to real gravity data from southwest England: the granite bodies and the faults separating them have been enhanced, and the results of our method correlate broadly with the geological map.
https://jesphys.ut.ac.ir/article_21980_c44ca5c40a6e81f5f9f62260124c7668.pdf

Improvement of Temporal Resolution of Seismic Data Using Singular Spectrum Analysis and Autoregressive Methods
Hamzeh Mohammadi Gheimasi, Hamid Reza Siahkoohi, Karo Lucas
Journal of the Earth and Space Physics (ISSN 2538-371X), Institute of Geophysics, University of Tehran

Temporal resolution of seismic data is proportional to the seismic bandwidth. Because of the band-limited nature of the available data, seismic data lack sufficient temporal resolution even after deconvolution. The lower and higher frequencies of the seismic data spectrum are missing and cannot be recovered by the usual deconvolution methods: absorption removes the high frequencies of the spectrum, and recovery of the low frequencies is also difficult (Lindseth, 1979). Many different deconvolution techniques have been developed and have been used in seismic processing for many years to improve the temporal resolution of seismic data. In this paper we introduce a method that generalizes the autoregressive (AR) spectral extrapolation originally applied by Karsli (2006), which extrapolates the deconvolved seismic spectrum to recover the missing frequencies.
When reflectors are numerous, the seismic spectrum is complicated and extrapolation by AR-based methods is uncertain. The introduced method takes the part of both the real and imaginary spectra where the S/N is high compared with the rest of the spectrum, and extrapolates the lower and higher portions of the spectrum using Singular Spectrum Analysis (SSA) and an autoregressive model. Experience shows that a 3–10 dB drop from the maximum amplitude of the spectrum of the source wavelet delimits a high-S/N portion of the spectrum. Because of unwanted noise, the usual regression algorithms do not lead to favorable results; in the second step of the extrapolation algorithm we therefore decompose the selected spectrum by SSA.
SSA is a tool to extract information from short and noisy chaotic time series (Vautard et al., 1992). It relies on the Karhunen-Loève decomposition of an estimate of the covariance matrix based on M lagged copies of the time series. As the first step, the embedding procedure is applied to construct a sequence of M-dimensional vectors from the time series x(t), t = 1, …, N:

X_i = (x_i, x_{i+1}, …, x_{i+M−1}),  i = 1, …, N′,  with N′ = N − M + 1.
The N′ × M trajectory matrix D of the time series has these M-dimensional vectors as its rows, and is a Hankel matrix (the elements on the antidiagonals i + j = constant are equal). In the second step, the M × M covariance matrix is calculated as

C = (1/N′) Dᵀ D.
Eigen elements can be determined by singular value decomposition (SVD):

D = U Σ Vᵀ.

The elements of the diagonal matrix Σ = diag(σ₁, …, σ_M) are the singular values of D and are equal to the square roots of the eigenvalues λ_k. The eigen elements {(λ_k, ρ_k): k = 1, …, M} are obtained from

C ρ_k = λ_k ρ_k.
Each eigenvalue λ_k estimates the partial variance in the direction ρ_k, and the sum of all the eigenvalues equals the total variance of the original time series. In the third step, the time series is projected onto each eigenvector ρ_k, yielding the corresponding principal component a_k(t):

a_k(t) = Σ_{j=1..M} x(t + j − 1) ρ_k(j).
Each of the principal components (a nonlinear or linear trend, a periodic or quasi-periodic pattern, or a multiperiodic pattern) has a narrow-band frequency spectrum and well-defined characteristics to be estimated.
In the fourth step, the time series is reconstructed by combining the associated principal components, each component being mapped back to a series of length N by diagonal averaging of the corresponding rank-one term of the decomposition.
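The four SSA steps above (embedding, eigen decomposition, projection, reconstruction) can be sketched compactly. The sketch below (Python/NumPy) takes the SVD of the trajectory matrix directly, which gives the same eigen elements as diagonalizing the covariance matrix; names and interfaces are assumptions for illustration:

```python
import numpy as np

def ssa_components(x, M):
    """Decompose a series x into M SSA components whose sum returns x.

    Step 1: build the N' x M trajectory (Hankel) matrix D.
    Step 2: SVD of D (eigen elements equivalent to those of C).
    Steps 3-4: each rank-one piece is mapped back to a series by
    averaging over the antidiagonals (the Hankel structure of D).
    """
    x = np.asarray(x, dtype=float)
    N = len(x)
    Np = N - M + 1
    # D[i, j] = x[i + j]: row i is the window (x_i, ..., x_{i+M-1})
    D = np.column_stack([x[j:j + Np] for j in range(M)])
    U, s, Vt = np.linalg.svd(D, full_matrices=False)
    comps = np.zeros((M, N))
    for k in range(M):
        Dk = s[k] * np.outer(U[:, k], Vt[k])      # rank-1 piece
        for t in range(N):
            vals = [Dk[t - j, j] for j in range(M) if 0 <= t - j < Np]
            comps[k, t] = np.mean(vals)           # diagonal averaging
    return comps
```

Summing all M components reproduces the series exactly; keeping only the leading components acts as the noise-rejection step applied before the AR extrapolation.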
Data extrapolation algorithms based on AR techniques have been commonly used for modeling the past (backward) and future (forward) values of a signal (Walker and Ulrych, 1983; Miyashita et al., 1985). Each principal component is passed to the AR extrapolation method to obtain the next and previous missing frequencies of that principal component using the extrapolation equation

x_k = a_1 x_{k−1} + a_2 x_{k−2} + … + a_p x_{k−p} + e_k,

the autoregressive equation of order p, i.e. the kth frequency sample is extrapolated from a linear combination of the p previous samples. The a_j are the AR coefficients and e_k is random noise. The coefficients can be computed from autocorrelation estimates, from partial autocorrelations, or by least-squares matrix procedures. There are several approaches to selecting the model order in practical situations.
In this study, the AR model order p is selected equal to 0.3 times the length of the high-S/N portion of the trace spectrum, as suggested by Walker and Ulrych (1983).
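A minimal least-squares AR fit and forward extrapolation, matching the autoregressive equation above, can be sketched as follows (Python/NumPy; Walker and Ulrych estimate the coefficients differently, so this estimator is an assumption of the example):

```python
import numpy as np

def ar_fit(x, p):
    """Least-squares estimate of AR coefficients a_1..a_p in
    x[k] = a_1 x[k-1] + ... + a_p x[k-p] (+ noise)."""
    rows = np.array([[x[k - j] for j in range(1, p + 1)]
                     for k in range(p, len(x))])
    a, *_ = np.linalg.lstsq(rows, np.asarray(x[p:], float), rcond=None)
    return a

def ar_extrapolate(x, p, n_fwd):
    """Extend the series forward by n_fwd samples with the fitted model;
    backward extrapolation works the same way on the reversed series."""
    a = ar_fit(x, p)
    out = list(np.asarray(x, dtype=float))
    for _ in range(n_fwd):
        # predict the next sample from the p most recent ones
        out.append(float(np.dot(a, [out[-j] for j in range(1, p + 1)])))
    return np.array(out)
```

Applied per principal component, the forward and backward runs supply the high- and low-frequency extensions of the selected spectrum segment.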
After extrapolation of each principal component, the trace spectrum is reconstructed by combining the associated extrapolated principal components. The seismic data with improved temporal resolution are then obtained by an inverse Fourier transform of the SSA/AR-extrapolated spectrum. Results from synthetic and real seismic data are presented.
https://jesphys.ut.ac.ir/article_21981_330a8b7141cd0800126853f564a48cda.pdf

Phase split investigation of different magnetotelluric data polarizations in electrically anisotropic media
Mansoureh Montahaie, Behroz Oskooi
Journal of the Earth and Space Physics (ISSN 2538-371X), Institute of Geophysics, University of Tehran

Electrical anisotropy in the earth, i.e. the dependence of the current density on the orientation of the electric field in a medium, has received considerable attention in recent magnetotelluric (MT) observations. Several suggestions of electrical anisotropy are based upon MT observations of “phase splits”, analogous to shear-wave splitting in seismology. MT phase data are emphasized over the amplitude responses because shallow, small-scale conductivity heterogeneities significantly distort the MT amplitude responses (an effect known as galvanic distortion), while the MT phase responses remain immune.
To investigate the MT phase response we use the tensor representation of the MT phase introduced by Caldwell et al. (2004). This representation has the advantage of considerably simplifying the analysis of the galvanic distortion effect; moreover, no assumption about the dimensionality of the underlying regional conductivity structure is required.
The properties of the generally asymmetric “phase tensor” are best understood through the tensor's graphical representation as an ellipse (Figure 1). The tensor's principal axes and principal values correspond to the directions of the major and minor axes of the ellipse and to the lengths of the corresponding radii, respectively.
For a uniform-conductivity half-space, a circle of unit radius represents the phase tensor at all periods. More generally, if the conductivity is both isotropic and 1-D, the radius of the circle varies with period according to the variation of conductivity with depth; for example, the radius increases if the conductivity increases with depth. In the 2-D case, the strike of the regional conductivity distribution defines a natural orientation for the coordinate system, with the x-axis parallel to the structural strike direction, and the phase tensor's principal axes are parallel and perpendicular to the strike direction of the regional conductivity structure.
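Writing the impedance tensor as Z = X + iY, the phase tensor of Caldwell et al. (2004) is Φ = X⁻¹Y, which takes only a few lines to compute (Python/NumPy sketch):

```python
import numpy as np

def phase_tensor(Z):
    """MT phase tensor (Caldwell et al., 2004): Phi = X^{-1} Y,
    where Z = X + iY is the 2x2 complex impedance tensor.
    For a 1-D isotropic earth, Phi is a multiple of the identity,
    so its ellipse degenerates to a circle."""
    X, Y = Z.real, Z.imag
    return np.linalg.solve(X, Y)   # solves X @ Phi = Y
```

Because galvanic distortion multiplies Z by a real matrix C on the left, X and Y are distorted identically and (CX)⁻¹(CY) = X⁻¹Y, so Φ is unchanged; this is the property exploited in the analysis.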
The results of anisotropic models discussed in this paper were computed with the 2D anisotropic resistivity modeling code of Pek and Verner (1997) that uses a finite difference algorithm and makes it possible to consider 2D structures with an arbitrarily oriented anisotropy.
The numerical examples investigated in this paper show that the MT phase response of an electrically uniform but anisotropic half-space is independent of the polarization direction, and no phase split occurs. Using several 1-D and 2-D anisotropic models, it is demonstrated that MT phase splitting results from spatial differences or gradients in conductivity, not from the inherent bulk properties of the anisotropic conductivity tensor. In this respect, MT phase splitting is therefore fundamentally different from shear-wave splitting in elastic anisotropy investigations.
https://jesphys.ut.ac.ir/article_21982_4815069827c34afec6b4d29cc1086c84.pdf

Verification of MM5 forecast precipitation over Iran
Akram Hedayati Dezfuli, Majid Azadi
Journal of the Earth and Space Physics (ISSN 2538-371X), Institute of Geophysics, University of Tehran

During the last several years, the application of numerical weather prediction models in the country has become common in both research and operations, even though systematic verification of the models' results using statistical methods has rarely been conducted (Sodoudi et al., 2009). This paper compares the MM5 24-hour precipitation forecasts with the corresponding observations using standard scores for categorical forecasts associated with 2×2 contingency tables, for different precipitation thresholds, over nine sub-regions of Iran. The comparison covers the +24h/+48h/+72h forecasts for a four-month winter period from December 2004 to March 2005. The performance of the model was assessed against all available synoptic and climatological stations scattered across the country at three precipitation thresholds. The 0.1 mm/24h threshold was taken as the rain/no-rain event; the other two intervals, 0.1-10 and greater than 10 mm/24h, represent light and heavy precipitation, respectively.
Based on the long term means of precipitation of different parts of Iran, nine different sub-regions were defined and verification was conducted for the whole country and nine different sub-regions separately.
In this study, for computing the verification scores, the quantity of precipitation is treated as a dichotomous variable by applying different precipitation thresholds. The standard approach is to record the frequencies with which precipitation was observed and forecast in a two-by-two table, and then to quantify forecast quality with summary measures of the table. The structure of a typical contingency table is presented in Table 1.
Table 1. Rain contingency table is applied at each verification observation site over the period of verification. A threshold value (e.g., 0.1 mm day-1) is chosen to separate rain from no-rain events. Here, a is the number of correct rain forecasts or hits, b is the number of false alarms, c is the number of misses, and d is the number of correct predictions of rain amount below the specified threshold. From McBride and Ebert (2000).
                         Observed
Predicted         Rain        No rain
Rain               a             b
No rain            c             d
The verification scores used in this study are as follows:
Threat score (TS), also called the critical success index (CSI). In terms of Table 1, the threat score is computed as TS = a / (a + b + c). The worst possible threat score is zero, and the best possible threat score is one.
The bias is computed as B = (a + b) / (a + c), i.e. the ratio of the number of rain forecasts to the number of observed rain events.
Unbiased forecasts exhibit B = 1. A bias greater than one indicates that the event was forecast more often than it was observed, which is called overforecasting. Conversely, a bias less than one indicates that the event was forecast less often than it was observed, i.e. underforecast. Regarding only the occurrence of the event as “the” event of interest, the hit rate is the ratio of correct forecasts to the number of times the event occurred: H = a / (a + c).
The false alarm rate, the ratio of false alarms to the total number of non-occurrences of the event, is F = b / (b + d).
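As a minimal sketch (the function name and array interface are illustrative, not taken from the paper), the four scores follow directly from the contingency counts once paired forecast and observed precipitation amounts are thresholded:

```python
import numpy as np

def contingency_scores(forecast, observed, threshold=0.1):
    """Categorical verification scores from paired precipitation values (mm).

    `threshold` separates events from non-events (0.1 mm/24h = rain/no-rain).
    Assumes the data contain at least one observed event and one non-event,
    so the denominators are non-zero.
    """
    f = np.asarray(forecast) >= threshold   # forecast "rain" events
    o = np.asarray(observed) >= threshold   # observed "rain" events
    a = np.sum(f & o)     # hits
    b = np.sum(f & ~o)    # false alarms
    c = np.sum(~f & o)    # misses
    d = np.sum(~f & ~o)   # correct no-rain forecasts
    return {
        "TS": a / (a + b + c),   # threat score / CSI
        "B": (a + b) / (a + c),  # frequency bias
        "H": a / (a + c),        # hit rate
        "F": b / (b + d),        # false alarm rate
    }
```

For example, a station series with one false alarm and no misses yields B > 1 (overforecasting) while H stays at 1.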
Examination of the calculated scores shows that the model forecasts of the rain/no-rain event for the four-month period over Iran are correct 80% of the time, although in general the results show an overforecasting tendency (B = 1.5). The fairly good performance of the model is mainly due to the fact that, during the four-month period considered here, precipitation over the country was mainly associated with large-scale mid-latitude synoptic systems, in which large-scale advective processes are primarily responsible for producing the precipitation; indeed, it is a common observation among professional forecasters in Iran that the models are unable to predict small-scale convective precipitation successfully. Examination of the results for the different precipitation thresholds shows that model performance varies with threshold: the results for the below-0.1 and above-10 mm/day thresholds are more accurate than those for the other categories. The results for the different regions show that the model performs better for lower precipitation thresholds over the fairly dry regions in the south, and for high precipitation thresholds over the wetter regions in the north of the country. It should be mentioned that heavy precipitation in the south-eastern regions is mainly convective and, as noted above, lower model performance is thus expected. Some of the deficiencies in the results are due to the fact that the high-resolution model output is compared against a low-density network of synoptic stations located mainly in lowlands. For an extensive verification of the model results it is thus necessary to use a denser observational network.
https://jesphys.ut.ac.ir/article_21983_a95cbfaafa64d129c87e09d79517e3d7.pdf

Institute of Geophysics, University of Tehran
Journal of the Earth and Space Physics, ISSN 2538-371X, Vol. 36, No. 3 (2010-11-22)
An energetic study of the relation between NAO and the large-scale tropospheric flow in South West Asia
Mohammad Ali Nasre Esfahani (ORCID 0000-0001-6249-3355); Farhang Ahmadi-Givi; Ali Reza Mohebalhojeh
Journal Article

Previous studies show that the North Atlantic Oscillation (NAO) is the dominant mode of variability in the Northern Hemisphere winter, with marked climatic effects on its downstream regions. In this article, the effects of some important forcing terms in the time-tendency equation of the eddy kinetic energy (EKE) in critical positive months (CPM) and critical negative months (CNM) of the NAO are studied using NCEP/NCAR reanalysis data. The data span the years 1950–2005 for the winter months (December to February). The critical months are defined on the basis of the monthly NAO index and comprise 29 CPM and 33 CNM. The selected forcing terms are the baroclinic conversion (BCC), the barotropic conversion (BTC), the convergence of the total energy flux (CTF) and the convergence of the ageostrophic flux (CAF). The ensemble mean of the vertical average of these forcing terms, as well as of the baroclinic generation (BCG) term representing the generation of available potential energy, is computed over the area 0–90°E, 20–70°N for CPM and CNM. In addition to the usual ensemble mean, to avoid cancellation, separate averages are also taken over the positive and negative values of the foregoing quantities.
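For orientation, the EKE budget referred to above can be written schematically as follows. This is only a sketch: the precise definitions of each term follow the paper, and D here denotes unresolved dissipation and residual effects.

```latex
\frac{\partial}{\partial t}\langle K_e \rangle
  \;=\; \mathrm{BCC} \;+\; \mathrm{BTC} \;+\; \mathrm{CTF} \;+\; \mathrm{CAF} \;-\; D,
\qquad
K_e = \tfrac{1}{2}\left(u'^{2} + v'^{2}\right),
```

where primes denote eddy deviations from the mean flow and angle brackets a vertical average.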
The results indicate that there is no considerable difference in the amount of EKE between CPM and CNM in the Mediterranean region. However, moving eastward, the values of EKE become greater in the CPM than in the CNM, such that the difference between the two reaches its maximum over the south-west of Iran. In the CPM, all of the computed forcing terms are larger than in the CNM; this is particularly true of their extreme values. The largest values of BCG are observed at high latitudes, with two distinct centers of maxima for the CPM and CNM. In the Mediterranean region, the average over the positive values of BCG is greater in CPM, whereas the average over the negative values shows negligible differences between CPM and CNM. Despite the greater generation of available potential energy in the Mediterranean region in CPM, it seems that the other terms act in such a way as to make the mean EKE nearly equal in the CPM and CNM.
In the Mediterranean region and the Middle East, the BTC is the dominant forcing term in EKE production. Comparison of the distribution of BTC with the position of the 300-hPa subtropical jet shows that the two centers of maximum BTC (over the west of Iran and the central Mediterranean) are located north of the subtropical jet, while the minimum values of BTC are observed over the Red Sea, south of the subtropical jet in winter. The interesting point is that the magnitude of the BTC term is greater in CPM than in CNM, coinciding with a stronger subtropical jet; this points to a direct relation between the magnitude of BTC and the intensity of the subtropical jet.
The maps of the ensemble mean of the negative and positive values of the energy flux show the same pattern in CNM, while some differences are observed in CPM. In the central Mediterranean region, there is energy divergence in both phases of the NAO, which is stronger in CPM. The direction of the energy flux vectors indicates that energy radiated from the central Mediterranean region is transferred southeastward to an energy flux convergence over northeastern Africa and the Red Sea; this convergence is stronger and more extensive in CPM. Conversion of EKE to zonal-mean kinetic energy by the BTC in this area may be responsible for the observed stronger subtropical jet in CPM. In other words, the presence of a strong energy divergence in the central Mediterranean can be a significant source of energy for eastward-travelling tropospheric disturbances in CPM.
The pattern of BCG shows considerable conversion of zonal-mean available potential energy to eddy available potential energy in the cyclogenesis centers of the Mediterranean region, as well as in the west and east of Iran. Comparison of BCG in the two phases of the NAO indicates that the connection of the Mediterranean storm track to the Atlantic storm track is weaker in CPM than in CNM. This means that in CPM the Mediterranean storm track can act as a distinct center of action, much in the same way as the Pacific and Atlantic storm tracks.
https://jesphys.ut.ac.ir/article_21984_8fa4619cec35e9b181dcaf5cb8387ce3.pdf

Institute of Geophysics, University of Tehran
Journal of the Earth and Space Physics, ISSN 2538-371X, Vol. 36, No. 3 (2010-11-22)
Numerical solution of unsteady and non-linear Rossby adjustment problem using fourth-order compact MacCormack scheme
Sarmad Ghader (ORCID 0000-0001-9666-5493); Abbas Ali Aliakbari-Bidokhti (ORCID 0000-0003-4841-2218); Saeed Falahat
Journal Article

Compact finite difference schemes have been found to give simple ways of achieving high accuracy at low computational cost. During the past two decades, compact schemes have been used extensively for the numerical simulation of various fluid dynamics problems, and they have also been applied to some prototype geophysical fluid dynamics problems (e.g., the shallow water equations). Most compact finite difference schemes are symmetric (usually with a 3- or 5-point stencil), and finding each derivative requires a matrix inversion. However, by splitting the derivative operator of a central compact scheme into one-sided forward and backward operators, a family of compact MacCormack-type schemes can be derived.
While these compact schemes are as accurate as the original central compact methods used to derive the one-sided forward and backward operators, they require less computational work per point. In addition, their one-sided nature is an essential advantage, especially when severe gradients are present. These two features (high accuracy and low computational cost) make the compact MacCormack-type scheme an attractive candidate for numerical models of the atmosphere and oceans.
This work focuses on the application of the fourth-order compact MacCormack-type scheme to the numerical solution of the unsteady and non-linear Rossby adjustment problem in one and two dimensions. The second-order MacCormack method is also used for comparison. In the one-dimensional case, a single-layer shallow water model is used to study the adjustment problem; in the two-dimensional case, the conservative form of the two-dimensional shallow water equations is used. For both cases, the numerical simulations consider the time evolution of a fluid layer initially at rest with a discontinuity in the height field.
Examination of the accuracy and efficiency of the fourth-order compact MacCormack scheme for some analytical linear and nonlinear prototype problems indicates its superiority over the fourth-order centered compact, second-order centered and second-order MacCormack finite difference schemes, especially in the presence of a discontinuity in the numerical solution.
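For readers unfamiliar with the predictor-corrector idea underlying these schemes, here is a minimal sketch of the classic second-order MacCormack method applied to the 1-D linear advection equation on a periodic grid. It is illustrative only: the paper's fourth-order compact variant replaces the simple one-sided differences below with one-sided compact operators.

```python
import numpy as np

def maccormack_advection(u0, c, dx, dt, nsteps):
    """Second-order MacCormack scheme for u_t + c*u_x = 0, periodic in x.

    Predictor uses a forward one-sided difference, corrector a backward
    one-sided difference on the predicted field; averaging the two stages
    gives second-order accuracy in space and time (stable for |c*dt/dx| <= 1).
    """
    u = u0.copy()
    r = c * dt / dx  # Courant number
    for _ in range(nsteps):
        # Predictor: forward difference (u_{i+1} - u_i)
        u_pred = u - r * (np.roll(u, -1) - u)
        # Corrector: backward difference (u*_i - u*_{i-1}), then average
        u = 0.5 * (u + u_pred - r * (u_pred - np.roll(u_pred, 1)))
    return u
```

On a smooth profile advected one full period, the scheme returns close to the initial state while conserving the grid sum exactly (up to roundoff), which is the conservative behavior the shallow water formulation in the paper relies on.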
For the Rossby adjustment problem, the results show a clear improvement of the numerical solution generated by the fourth-order compact MacCormack scheme over the second-order MacCormack method, in particular near the discontinuity. Moreover, the overhead computational cost of the fourth-order scheme relative to the second-order method is very low. It is also observed that, to maintain numerical stability, it is necessary to apply a compact spatial filter with the fourth-order compact MacCormack-type scheme at each time step.
https://jesphys.ut.ac.ir/article_21985_d0c261de80853d5a1d4a830a8e456fdd.pdf

Institute of Geophysics, University of Tehran
Journal of the Earth and Space Physics, ISSN 2538-371X, Vol. 36, No. 3 (2010-11-22)
Using the magnetovariational sounding method to image the deep resistivity structure of the Trans-European suture zone
Banafsheh Habibian Dehkordi; Behroz Oskooi; Heinrich Brasse
Journal Article

The conductivity distribution across the Trans-European Suture Zone (TESZ) is presented, based on measurements along a 400 km northeast-directed profile starting in the German-Polish Basin, crossing the TESZ and ending at the East European Craton. Two-dimensional inversion was applied to magnetotelluric transfer functions and magnetovariational responses from 38 simultaneously observed long-period sites. The input data for the inversion were created by rotating all transfer functions to the strike direction obtained from strike and dimensionality analysis. The results show a thick sedimentary cover, several crustal inhomogeneities and a deep conductive structure below the center of the TESZ.
In order to achieve a stable model, several sensitivity analyses were carried out.
https://jesphys.ut.ac.ir/article_21986_33cf0ec8e4e4910aeac341387d1b6d81.pdf
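For context, rotating the impedance tensor Z to a strike angle θ — the operation behind "rotating all transfer functions to the strike direction" — has the standard form below. This is a generic sketch of the rotation, not a description of the authors' specific processing chain.

```latex
\mathbf{Z}'(\theta) \;=\; \mathbf{R}(\theta)\,\mathbf{Z}\,\mathbf{R}(\theta)^{\mathsf{T}},
\qquad
\mathbf{R}(\theta) \;=\;
\begin{pmatrix}
\cos\theta & \sin\theta \\
-\sin\theta & \cos\theta
\end{pmatrix},
```

where Z relates the horizontal electric and magnetic field components via E = Z H, and θ is measured from the measurement axes to the geoelectric strike.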