Journal of the Earth and Space Physics, Institute of Geophysics, University of Tehran, ISSN 2538-371X, Vol. 38, No. 2, 22 July 2012

Applying optically stimulated luminescence to determine the slip-rate of part of the Har-Us-Nuur Fault
Samira Ameri, Morteza Fattahi, Hamideh Amini, Edvin Nissen, Richard Walker
pp. 1-14, DOI: 10.22059/jesphys.2012.28430

Optically Stimulated Luminescence (OSL) is currently one of the most important methods for dating minerals deposited during the Quaternary. It plays an important role in paleoseismological and tectonic studies, particularly in arid and semi-arid regions. OSL dates the last exposure of sediment to sunlight and can therefore directly date coseismic or post-seismic features.
The aim of this study was to employ the OSL method to date samples collected from alluvial fans along the Har-Us-Nuur fault in Mongolia. Mongolia is an arid zone, so OSL dating should provide reliable ages for this region. The Har-Us-Nuur fault is one of the most important faults on the eastern margin of the Great Lakes Depression of the Altai Mountains in western Mongolia. It is an active right-lateral strike-slip fault and a source of large earthquakes in the region. Two alluvial fans (A2 and F1) cut by the fault were selected for slip-rate determination. Displacements were estimated from differential GPS measurements and Landsat images, giving 130 ± 40 m and 15 ± 10 m for the A2 and F1 fans, respectively.
Two OSL samples from fan A2 and one from fan F1 were collected using steel food cans, which were wrapped in several layers of black plastic to prevent exposure to light. The cans were unwrapped in the laboratory under dim red light. Sediment from both ends of each can was used for ICP measurements. Alpha, beta and gamma dose rates were calculated using the conversion factors of Aitken (1985), Bell (1980) and Mejdahl (1979), and the cosmic dose rate was determined following the method of Prescott and Hutton (1988). The moisture content was determined by drying at 40°C. The radioactive element concentrations (U, Th and 40K), moisture content, cosmic dose rate and other parameters were used to calculate the annual dose (Table 1).
The middle part of the sediment in each can was wet-sieved (90-250 µm) using plastic sieves, each of which was discarded after use on a single sample. The samples were then treated with 1N HCl for 2 days and with H2O2 to remove carbonates and organic matter, respectively. After removing heavy minerals denser than 2.72 g cm-3 by heavy-liquid separation, the sediments were etched with 48% HF for one hour. The remaining quartz was rinsed in distilled water, treated with 10% HCl and rinsed again in distilled water. The wet quartz was then oven-dried and dry-sieved (90 µm) before being mounted as a monolayer on 10 mm aluminum disks using Silko-Spray silicone oil. Between 10 and 14 aliquots were prepared and measured for De determination for each sample. The De was calculated using the SAR protocol of Murray and Wintle (2000) (Table 1); the detailed experimental conditions were similar to those outlined by Fattahi et al. (2006). Three regeneration dose points were used to construct the dose growth curves, one point was used to estimate the recuperation effect, and a recycling point was used to check the reliability of the sensitivity corrections. The numbers of accepted individual De values were 7, 6 and 11 aliquots for samples A2a, A2b and F1, respectively (Figure 6). The average De for each sample, calculated using the histogram method of the Analyst program, was 97.23 ± 43.52, 71.03 ± 28.84 and 33.97 ± 13.66 Gy, respectively (Table 1).
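The age calculation itself is simply the equivalent dose De divided by the annual dose rate, with the relative errors combined in quadrature. A minimal sketch of this step; the dose-rate values below are illustrative placeholders, not the paper's Table 1 entries:

```python
import math

def osl_age(de_gy, de_err_gy, dose_rate_gy_ka, dose_rate_err_gy_ka):
    """Return the OSL age (ka) and its 1-sigma error.

    Age = De / annual dose rate; relative errors are combined in quadrature.
    """
    age = de_gy / dose_rate_gy_ka
    rel_err = math.sqrt((de_err_gy / de_gy) ** 2
                        + (dose_rate_err_gy_ka / dose_rate_gy_ka) ** 2)
    return age, age * rel_err

# Illustrative inputs: De = 33.97 +/- 13.66 Gy, dose rate = 4.55 +/- 0.25 Gy/ka
age, err = osl_age(33.97, 13.66, 4.55, 0.25)
```

Note that the age error is dominated by the large De scatter, which is why the reported ages carry roughly 40% uncertainties.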
OSL provided ages of 18.89 ± 7.72, 26.28 ± 11.83 and 7.47 ± 3.02 ka for A2a, A2b and F1, respectively. Finally, slip rates were determined by dividing the displacements by the ages, giving 3.48-8.68 mm/yr and 0.79-3.23 mm/yr for the A2 and F1 surfaces, respectively (Table 3).
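One simple way to turn a displacement and an age, each with an uncertainty, into a slip-rate interval is to bracket them at their 1-sigma limits (largest displacement over youngest age, and vice versa). A minimal sketch under that assumption; the exact interval arithmetic behind Table 3 may differ:

```python
def slip_rate_range(disp_m, disp_err_m, age_ka, age_err_ka):
    """Bracket the slip rate (mm/yr) using 1-sigma limits on offset and age.

    1 m per ka equals 1 mm/yr, so no unit-conversion factor is needed.
    """
    rate_min = (disp_m - disp_err_m) / (age_ka + age_err_ka)
    rate_max = (disp_m + disp_err_m) / (age_ka - age_err_ka)
    return rate_min, rate_max

# e.g. fan F1: 15 +/- 10 m of offset accumulated over 7.47 +/- 3.02 ka
lo, hi = slip_rate_range(15.0, 10.0, 7.47, 3.02)
```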
Full text: https://jesphys.ut.ac.ir/article_28430_4f723fc240abee0a137f1c1e3a2d3a6c.pdf

Fractal distribution of induced seismicity in Masjed Soleyman dam site (South West of Iran)
Mohammad Reza Ebrahimi, Mohammad Tatar
pp. 15-27, DOI: 10.22059/jesphys.2012.28431

The Masjed Soleyman dam site is located in the Zagros Mountains of southwestern Iran, one of the most seismically active zones of the Alpine-Himalayan belt. After impoundment of the dam (height 177 m, reservoir volume 261 million m3), the seismicity of the area increased considerably, showing the impact of the reservoir on the rate of seismicity. We analyzed the seismicity of the area in terms of the spatial variation of the fractal dimension D and compared it with the b-value of the frequency-magnitude relation. In mid-2006, a digital seismograph network of five medium-band stations, each consisting of a Trillium 40 seismometer (40 s to 50 Hz) connected to a 24-bit Nanometrics Taurus recorder, was installed around the axis and reservoir of the Masjed Soleyman dam by the International Institute of Earthquake Engineering and Seismology (IIEES). The earthquake catalog of the Masjed Soleyman network from June 1, 2006 to October 1, 2007 contains 3609 well-located events. Because the epicenters of the induced earthquakes lie mostly within a 30 km radius of the dam and their hypocenters at depths of less than 20 km, we selected only the 1924 events that satisfied these criteria. During the monitoring period, two large earthquakes, with magnitudes 3.9 and 3.6, struck the area on November 23, 2006 and September 1, 2007, respectively; both occurred following a rapid change in the water level of the reservoir. Phenomena such as earthquakes and faults are self-similar systems at any scale and can be described by a power-law relation. The exponent of this relation, which quantifies the degree of complexity of a chaotic system, is called the fractal dimension. Although the term "fractal" was introduced by Mandelbrot in 1967, power-law relations had been used to investigate geophysical phenomena earlier.
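For point sets such as epicenter distributions, the fractal dimension is typically estimated as the correlation dimension: the correlation integral C(r), the normalized count of point pairs closer than r, scales as r^D, so D is the slope of log C(r) versus log r. A minimal sketch on a synthetic 2-D point set (the radii and test data are illustrative, not the catalog used in the paper):

```python
import math

def correlation_dimension(points, radii):
    """Estimate the correlation dimension as the least-squares slope
    of log C(r) versus log r, with C(r) = 2 / (N (N - 1)) * (number of
    point pairs separated by less than r)."""
    n = len(points)
    log_r, log_c = [], []
    for r in radii:
        pairs = sum(1
                    for i in range(n)
                    for j in range(i + 1, n)
                    if math.dist(points[i], points[j]) < r)
        if pairs:
            log_r.append(math.log(r))
            log_c.append(math.log(2.0 * pairs / (n * (n - 1))))
    m = len(log_r)
    mean_x, mean_y = sum(log_r) / m, sum(log_c) / m
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(log_r, log_c))
    den = sum((x - mean_x) ** 2 for x in log_r)
    return num / den

# Points spread along a line should give a dimension close to 1
line = [(0.01 * i, 0.0) for i in range(200)]
d = correlation_dimension(line, [0.05, 0.1, 0.2, 0.4])
```

Clustered seismicity concentrated near a point would instead drive the estimate toward 0, which is the behavior the low D values in this study are interpreted against.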
Gutenberg and Richter defined a relationship between the frequency and magnitude of earthquakes in 1945. The slope of the plot of occurrence numbers against magnitude is known as the b-value and has been used as an indicator of the seismic activity of a region. In this study we try to understand the impact of reservoir impoundment on both D and the b-value. We used the correlation dimension, the most common method for calculating the fractal dimension of earthquake hypocenters. To map the variation of the fractal dimension in space, the entire area was divided into 0.05° × 0.05° grid cells, with a 25% overlap to give a comprehensive picture of the map. Twenty-three grid cells were created interactively, and cells with fewer than 40 events were excluded from the estimation; the number of events per cell varied from 40 to 109. In this exercise the fractal dimension ranged from 0.40 to 0.93, i.e. all values were less than 1. The number of earthquakes N with magnitude greater than Mc is related to magnitude by log N = a - bM, widely known as the Gutenberg-Richter relation. In this study, the b-value was estimated by the maximum likelihood method, which is claimed to give a better estimate. A threshold magnitude of 0.9 was found for the dam region by examining the log-linear plot of the cumulative number of events versus magnitude. The b-value in the area varies from 0.6 to 1.3. The low D values obtained here may have several explanations.
Seismic activity at the Masjed Soleyman reservoir site occurs in a small area, and the distances between mainshocks and aftershocks are very small (sometimes less than 3 km), which may result in a low D value. The seismicity of the region can be attributed to fault interactions and reservoir-triggered forces that generate earthquakes in small clusters. Since our data belong to the protracted stage of seismicity in the area, in which the main factors for earthquake occurrence are fluid seepage into the crust and increased pore-pressure diffusion, high permeability and the presence of fluids in the fault zone and surrounding area may reduce effective stresses and produce a relatively low D. The low D value could also be the effect of high pore-fluid pressure in the region, as pore-fluid pressure reflects the redistribution of stress in the substratum. When D tends to zero, the seismicity may not be associated with any particular fault but may instead be connected to the stress generated by high pore-fluid pressure; this indicates a point-source zone. The high b-values in the studied region, especially in the vicinity of the dam, indicate a heterogeneous stress distribution in the crust, whereas homogeneous stress results in lower b-values. A gradual increase in pore pressure is the main factor in the occurrence of induced earthquakes, and since this increase can weaken faults through pore-pressure diffusion, a heterogeneous stress distribution and high b-values are expected in the area. Thus the lower D values (D ≤ 0.6) and higher b-values (b ≥ 0.8) in some grid cells indicate a heterogeneous structure, and the dam region appears susceptible to induced earthquakes. The results show a clear correlation between D and the b-value, indicating the existence of induced seismicity in the area.
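The maximum-likelihood b-value referred to above is usually computed with Aki's estimator, b = log10(e) / (M_mean - (Mc - dM/2)), where M_mean is the mean magnitude of events at or above the threshold Mc and dM is the magnitude bin width (Utsu's correction). A minimal sketch; the catalog below is made up for illustration:

```python
import math

def b_value_mle(mags, mc, bin_width=0.0):
    """Aki/Utsu maximum-likelihood b-value for events with M >= mc."""
    selected = [m for m in mags if m >= mc]
    mean_mag = sum(selected) / len(selected)
    return math.log10(math.e) / (mean_mag - (mc - bin_width / 2.0))

# Hypothetical catalog, threshold Mc = 0.9, magnitudes binned at 0.1 units
catalog = [0.5, 0.9, 1.0, 1.2, 1.3, 1.5, 1.8, 2.1, 2.6, 3.9]
b = b_value_mle(catalog, mc=0.9, bin_width=0.1)
```

Unlike a least-squares fit to the cumulative curve, this estimator is insensitive to the binning of the magnitude axis, which is why it is generally preferred.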
The impact of increasing pore-fluid pressure on both parameters (fractal dimension D and b-value) is clearly observable in this area.
Full text: https://jesphys.ut.ac.ir/article_28431_f333d03304249bc124f04d3f981df702.pdf

Seismic zoning of Tehran Region using fuzzy sets
Elham Boostan, Noorbakhsh Mirzaei, Morteza Eskandari-Ghadi, Ali Shafiee
pp. 29-44, DOI: 10.22059/jesphys.2012.28432

Seismic hazard assessment, like many other problems in seismology, is complicated by the variety of parameters affecting earthquake occurrence. Uncertainty, which results from the vagueness and incompleteness of the data, should be treated in a rational way. Here, fuzzy set theory is used to take this uncertainty into account in the seismic hazard analysis. Fuzzy Set Theory (FST) is an attractive methodology when vague and subjective judgments about a phenomenon enter probabilistic or mathematical models. Fuzzy sets are sets whose elements have degrees of membership. In classical set theory, the membership of an element in a set is assessed in binary terms: an element either belongs or does not belong to the set. By contrast, fuzzy set theory permits a gradual assessment of membership, described by a membership function valued in the real unit interval [0, 1]. Fuzzy sets generalize classical sets, since the indicator functions of classical sets are the special case of membership functions that take only the values 0 and 1. In fuzzy set theory, classical bivalent sets are usually called crisp sets.
Tehran is a densely populated metropolis in which more than 10 million people live. Many destructive earthquakes have occurred in Iran over the past centuries, and historical sources indicate that Tehran has been destroyed by catastrophic earthquakes at least six times; the oldest of these occurred in the 4th century BC.
In this study, a seismic hazard assessment of the Tehran region, the capital of Iran, is conducted using a combination of a probabilistic approach and fuzzy set theory. The earthquake catalog used here contains the occurrence times and hypocenter locations of Iranian earthquakes and is derived from the catalog published by Mirzaei et al. (2002) for earthquakes from 1975 to 2000; the International Seismological Centre catalog (www.isc.ac.uk) was used to extend it to 2007.
To calculate the seismic hazard for different return periods in the probabilistic procedure, the area bounded by longitudes 49.5°-54°E and latitudes 34°-37°N was divided at 0.1° intervals, generating 1350 grid points. Seismicity parameters were evaluated using a method that accounts for magnitude uncertainty and the incompleteness of the earthquake data.
A total of 20 potential areal seismic sources are introduced, and seismicity rates, b-values and maximum expected magnitudes are assigned to each seismotectonic province and potential seismic source.
To carry out the seismic hazard analysis in the framework of fuzzy set theory, all of the variables were converted into triangular fuzzy numbers using the α-cut method. The fuzzy response was then defuzzified using the surface-center (centroid) method. Two maps were developed to present the earthquake hazard of the region as iso-acceleration contours; they display fuzzy-probabilistic estimates of peak ground acceleration (PGA) on bedrock for return periods of 475 and 50 years. PGA values for this region are estimated to be 0.35g-0.38g and 0.12g-0.15g for the 475- and 50-year return periods, respectively. The outcomes of this study should contribute to quicker and better estimation of seismic design parameters for structures.
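For a triangular fuzzy number with support [a, c] and mode m, the α-cut is the interval obtained by slicing the membership function at height α, and centroid (surface-center) defuzzification of a triangle reduces to (a + m + c) / 3. A minimal sketch of both operations; the PGA numbers are illustrative only, not the study's actual fuzzy inputs:

```python
def alpha_cut(a, m, c, alpha):
    """alpha-cut interval of a triangular fuzzy number (support [a, c], mode m)."""
    return (a + alpha * (m - a), c - alpha * (c - m))

def centroid(a, m, c):
    """Centroid (surface-center) defuzzification of a triangular fuzzy number."""
    return (a + m + c) / 3.0

# Illustrative fuzzy PGA estimate in g: support 0.30-0.40, most likely 0.35
lo, hi = alpha_cut(0.30, 0.35, 0.40, 0.5)
crisp = centroid(0.30, 0.35, 0.40)
```

Performing interval arithmetic on the α-cuts of all input variables and defuzzifying the result is what turns the probabilistic hazard computation into a fuzzy-probabilistic one.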
Full text: https://jesphys.ut.ac.ir/article_28432_b19a32a07a5cda4e9d428b2b4124de54.pdf

Improving seismic facies analysis using WTMMLA attributes, self-organizing maps and K-mean clustering
Saeed Hadiloo, Hamid Reza Siahkoohi, Ali Edalat
pp. 45-56, DOI: 10.22059/jesphys.2012.28433

Reservoir models are initially generated from estimates of specific rock properties and maps of reservoir heterogeneity. Many types of information are used in reservoir model construction. One of the most important sources is wells, including well logs and core samples. Unfortunately, well-log and core data are local measurements that may not reflect the behavior of the reservoir as a whole, and well data are not available in the initial phases of exploration. In contrast to sparse well data, 3D seismic data cover large areas, and seismic attributes extracted from them can provide information for the construction of reservoir models. Seismic facies analysis can be accomplished through the use of pattern recognition techniques.
When geological information is incomplete or nonexistent, seismic facies analysis can be done using unsupervised learning techniques. One of the most promising such techniques is Kohonen's Self-Organizing Map (SOM) (Kohonen, 2001).
In this paper we use the SOM together with time-frequency analysis to characterize reservoirs, since variations in frequency content are sensitive to subtle changes in reflectivity. In this context, we show that the wavelet transform modulus maxima line amplitudes (WTMMLA) extracted from the continuous wavelet transform (CWT) can be used to detect singularities. These singularities are then analyzed and clustered by the SOM.
SOM networks map points of an input space to points of an output space while preserving topology: points that are close in the input space should also be close in the output space (the map). Normally the input space is high-dimensional while the output is two-dimensional. The seismic attributes can be represented by vectors x = [x1, x2, ..., xn] in R^n. We assume the map has P elements, so there exist P prototype vectors mi = [mi1, ..., min], i = 1, 2, ..., P, where n is the dimension of the input vector. After SOM training, the prototype vectors represent the input data set of seismic attributes. For each input vector x, the distances between x and all prototype vectors are computed; the map unit with the smallest distance to x, i.e. the unit b satisfying ||x - mb|| = min_i ||x - mi||, is called the best matching unit (BMU). The prototype vector corresponding to the BMU and its neighbors are moved towards the winning input vector in the input space. Since one of the main objectives of this work was the identification of data clusters, we displayed the distances between neighboring prototype vectors to identify similarities among them, using the U-matrix (Ultsch, 1993). After SOM learning, the U-matrix was generated by computing, for each SOM prototype vector, the distances to its neighboring prototype vectors and their average.
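The BMU search and winner update described above reduce to an argmin over Euclidean distances followed by a step toward the input vector. A minimal sketch with made-up prototype values and learning rate (neighborhood updates are omitted for brevity):

```python
import math

def find_bmu(x, prototypes):
    """Index of the prototype vector closest to input x (Euclidean distance)."""
    return min(range(len(prototypes)),
               key=lambda i: math.dist(x, prototypes[i]))

def update_bmu(x, prototypes, lr=0.5):
    """Move the BMU toward x by learning rate lr; returns the BMU index."""
    b = find_bmu(x, prototypes)
    prototypes[b] = [m + lr * (xi - m) for m, xi in zip(prototypes[b], x)]
    return b

# Hypothetical 2-D attribute vectors and a 3-unit map
protos = [[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]]
bmu = update_bmu([0.9, 1.2], protos)
```

In a full SOM, the same update is applied with decreasing strength to the BMU's map neighbors, which is what produces the topology preservation.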
To estimate the number of seismic facies present in the data, we used a K-means partitive clustering algorithm, clustering the prototype vectors instead of the original data. In this manner, large data sets represented by the SOM prototype vectors can be grouped indirectly. The results showed that the proposed method not only provides a better understanding of the group formation but is also computationally efficient. Another benefit of this approach is noise reduction, because the prototype vectors represent local averages of the original data without any loss of resolution. To automate the classification process, we used the Davies and Bouldin (1979) index (DBI) as a means of evaluating the results of the K-means partitioning.
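The Davies-Bouldin index compares within-cluster scatter to between-centroid separation: DBI = (1/k) * sum_i max_{j != i} (S_i + S_j) / d(c_i, c_j), where S_i is the average distance of cluster i's members to their centroid; lower values indicate a better partition, so the number of facies is chosen where the DBI is smallest. A minimal sketch for 1-D data with made-up cluster contents:

```python
def davies_bouldin(clusters):
    """DBI for a list of clusters, each given as a list of 1-D points."""
    centroids = [sum(c) / len(c) for c in clusters]
    scatter = [sum(abs(p - m) for p in c) / len(c)
               for c, m in zip(clusters, centroids)]
    k = len(clusters)
    total = 0.0
    for i in range(k):
        total += max((scatter[i] + scatter[j]) / abs(centroids[i] - centroids[j])
                     for j in range(k) if j != i)
    return total / k

# Two tight, well-separated clusters give a small DBI
dbi = davies_bouldin([[0.0, 0.2], [10.0, 10.2]])
```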
Transitions, or irregular structures, present in any kind of signal carry information related to the underlying physical phenomena (Mallat, 1999). Besides horizon locations, the characterization of the identified transitions in the interpretation is associated with geological processes; a classification of the transitions can thus be linked to the seismic facies. The detection of transitions or singularities in signals is based on simple mathematical concepts: the inflection points of a signal are associated with the extrema of its first derivative, which correspond to the zero crossings of its second derivative. To locate the signal inflection points from the local peaks of the CWT, the wavelet should be chosen as the first derivative of a smoothing function. One wavelet that fulfills this requirement is the first derivative of the Gaussian function, the Gauss wavelet. The local maxima of the scalogram then coincide with the signal inflection points, and it can be shown that the lines connecting them across scales, the WTMMLA, characterize the signal irregularity. The irregularities can be characterized mathematically through the WTMMLA and the Hölder exponent (Mallat, 1999); the exponent is obtained from the slope of the curve of log2 of the WTMMLA coefficients versus log2 of the scales. In this study we used the WTMMLA directly as a seismic attribute. We calculated the CWT coefficients and WTMMLA for sixteen seismic data samples around the picked reservoir horizon; the extracted WTMMLA can reveal possible heterogeneity and singularities within the reservoir. We used these attributes as input vectors for the SOM step and obtained the U-matrix, and K-means clustering with the DBI was used to estimate the number of seismic facies. Using the CWT to locate events in time through the identification of signal singularities also proved to be an appropriate tool for the detection of seismic events.
Therefore, this method proved to be less sensitive to interpretation errors. The performance of the method was tested on the Kangan Formation at one of the Iranian oil fields. Reservoir models are initially generated from estimates of specific rock properties and maps of reservoir heterogeneity. Many types of information are used in reservoir model construction. One of the most important sources of information comes from wells, including well logs and core samples. Unfortunately, well log and core data are local measurements that may not reflect the reservoir behavior as a whole. In addition, well data are not available in the initial phases of exploration. In contrast to sparse well data, 3D seismic data cover large areas. Seismic attributes extracted from 3D seismic data can provide information for the construction of reservoir models. Seismic facies analysis can be accomplished through the use of pattern recognition techniques. When geological information is incomplete or nonexistent, seismic facies analysis can be done using unsupervised learning techniques. One of the most promising mathematical techniques of unsupervised learning is Kohonen's Self-Organizing Map (SOM) (Kohonen, 2001).
In this paper we use the SOM and time-frequency analysis to characterize reservoirs, since variations in frequency content are sensitive to subtle changes in reflective information. In this context, we show that the wavelet transform modulus maxima line amplitudes (WTMMLA) extracted from the continuous wavelet transform (CWT) can be applied to detect singularities. These singularities are then analyzed and clustered by the SOM.
The SOM networks map points of an input space to points in an output space while preserving the topology. Topology preservation means that points which are close in the input space should also be close in the output space (the map). Normally, the input space is of high dimension while the output is two-dimensional. The seismic attributes can be represented by vectors x = [x1, x2, ..., xn] in the space R^n. We assume that the map has P elements; therefore, there exist P prototype vectors m_i = [m_i1, ..., m_in], i = 1, 2, ..., P, where n is the dimension of the input vector. After the SOM training, the prototype vectors represent the input data set of seismic attributes. For each input vector x, the distances between x and all the prototype vectors are computed. The map unit whose prototype vector m_b has the smallest distance to the input vector x is called the best matching unit (BMU), given by ||x − m_b|| = min_i ||x − m_i||. The prototype vector corresponding to the BMU and its neighbors are moved towards the winning input vector in the input space. Since one of the main objectives of this work was the identification of data clusters, we displayed the distances between neighboring prototype vectors to identify similarities among them. We used the U-matrix (Ultsch, 1993) to represent these distances. After the SOM learning, the U-matrix was generated by computing, for each SOM prototype vector, the distances to its neighboring prototype vectors and their average.
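The BMU search and the U-matrix construction reduce to a few lines; this sketch assumes a rectangular map stored as a (rows, cols, n) array, with names of our own choosing:

```python
import numpy as np

def best_matching_unit(x, prototypes):
    """Index b of the prototype minimising ||x - m_i||, i.e. the BMU."""
    return int(np.linalg.norm(prototypes - x, axis=1).argmin())

def u_matrix(grid):
    """grid: (rows, cols, n) array of prototype vectors.  Each U-matrix cell
    is the average distance from a unit's prototype to its 4-connected
    neighbours; high values mark cluster borders on the map."""
    rows, cols, _ = grid.shape
    U = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            nb = [grid[i + di, j + dj]
                  for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1))
                  if 0 <= i + di < rows and 0 <= j + dj < cols]
            U[i, j] = np.mean([np.linalg.norm(grid[i, j] - v) for v in nb])
    return U
```

Displayed as an image, ridges of high U-matrix values separate the map into the candidate facies groups that the K-means step then formalizes.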
https://jesphys.ut.ac.ir/article_28433_8db0c95cbc591fa17c89ce2701076011.pdf

Institute of Geophysics, University of Tehran. Journal of the Earth and Space Physics (ISSN 2538-371X), Vol. 38, No. 2, 2012-07-22. Local structural discontinuity attribute robustness on 3-D seismic data to identify subtle faults, pp. 57-76, DOI 10.22059/jesphys.2012.28434. Nafiseh Zarif Karimi, Mohammad Ali Riahi, Amin Roshandel Kahoo.

Limited seismic data quality and complex tectonics make for less-than-ideal interpretation conditions. However, modern geometric attributes, including coherency, have been shown to be effective in revealing the lateral extents of subtle, small-scale geologic features not usually visible in conventional seismic sections. These geometric attributes are better suited than some older-generation seismic attributes because they work on the full data volume and eliminate the need for pre-picked horizons. Coherency attributes applied to a 3D seismic data volume have proved to be an effective method for imaging geological discontinuities such as faults and stratigraphic features. These geological features are significant since they are often associated with the formation of subsurface traps in which petroleum might accumulate. Coherence calculations can help with the problems mentioned above. 3-D seismic coherency provides interpreters a different view, revealing subtle features not easily seen in the seismic data. It calculates the local waveform similarity in both the in-line and cross-line directions and estimates the lateral discontinuity caused by variations in structure, stratigraphy, lithology, porosity, and the presence of hydrocarbons.
Small regions of seismic traces cut by a fault surface generally have a different seismic character than the corresponding regions of neighboring traces. This results in a sharp discontinuity in local trace-to-trace coherency. Calculating coherency for each grid point along a time slice results in lineaments of low coherency along faults. When this process is repeated for a series of time slices, these lineaments become fault surfaces, even though fault plane reflections have not been recorded. Stratigraphic boundaries generate similar discontinuities. The technique may be employed to produce coherency horizon slice maps, or to transform a reflection amplitude 3-D data volume into an entirely new volume or “cube” of coherence coefficients. Map views of coherency data afford the opportunity to see stratigraphic changes more clearly. For example, the channel features that are readily apparent to laymen in the coherency time slice are very difficult to see in a traditional amplitude time slice.
Conventional amplitude time slices are often useful for viewing faults that run perpendicular to strike. However, when faults run parallel to strike, they become more difficult to see because the fault lineaments become superimposed on bedding lineaments. The coherence calculation suppresses laterally consistent features, in effect removing the bedding. Because of this, the 3-D coherence algorithm reveals faults in any orientation equally well.
Until recent years, most 3-D surveys covered relatively small areas. But the success of the technique and falling costs have caused surveys to become larger. Now some vast spec 3-D surveys cover hundreds of square kilometers and run to tens of millions of traces. Sorting through that amount of information is a daunting task. However, since calculating coherence is a noninterpretive process, it can quickly provide the geoscientist with a view of regional faulting.
The first-generation coherence algorithm cross-correlates each trace with its in-line and cross-line neighbor and then combines the two results after normalizing by the energy. Since this approach deals with only three traces, it is computationally very efficient but may lack robustness, especially when dealing with noisy data. The second-generation coherency algorithm uses a multi-trace semblance measure. Using more traces in the coherency computation results in greater stability in the presence of noise. The third-generation algorithm is also a multi-trace coherency measure. However, it is based on the eigenstructure of the covariance matrix formed from the traces in the analysis cube.
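A toy version of the first-generation, three-trace measure can be written directly; for simplicity this sketch uses the zero-lag normalized cross-correlation rather than a search over lags, so it is an illustration of the idea, not the published algorithm:

```python
import numpy as np

def xcorr_coherence(center, inline_nb, xline_nb):
    """First-generation-style coherence: geometric mean of the centre
    trace's normalized zero-lag cross-correlations with its in-line and
    cross-line neighbours.  Near 1 on coherent bedding, low across faults."""
    def ncc(a, b):
        return np.dot(a, b) / np.sqrt(np.dot(a, a) * np.dot(b, b))
    return float(np.sqrt(abs(ncc(center, inline_nb)) * abs(ncc(center, xline_nb))))
```

Evaluated at every grid point of a time slice, low values of this measure line up as the fault lineaments described above.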
In this paper, an analysis method is developed for the robust and efficient estimation of 3D seismic local structural entropy, which is a measure of the local discontinuity of 3D seismic data used to identify subtle faults. This method avoids the computation of the large covariance matrices and eigenvalues associated with the eigenstructure-based and semblance-based coherency estimates. We introduce a number of local discontinuity measures based on the relations between subvolumes (quadrants) of the analysis cube. The scale of the analysis is determined by the type of geological feature that is of interest to the interpreter. By combining local structural entropy volumes at various scales, we obtain a higher lateral resolution and better discrimination between incoherent and coherent seismic events. Furthermore, the method developed is computationally much more efficient than the eigenstructure-based coherency method. Its robustness is demonstrated by synthetic and real data examples. To study the robustness of the algorithm and its effective parameters, three synthetic geological models containing faults were generated and the best values for these parameters were suggested. This local attribute was applied to real 3D seismic data from a faulted gas field. The results show the robustness of this algorithm in revealing subtle faults. In this algorithm, the local similarity in the in-line, cross-line and depth directions is calculated from the relations between sub-volumes (quadrants) of the analysis cube, and the lateral and vertical discontinuity is estimated. The scale of the analysis cube is determined by the type of geological feature that is of interest to the interpreter.
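The quadrant idea can be illustrated with a deliberately simplified two-halves comparison; this is our own toy stand-in for intuition only, not the paper's local structural entropy formula:

```python
import numpy as np

def quadrant_discontinuity(window):
    """window: (traces, samples) patch centred on an analysis point.
    Compares the stacked left-half and right-half traces of the patch;
    values near 0 mean coherent bedding, values near 1 mean the two
    halves are dissimilar, e.g. across a fault."""
    half = window.shape[0] // 2
    left = window[:half].sum(axis=0)
    right = window[half:].sum(axis=0)
    num = np.dot(left, right)
    den = np.sqrt(np.dot(left, left) * np.dot(right, right)) + 1e-12
    return 1.0 - abs(num) / den
```

The real measure compares all four (or eight, in 3D) sub-volumes of the analysis cube, but the principle is the same: relations between sub-volume stacks replace the covariance matrix, which is what makes the method cheap.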
Not only is this method more robust to noise than the original three-trace cross-correlation-based algorithm, but it is also computationally efficient because it avoids the computation of the large covariance matrices and eigenvalues associated with the eigenstructure-based and semblance-based coherency estimates.
https://jesphys.ut.ac.ir/article_28434_4ad6ad2e4ff21caf1fc026db5de05171.pdf

Institute of Geophysics, University of Tehran. Journal of the Earth and Space Physics (ISSN 2538-371X), Vol. 38, No. 2, 2012-07-22. Investigating the effect of filling material thickness on blast-induced stress wave in plexiglass by numerical analysis, pp. 77-90, DOI 10.22059/jesphys.2012.28435. Hassan Bakhshandeh Amnieh, Abdolrahim Javaherian.

In blasting operations, different proportions of the released energy are consumed in fragmenting the rock mass and heating the surroundings, and some portion is used in propagating blast waves, causing vibration of the particles in the nearby environment. However, reflections, transmissions and absorptions lead to the attenuation of these waves. The influence of the extent of rock-mass discontinuities and of the varying thicknesses of layers on fragmentation efficiency and the stability of mine benches is important for controlling ground vibrations in the vicinity of blast holes. Hence, the influence of blast-induced waves on the surroundings should be predicted prior to the operation for safety reasons. The main criteria for evaluating the damage caused by blasting include particle displacement analysis, peak particle velocity, particle acceleration, changes in the applied stress waves, and the frequency content.
Research activities in this area can be classified into field studies, experimental investigations, and analytical and numerical methods; the first two are expensive, and the analytical methods are often based on unrealistic simplifications of the rock-mass behavior. The numerical methods, by contrast, offer a cost-effective, fast and reliable analysis. In this article, the effect of filling material thickness on the propagation of waves from PETN blasting across a vertical discontinuity in plexiglass has been investigated using UDEC simulation. For this purpose, a uniform normal dynamic pressure of 11.0 GPa was applied on the wall of a blasthole 5.08 mm in diameter. This dynamic pressure was obtained using the semi-empirical Liu and Tidman correlation. The shockwave caused by the blasting was assumed to be a triangular pulse with a duration of 0.05 ms. A polymeric plexiglass block with dimensions of 230 × 114.3 mm was used for this study. Super glue was used as the filling material. With the geometry otherwise fixed, the influence of the filling material thickness (5, 10, 15, 20 and 25 mm) was investigated in a vertically drilled cylindrical blasthole (with a large length-to-diameter ratio and complete decoupling). Peak particle horizontal displacement, peak particle velocity and induced stresses on either side of the vertical discontinuity were measured in the elastic behaviour mode of the surrounding media. The energy reflection coefficient (ERC) and the energy transmission coefficient (ETC) obtained from the numerical analysis showed good agreement with the analytical ones. However, the ETC in the numerical method decreased from 28% to 8.7% as the filling material thickness increased from 5 to 25 mm, while it remained constant at 50% for the analytical method. An exponential correlation has been proposed relating the ETC obtained from the numerical method to that of the analytical method, with a correlation coefficient of 0.9962.
Increased filling material thickness led to a nonlinear reduction in both the PPV and the stress wave; given the constant reflection coefficient, it also caused the transmission coefficient to decrease exponentially while the energy absorption increased exponentially.
https://jesphys.ut.ac.ir/article_28435_3a6174611df26f395beff07a969f5ce6.pdf

Institute of Geophysics, University of Tehran. Journal of the Earth and Space Physics (ISSN 2538-371X), Vol. 38, No. 2, 2012-07-22. Improving thickness estimation for thin layers in quefrency domain, pp. 91-105, DOI 10.22059/jesphys.2012.28436. Samira Mohammadi, Hamid Reza Siahkoohi.

Obtaining a seismic section with high temporal and spatial resolution has always been one of the goals of seismic data processors and interpreters. Accurate estimation of the thicknesses of thin beds is an important tool in this regard.
The basic problem is that the wavelength of the signal must be comparable to the bed thickness. If it is much longer than the bed thickness, the determination of interference or phase shift is less reliable. If it is much shorter, the problem is not one of a thin bed. The thin bed problem assumes that the bed is thin compared to the dominant wavelength of the wavelet.
The Rayleigh criterion states that the resolution limit of a reflection is λ/4, but Widess (1973) extended this limit to λ/8. In this research, we assume that the thickness of a thin bed is less than λ/8.
The differences between the thin bed response and the thick bed response are that a thick bed has a separate response for its top and bottom, the two wavelets do not interfere, and the amplitude of each wavelet depends on the reflection coefficient. For thin beds, however, the reflections from the top and bottom of the bed interfere. The result is a single wavelet response which approximates the time derivative of the original wavelet.
Quantitatively, bed thickness can be calculated in three ways: from the time difference of the seismic events, from the first spectral peak frequency and from the cepstral peak.
In the first method, we can calculate bed thickness from the time difference between two peaks (for two sequential events with the same polarity), or from the time difference between a peak and a trough (for two sequential events with opposite polarity).
Widess pioneered a widely used method for quantifying thin bed thickness in 1973. Because it uses peak to trough time separation in conjunction with amplitude, this method depends on careful processing to establish the correct wavelet phase and true trace to trace amplitudes.
By transforming the seismic data into the frequency domain via the discrete Fourier transform, the amplitude spectra delineate temporal bed thickness variability while the phase spectra indicate lateral geologic discontinuities. This method, known as spectral decomposition, uses the more robust, phase-independent amplitude spectrum and is designed for examining thin-bed responses.
Spectral decomposition unravels the seismic signal into its constituent frequencies. This allows the interpreter to see amplitude and phase tuned to specific wavelengths. Since the stratigraphy resonates at wavelengths dependent on the bedding thickness, the interpreter can image subtle thickness variations and discontinuities and predict bedding thickness quantitatively.
Thin beds produce periodic peaks and notches in the spectrum of seismic data. In classical spectral decomposition technique, the frequency of the first local maximum in the amplitude spectrum (the first spectral peak) is doubled to estimate the period of the notches which is equal to the inverse of the bed thickness.
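For a thin bed modeled as an opposite-polarity spike pair, the rule stated above (double the first spectral peak frequency to get the notch period, whose inverse is the bed's two-way time) can be checked directly; the model and parameter choices in this sketch are our own:

```python
import numpy as np

def thickness_from_spectral_peak(reflectivity, dt):
    """Two-way time of a thin bed from the first local maximum of the
    amplitude spectrum: notch period = 2 * f_peak, so thickness (in
    two-way time) = 1 / (2 * f_peak)."""
    spec = np.abs(np.fft.rfft(reflectivity))
    freqs = np.fft.rfftfreq(len(reflectivity), d=dt)
    i = 1
    while i + 1 < len(spec) and not (spec[i] >= spec[i - 1] and spec[i] >= spec[i + 1]):
        i += 1   # walk out to the first local maximum, skipping the DC bin
    return 1.0 / (2.0 * freqs[i])
```

For an opposite-polarity pair the spectrum is proportional to |sin(πfΔt)|, so the first peak sits at f = 1/(2Δt) and the estimate recovers Δt up to the spectral sampling.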
In this study we describe a novel extension of the spectral decomposition method called cepstral decomposition.
The cepstral decomposition method can accurately measure the spacing of the notches by calculating the Fourier transform of the logarithm of the spectrum. To see this, note that a signal with a simple echo can be represented as:
x(t) = s(t) + a s(t − τ),                                   (1)
The Fourier spectral density (spectrum) of such a signal is given by:
|X(f)|² = |S(f)|² [1 + a² + 2a cos(2πfτ)].                  (2)
Thus, we see from (2) that the spectral density of a signal with an echo has the form of an envelope (the spectrum of the original signal) that modulates a periodic function of frequency. By taking the logarithm of the spectrum, this product is converted to the sum of two components:
C(f) = log|X(f)|² = log|S(f)|² + log[1 + a² + 2a cos(2πfτ)].  (3)
Thus, C(f), viewed as a waveform, has an additive periodic component whose "fundamental frequency" is the echo delay τ. In the analysis of time waveforms, such periodic components show up as lines or sharp peaks in the corresponding Fourier spectrum. Therefore, the spectrum of the log spectrum shows a peak when the original time waveform contains an echo.
This new spectral representation lives neither in the frequency domain nor in the time domain; it is therefore called the "quefrency domain", and the spectrum of the log of the spectrum of a time waveform is called the "cepstrum".
In the quefrency domain, the periodic notches appear as sharp peaks. These peaks are sharp and clear enough to be used for estimating thin-bed thickness.
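The quefrency-peak idea can be demonstrated on a synthetic echo. This is our own sketch: the search floor q_min, which excludes the low-quefrency region dominated by the wavelet itself, and the broadband random wavelet are assumptions of the toy, not the paper's procedure:

```python
import numpy as np

def cepstrum_peak_delay(x, dt, q_min=0.0):
    """Real cepstrum: inverse FFT of the log amplitude spectrum.  A simple
    echo at delay tau shows up as a sharp peak at quefrency tau; q_min
    sets the smallest quefrency searched."""
    spec = np.abs(np.fft.fft(x))
    ceps = np.fft.ifft(np.log(spec + 1e-12)).real   # small floor guards log(0)
    start = max(1, int(round(q_min / dt)))
    k = start + int(np.argmax(ceps[start:len(x) // 2]))
    return k * dt
```

For a thin bed, the "echo" is the reflection from the base of the bed, so the quefrency of the cepstral peak is an estimate of the bed's two-way time.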
We tested the cepstral decomposition technique for estimating thin-layer thickness on a synthetic model with different random noise levels and compared the results with those of two conventional methods: the spectral peak method and the time difference of the seismic events.
The results indicated that the cepstral decomposition method has the potential to improve the accuracy of thin-bed thickness estimation from reflection seismic data.
The basic problem is that the wavelength of the signal must be similar in dimention to that of the bed thinness. If it is much longer than the bed thinness, the determination of interference or phase shift is less reliable. If it is much shorter, the problem is not one of a thin bed. The thin bed problem assumes that the bed is thin compared to the dominant wavelength of the wavelet.
The Rayleigh criterion states that the resolution limit of a reflection is 4, but Widess (1973) extended this limit to 8. In this research, we assume that the thinness of a thin bed is less than 8.
The differences between thin bed response and thick bed response are that thick bed response has a separate response for the top and bottom of the bed, the two wavelets do not interfere and the amplitude of the wavelet depends on reflection coefficient. But for thin beds, reflections from top and bottom of the bed interfere. The result is a signal wavelet response which approximates the time derivative of the original wavelet.
Quantitatively, bed thickness can be calculated in three ways: from the time difference of the seismic events, from the first spectral peak frequency and from the cepstral peak.
In the first method, we can calculate bed thickness from the time difference of two peaks (for two sequential traces in same polarity), or from the time differences of one peak and a trough (for two sequential traces in opposite polarity).
Widess pioneered a widely used method for quantifying thin bed thickness in 1973. Because it uses peak to trough time separation in conjunction with amplitude, this method depends on careful processing to establish the correct wavelet phase and true trace to trace amplitudes.
By transforming the seismic data into the frequency domain via the discrete Fourier transform, the amplitude spectra delineate temporal bed-thickness variability while the phase spectra indicate lateral geologic discontinuities. This method, spectral decomposition, therefore uses the more robust, phase-independent amplitude spectrum and is designed for examining thin-bed responses.
Spectral decomposition unravels the seismic signal into its constituent frequencies. This allows the interpreter to see amplitude and phase tuned to specific wavelengths. Since the stratigraphy resonates at wavelengths dependent on the bedding thickness, the interpreter can image subtle thickness variations and discontinuities and predict bedding thickness quantitatively.
Thin beds produce periodic peaks and notches in the spectrum of seismic data. In the classical spectral decomposition technique, the frequency of the first local maximum in the amplitude spectrum (the first spectral peak) is doubled to estimate the period of the notches, whose inverse equals the temporal thickness of the bed.
In this study we describe a novel extension of the spectral decomposition method called cepstral decomposition.
The cepstral decomposition method can accurately measure the spacing of the notches by calculating the Fourier transform of the logarithm of the spectrum. To see this, note that a signal with a simple echo can be represented as:
x(t) = s(t) + a·s(t − τ)    (1)

where s(t) is the original signal, a is the relative amplitude of the echo and τ is the echo delay.
The Fourier spectral density (spectrum) of such a signal is given by:
|X(f)|^2 = |S(f)|^2 [1 + a^2 + 2a cos(2πfτ)]    (2)
Thus, we see from (2) that the spectral density of a signal with an echo has the form of an envelope (the spectrum of the original signal) that modulates a periodic function of frequency. By taking the logarithm of the spectrum, this product is converted to the sum of two components:
C(f) = log|X(f)|^2 = log|S(f)|^2 + log[1 + a^2 + 2a cos(2πfτ)]    (3)
Thus, C(f), viewed as a waveform, has an additive periodic component whose fundamental "frequency" is the echo delay τ. In the analysis of time waveforms, such periodic components show up as lines or sharp peaks in the corresponding Fourier spectrum. Therefore, the spectrum of the log spectrum shows a peak when the original time waveform contains an echo.
This new spectral representation domain is neither the frequency domain nor the time domain, so it is called the "quefrency domain", and the spectrum of the log of the spectrum of a time waveform is called the "cepstrum".
In the quefrency domain, the periodic notches appear as sharp peaks, which are clear enough to be used for estimating thin-bed thickness.
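The whole chain (spectrum, log spectrum, cepstrum) can be sketched in a few lines. This is a minimal illustration with an assumed sampling rate, wavelet, echo amplitude and echo delay, not the authors' implementation:

```python
import numpy as np

# Assumed synthetic: a zero-phase Ricker-style wavelet plus an echo of
# amplitude 0.5 delayed by tau = 50 ms, sampled at 1 kHz.
fs, n = 1000.0, 2048
t = np.arange(n) / fs
arg = (np.pi * 40.0 * (t - 0.1)) ** 2
wavelet = (1.0 - 2.0 * arg) * np.exp(-arg)
tau = 0.05
x = wavelet + 0.5 * np.roll(wavelet, int(tau * fs))   # signal with a simple echo

spectrum = np.abs(np.fft.rfft(x))                     # envelope times periodic ripple
log_spec = np.log(spectrum + 1e-3 * spectrum.max())   # product becomes a sum
cepstrum = np.fft.irfft(log_spec)                     # "spectrum of the log spectrum"

lo = 20                                               # skip the low-quefrency envelope
peak = lo + int(np.argmax(cepstrum[lo:n // 2]))
tau_est = peak / fs                                   # sharp peak near quefrency tau
```

The small floor added inside the logarithm keeps bins where the wavelet spectrum is essentially zero from dominating the cepstrum; the echo then appears as an isolated cepstral peak near the 0.05 s quefrency.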
We tested the cepstral decomposition technique for estimating the thickness of a thin layer on a synthetic model with different random noise levels, and compared the results with those of two conventional methods: the spectral-peak method and the time difference of the seismic events.
The results indicate that the cepstral decomposition method has the potential to improve the accuracy of thin-bed thickness estimation from reflection seismic data.
https://jesphys.ut.ac.ir/article_28436_82d9ae651a541bd39730f367a023b806.pdf

Journal of the Earth and Space Physics (ISSN 2538-371X), 38(2), 2012, pp. 107-121. DOI: 10.22059/jesphys.2012.28437
Interpretation of near-surface gravity anomalies by the normalized full gradient method
Meysam Abedi, Ahmad Afshar, Vahid Ebrahimzadeh Ardestani, Gholamhoseein Norouzi

The normalized full gradient (NFG) method defined by Berezkin (1967, 1973 and 1998) is used for downward-continuation maps. Analytical downward continuation estimates the field closer to the source and consequently yields a better resolution of the underground rock distribution. However, the usefulness of this process is limited by the fact that the operation is extremely sensitive to noise. With noise-free data, downward continuation is well defined, provided we do not attempt to continue below the source level. In the presence of noise, the amplification of high frequencies is so strong that it quickly masks the information in the original profile. Low-pass Fourier filtering, while suppressing such noise, also blurs the signal, defeating the purpose of sharpening by downward continuation.
Despite the above-mentioned problems, geophysicists have long been interested in this technique because of its importance to mineral exploration. Furthermore, this method is a fast and cheap way to determine the initial depth of subsurface features, especially where no other geophysical or well-logging data exist. A good analytical downward continuation process can provide general images of the subsurface, allowing an enhanced interpretation. Analytical downward continuation can also determine accurately both the horizontal and vertical extents of geological sources.
This method is concisely described in the following section. The 2-D NFG of gravity anomalies is defined as (Berezkin, 1973):
GH(x, z) = G(x, z) / GCP(z),  G(x, z) = [Vzz(x, z)^2 + Vxz(x, z)^2]^(1/2),  GCP(z) = (1/M) Σ_{i=1}^{M} G(x_i, z)    (1)
where GH(x, z) is the NFG at point (x, z) on a cross-section x-z; Vzz(x, z) and Vxz(x, z) are the first vertical and first horizontal (along the x-direction) derivatives of the gravity anomaly (or Vz) at point (x, z), respectively; G(x, z) is the full (total) gradient of the gravity anomaly at point (x, z); GCP(z) is the average of the full gradient of the gravity anomaly at level z; and M is the number of samples in the data set.
Berezkin (1973) expressed the gravity anomalies over the range (-L, L) by the finite Fourier sine series,
Vz(x, 0) = Σ_{n=1}^{N} B_n sin(nπx/L)    (2)
where
B_n = (2/L) ∫_0^L Vz(x, 0) sin(nπx/L) dx    (3)
L is the integral interval or length of the gravity profile; and N is the number of harmonics of the series. From Eq. (2) it follows that
Vzz(x, z) = (π/L) Σ_{n=1}^{N} n B_n sin(nπx/L) exp(nπz/L)    (4)
Vxz(x, z) = (π/L) Σ_{n=1}^{N} n B_n cos(nπx/L) exp(nπz/L)    (5)
Defining a smoothing factor for eliminating high-frequency noise resulting from downward continuation, we have,
q_n = [sin(nπ/N) / (nπ/N)]^m    (6)
where m is known as the degree of smoothing; choosing m = 1 or 2 is suggested for reasonable results. Finally,
Vz(x, z) = Σ_{n=1}^{N} B_n q_n sin(nπx/L) exp(nπz/L)    (7)
Vzz(x, z) = (π/L) Σ_{n=1}^{N} n B_n q_n sin(nπx/L) exp(nπz/L)    (8)
Vxz(x, z) = (π/L) Σ_{n=1}^{N} n B_n q_n cos(nπx/L) exp(nπz/L)    (9)
Substituting Eqs. 8 and 9 into Eq. 1, the NFG is calculated.
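A minimal numerical sketch of this procedure (sine-series fitting, smoothed downward continuation of the derivatives, and normalization by the level average) might look as follows. The function name and the demo anomaly are assumptions for illustration, not material from the paper:

```python
import numpy as np

def nfg_section(gz, L, depths, N, m=1):
    """Sketch of Berezkin's 2-D normalized full gradient (NFG).
    gz: gravity anomaly Vz sampled at M points on [0, L];
    depths: downward-continuation levels z > 0; N: number of harmonics."""
    M = len(gz)
    x = np.linspace(0.0, L, M)
    dx = L / (M - 1)
    n = np.arange(1, N + 1)
    S = np.sin(np.outer(n, np.pi * x / L))               # sin(n*pi*x/L), (N, M)
    C = np.cos(np.outer(n, np.pi * x / L))
    B = (2.0 / L) * (S * gz).sum(axis=1) * dx            # sine coefficients (Riemann sum)
    q = (np.sin(n * np.pi / N) / (n * np.pi / N)) ** m   # smoothing factor
    GH = np.empty((len(depths), M))
    for i, z in enumerate(depths):
        a = B * (n * np.pi / L) * q * np.exp(n * np.pi * z / L)
        Vzz = a @ S                                      # smoothed vertical derivative
        Vxz = a @ C                                      # smoothed horizontal derivative
        G = np.hypot(Vzz, Vxz)                           # full (total) gradient
        GH[i] = G / G.mean()                             # normalize by the level average
    return GH

# assumed demo: a buried 2-D line source centered at x = 100 m, depth 10 m
x = np.linspace(0.0, 200.0, 201)
gz = 10.0 / ((x - 100.0) ** 2 + 10.0 ** 2)
GH = nfg_section(gz, L=200.0, depths=[2.0, 5.0], N=40)
# the NFG maximum at each level is expected above the source (x = 100 m)
```

Each row of GH has unit mean by construction, and for this symmetric test anomaly the NFG maximum sits over the source.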
The NFG method nullifies perturbations due to the passage of mass depth during downward continuation. The method depends on the downward analytical continuation of normalized full gradient values of gravity data. Analytical continuation discriminates certain structural anomalies which cannot be distinguished in the observed gravity field. It can be used to estimate the location, the depth to the top and the center of a deposit, and it is also applied to detecting oil reservoirs and to tectonic studies. An important parameter for estimating the accurate shape of the deposit is the correct selection of the harmonic number. In this paper, the correct range of the harmonic number is determined, and the method is then tested on noise-free and noise-corrupted synthetic data. Finally, the 2D and 3D versions of the method are applied to real data from the Dehloran bitumen deposit.
https://jesphys.ut.ac.ir/article_28437_6e086bb220c772c5eeb79264eb84fde7.pdf

Journal of the Earth and Space Physics (ISSN 2538-371X), 38(2), 2012, pp. 123-136. DOI: 10.22059/jesphys.2012.28438
3-D crustal density modeling, Case study: Iran
Alireza Ardalan, Davood Zamzam, Mohammad Ali Sharifi

Various sources of geodetic, geological and geophysical information can improve our understanding of crustal density. In this paper we propose a methodology for the determination of crustal density based on terrestrial gravity observations, geopotential models, digital terrain models (DTM), geological maps and cross-sections, deep geophysical profiles, and geoid models. The method can be described algorithmically as follows: (i) Extraction of density information for the topography (the outer part of the crust above the geoid) from existing geological maps, cross-sections, and knowledge of geological formations.
(ii) Removal of the global gravity field from the terrestrial gravity observations via a geopotential model, plus the centrifugal effect. (iii) Removal of the effect of the terrain masses above the geoid using Newton's integral and the density model computed in step (i). (iv) Downward continuation of the resulting residual observations from the surface of the Earth to the geoid by free-air reduction. (v) Interpolation of the downward-continued gravity residuals onto a regular grid on the geoid. (vi) Setup of the observation equations based on the Newton integral for the gridded residual observations. (vii) Solution of the Newton integral equations by the least-squares method, stabilized by regularization techniques. A practical capability of the method is its patch-wise implementation, which allows changes in the resolution and location of the derived density model. This capability enables zooming in and applying the method over different geological features such as faults, subduction zones, and other tectonic structures.
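Step (vii), least squares stabilized by regularization, can be illustrated with a zeroth-order Tikhonov solver. This is a generic sketch of the technique, not the authors' code; the matrix, data and damping parameter below are assumed toy values:

```python
import numpy as np

def tikhonov_solve(A, d, alpha):
    """Zeroth-order Tikhonov-regularized least squares:
    minimizes ||A @ rho - d||^2 + alpha^2 * ||rho||^2
    via the damped normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha ** 2 * np.eye(n), A.T @ d)

# assumed toy system: 3 "observations" of 2 density unknowns
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
d = A @ np.array([2.0, 3.0])          # synthetic noise-free data
rho = tikhonov_solve(A, d, alpha=1e-6)
```

With a tiny damping parameter and a well-conditioned system the solver reproduces the true model; for an ill-conditioned Newton-integral kernel, alpha trades data fit against model norm.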
The efficiency of this methodology was first verified by a simulation and then used to compute a new residual density model of the crust between the geoid and the Moho discontinuity in the geographical region of Iran. The following is a summary of the practical implementation of the method. From the study of published regional deep-sounding geophysical profiles, we concluded that the case-study region consists of 3 layers; accordingly, 3 matrices containing the depths of the layers at 1-degree resolution were compiled. This part is critical in our computations, since any error in the location of the layers can adversely affect the computed density model. Having derived 42.5 km as the average Moho depth of the region, degree and order 151 was selected as the maximum degree and order of the spherical harmonic expansion for the removal of the long-wavelength signals from the gravity observations. EGM2008 up to degree and order 151 was implemented as the global geopotential model supplying the spherical harmonic coefficients. The point gravity data of the study were supplied from two sources, namely 6800 gravity points from BGI (Bureau Gravimetrique International) and 6500 gravity points from NCC (National Cartographic Center). The Newton integral over the topographical masses was used to remove the short-wavelength gravitational signals from the gravity data. The boundary of the integration over the topographical masses was provided by the SRTM digital terrain model with 30 arc-second resolution, and the needed mass densities were extracted from the 1/250000 and 1/100000 geological maps of Iran, published papers, and National Iranian Oil Company (NIOC) reports. The regional geoidal heights needed for the downward continuation were taken from the latest geoid model developed by the Surveying and Geomatics Department of the University of Tehran.
The resulting gravity residuals were then downward continued to the surface of the geoid by free-air reduction.
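Free-air reduction over a station height h uses the normal free-air gradient of about 0.3086 mGal/m. A one-line sketch with assumed example values (not values from this study):

```python
def free_air_continue(g_surface_mgal, height_m, gradient=0.3086):
    """Downward-continue a surface gravity value (mGal) to the geoid by
    free-air reduction: add the normal free-air gradient times the height."""
    return g_surface_mgal + gradient * height_m

# assumed example: a station 1500 m above the geoid
g_geoid = free_air_continue(979000.0, 1500.0)  # adds 0.3086 * 1500 = 462.9 mGal
```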
After application of a band-pass filter to the surface gravity data, the resulting residuals were downward continued to the surface of the geoid. Next, the residual gravitation values were introduced to the left-hand side of the Newton integral equation to start our constrained inversion process. The final product of the computations, i.e. the density-variation models of the 3 crustal layers, provided us with a new 3-D density model of the case-study region. According to the results, the range of density variations within the first layer is -120 to 40 kg/m^3, while within the second and third layers the range is almost the same, i.e. -40 to 40 kg/m^3, which reflects a natural characteristic of density layering: the deeper the layer, the smoother the density variations. Besides, the calculated density models show remarkable correlation with (i) the observed residual gravitation, (ii) the main tectonic units of Iran, i.e. Zagros, Alborz, the margins of Lut and Dasht-e Kavir, and Kopetdag, and (iii) the Moho depth. Those correlations also depend on the amplitude and wavelength of the source crustal anomalies. The minima of both the residual gravitation and the residual density model are over Zagros, Alborz and the margins of Lut and Dasht-e Kavir, whereas the maxima of both quantities are located over Lut, Dasht-e Kavir and the west of Tehran province.
https://jesphys.ut.ac.ir/article_28438_9574b09dd7221cec774a9a040d5dc43a.pdf

Journal of the Earth and Space Physics (ISSN 2538-371X), 38(2), 2012, pp. 137-147. DOI: 10.22059/jesphys.2012.28439
Analytic signal method to estimate the magnetic source parameters of 2D structures
Behrouz Oskooi, Ali Karimi Kalaye, Vahid Ebrahimzadeh Ardestani

The analytic signal method is commonly used to interpret magnetic and gravity anomalies. The main advantage of using the analytic signal in the interpretation of magnetic source parameters from magnetic anomalies is that its amplitude is independent of directional parameters such as the magnetization direction and the dip of the source. Using the amplitude to estimate the depth can make the interpretation of magnetic data easier. With this method, the location, depth and dip of the anomalous bodies, and even the thickness and the direction of their extension, can be estimated. The amplitude of the analytic signal is a symmetric, bell-shaped function whose maximum is located exactly above the edge of the source and whose width is proportional to the depth to the top of the magnetic source. These characteristics are used to estimate magnetic source parameters such as depth, location and thickness.
We use these characteristics of the analytic signal function to estimate the depth, horizontal location and thickness of magnetic sources. In this study, we use MATLAB to write codes that calculate the analytic signal of two-dimensional anomalies, determine the geometrical parameters of the source, and perform forward modeling of magnetic anomalies.
To evaluate the efficiency of the codes, we first applied them to synthetic noise-contaminated data. We then used the analytic signal method to interpret field magnetic data from Sorkh-Dizej, Zanjan. To estimate the depth and thickness of the magnetic source, we used 5 profiles perpendicular to the strike of the magnetic anomaly. These profiles are located between 305986 m and 306176 m in longitude (easting) and between 4051813 m and 4052013 m in latitude (northing). Finally, we compared the results of the analytic signal method with those of the Euler method.
Since the horizontal and vertical derivatives are each other's Hilbert transforms at each order of differentiation of the 2D analytic signal, the anomalies of 3D sources can also be interpreted by extending 2D analytic signals.
The analytic signal function can be calculated easily in the frequency domain; its real and imaginary parts are the horizontal and vertical derivatives of the magnetic field, respectively. Another advantage of using the analytic signal derivative is the joint interpretation of magnetic and gravity data in common projects through Poisson's relation.
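The frequency-domain computation described above, with the horizontal derivative as the real part and its Hilbert transform as the imaginary part, can be sketched as follows. This is a generic illustration, not the MATLAB codes used in this study:

```python
import numpy as np

def analytic_signal_amplitude(T, dx):
    """Amplitude of the 2-D analytic signal, |A(x)| = sqrt(Tx^2 + Tz^2).
    The vertical derivative Tz is obtained here as the FFT-based Hilbert
    transform of the horizontal derivative Tx."""
    Tx = np.gradient(T, dx)                     # horizontal derivative
    F = np.fft.fft(Tx)
    sgn = np.sign(np.fft.fftfreq(len(T), dx))
    Tz = np.real(np.fft.ifft(-1j * sgn * F))    # Hilbert transform of Tx
    return np.hypot(Tx, Tz)

# assumed sanity check: for a profile whose derivative is cos(kx),
# the Hilbert transform is sin(kx), so |A| should be ~1 everywhere
x = np.arange(256) * 1.0
k = 2.0 * np.pi * 8.0 / 256.0
amp = analytic_signal_amplitude(np.sin(k * x) / k, dx=1.0)
```

For a field profile, the peaks of `amp` mark source edges and their widths scale with depth to the top of the source.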
Prior information is not essential in the analytic signal method, whereas other approaches to magnetic data interpretation require information such as the geology of the structure and other parameters. For example, in the Euler deconvolution method, one of the depth-estimation methods, knowing the structural index is vital and difficult.
At Sorkh-Dizej in Zanjan, an iron mine with apatite content, the results of analyzing the data by the analytic signal method show that the structure of the anomalies closely resembles a thin dike, and the method recognizes the thin dike with high accuracy and low uncertainty. The depths to the top of the anomalies range from 10 to 55 m, very close to the results obtained by the Euler method as calculated with Geosoft software. Dike thicknesses from less than 1 m to about 10 m were also estimated.
https://jesphys.ut.ac.ir/article_28439_8729065f76f7362125b4c2db13105155.pdf

Journal of the Earth and Space Physics (ISSN 2538-371X), 38(2), 2012, pp. 149-156. DOI: 10.22059/jesphys.2012.28440
Two dimensional modeling of very low frequency (VLF) data along a profile across North Tehran Fault in Shahran region, Iran
Nazli Saba, Behrooz Oskooi

Two-dimensional, single-frequency, scalar VLF data modeling is studied in this research. The VLF method is one of the electromagnetic investigation methods; radio-frequency waves are used for the investigation of conductive deposits (Paal, 1968). Synthetic modeling and inversion of single-frequency and multi-frequency VLF signals were carried out by Oskooi (2004) for simple two-dimensional structures. VLF transmitter antennas are tall vertical wires carrying alternating current.
The conductivity of subsurface structures can be estimated by two-dimensional inversion of single-frequency VLF data. Ground VLF surveying is a fast, convenient tool for studying geological structures down to a maximum depth of about 100 m. Temporal and spatial variations of the VLF primary field during surveying should be taken into account. There is a linear relation between the horizontal and vertical components of the magnetic field:
Hz = A·Hx + B·Hy    (1)
The complex tipper vector (A, B) depends only on the ground structure and is independent of the transmitter azimuth. The x-axis points toward the VLF transmitter, along the geological strike; the y-axis is the profile direction. For each site there is a transfer function, called the complex scalar tipper. This value is recorded by the instrument and is given by:
B = Hz / Hy    (2)
Interpretation and modeling of the scalar tipper data have been performed using synthetic and field data at a frequency of 23300 Hz. Given the limited number of VLF transmitters, the waves used here come from the NWC station in Australia. Following relation (2), there are 59 real and 59 imaginary data values, with 5 m between stations. The skin depth relates the depth of investigation to the frequency:
δ ≈ 503 (ρ/f)^(1/2)    (3)

where δ is the skin depth in meters, ρ the resistivity in ohm-m and f the frequency in Hz.
Electromagnetic waves attenuate rapidly in conductive zones and penetrate deeper at lower frequencies. We can estimate the depth of an anomaly by considering the electrical resistivity variations.
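As a back-of-the-envelope check (not a value reported in the paper), the skin-depth approximation gives roughly 33 m for the 23.3 kHz NWC signal over 100 ohm-m ground:

```python
import math

def skin_depth(resistivity_ohm_m, freq_hz):
    """VLF skin depth in meters: delta ~ 503 * sqrt(rho / f)."""
    return 503.0 * math.sqrt(resistivity_ohm_m / freq_hz)

# NWC transmitter frequency 23.3 kHz over ~100 ohm-m ground:
delta = skin_depth(100.0, 23300.0)  # about 33 m
```

This is consistent with the stated maximum investigation depth of the ground VLF method being on the order of tens of meters at these frequencies.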
A forward model of a fault was considered to test the two-dimensional modeling. In the forward model, the initial electrical resistivities are plotted logarithmically, with values of 100 ohm-m and 700 ohm-m. Noise at the 2% level was added to the data in order to compare the results with field results. In the inversion model there is a resistive layer at a distance of 150 m. The depth of the resistive layer in both the inversion and forward models is 20 m. The conductive region in both models has an electrical resistivity of about 100 ohm-m.
Field data were recorded with the WADI instrument in the Shahran region, northwest of Tehran. The field data acquisition, along a 250 m south-north profile crossing the North Tehran Fault (NTF), consists of 50 stations with 5 m spacing. The signal is recorded at a frequency of 23300 Hz, i.e. the NWC signal. The traverse direction along the profile is south to north, and the geological strike in the Shahran area is east-west. A geological study of the region was done by Hafizi and Vali (1999) to estimate underground water sources in fractures using resistivity and IP methods.
We conclude that near station 20 there is a fracture, so conductive materials such as groundwater are near the surface at a depth of about 10 m. Beneath the conductive zone there is a tuff formation with an electrical resistivity of about 50-100 ohm-m. At distances of 100-250 m along the profile, the conductive layer lies beneath the tuffs.
The final model resolves the geological features spatially, and the estimated size and location of the anomaly from the model are consistent. The tipper data have been plotted against distance along the profile to reveal the lateral changes; from the tipper data, the locations of the anomalies can be diagnosed. In this investigation, field data were collected from the Shahran area in northwest Tehran, and the main features of the NTF are determined properly using the presented VLF data inversion and interpretation.https://jesphys.ut.ac.ir/article_28440_06efdfdec0d72e9ee477ff85f2e29733.pdfInstitute of Geophysics, University of TehranJournal of the Earth and Space Physics2538-371X38220120722Terracing in interpretation of the potential field dataTerracing in interpretation of the potential field data1571662844110.22059/jesphys.2012.28441FAAbdolhamidAnsariKamalAlamdarAbolghasemKamkar RouhaniJournal Article19700101Terracing is an operator that is applied to potential field data to produce regions of constant field amplitude separated by sharp boundaries. Magnetic data are usually transformed into pseudo-gravity data (Baranov, 1957) prior to the application of terracing. The objective of terracing is 'to recast potential field maps into a geologic map like format' (Cordell and McCafferty, 1989). Terracing is performed by moving a window through the data and computing the curvature at each point. The curvature of the field f is calculated using a three-coefficient numerical approximation to the Laplacian derivative operator, which for profile data is given by:
∇²f ≈ f(i−1) − 2 f(i) + f(i+1)
(Scaling factors relating to the data sampling interval are unimportant here and have been ignored.) The output value, located at the centre of the window, takes one of three possible values. If the curvature is positive, the output is set to the minimum of the data values in the window; if it is negative, the output is set to the maximum; and if the curvature is zero (the centre value is already an extremum of the window), it is left unchanged. Terracing is performed iteratively, with the data being sharpened progressively.
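A minimal one-dimensional sketch of the iteration just described (hypothetical code using a three-point window, not the authors' implementation):

```python
import numpy as np

def terrace_1d(f, n_iter=20):
    """Iterative terracing of a profile with a 3-point window.
    Curvature = f[i-1] - 2*f[i] + f[i+1]; positive curvature -> window
    minimum, negative -> window maximum, zero -> value unchanged."""
    g = np.asarray(f, dtype=float).copy()
    for _ in range(n_iter):
        out = g.copy()
        for i in range(1, len(g) - 1):
            c = g[i - 1] - 2.0 * g[i] + g[i + 1]
            if c > 0:
                out[i] = g[i - 1:i + 2].min()
            elif c < 0:
                out[i] = g[i - 1:i + 2].max()
        g = out
    return g

# A smooth step is progressively sharpened into near-constant domains:
profile = np.tanh(np.linspace(-3.0, 3.0, 31))
terraced = terrace_1d(profile)
```

After the iterations the smooth transition collapses to a few plateau values separated by a sharp boundary, which is exactly the "geologic map like" behaviour sought.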
Cordell and McCafferty (1989) found that the terracing algorithm tended to square off the corners of anomalies, resulting in ragged domain boundaries. To compensate for this, they computed the total horizontal derivative of the data and then tracked its local maxima using the algorithm of Blakely and Simpson (1986); these ridges were then overlain on the terraced data. The problem of the square boundaries arises because the Laplacian is only approximated numerically.
We propose that the problem of square domain boundaries was due to the curvature of the data being computed using a directionally biased approximation to the Laplacian and that it can be solved by using instead the profile curvature, which is the curvature computed in the direction of steepest ascent at each point of the data. Note that because both the Laplacian derivative operator and the profile curvature use the second horizontal derivatives of the data they are prone to noise problems and data may benefit from smoothing prior to their computation.
The objective of this work was to improve the output of the terracing filter and it was shown (by using both synthetic and real gravity data sets) that this can be achieved if the filter is based on the sign of the profile curvature of the data rather than on the sign of the Laplacian derivative operator. Although both the original and the modified algorithms are sensitive to noise because they use the second horizontal derivatives of the data, the modified algorithm appears to be more robust in this respect.
In this paper the method is applied to synthetic gravity data from adjacent prisms at different depths. The results show that this operator enhances subsurface boundaries more accurately than other filters. We also applied the proposed method to real gravity data from southwest England, where the locations of faults and granite bodies were enhanced.https://jesphys.ut.ac.ir/article_28441_819885f3ec7ef97ebd5d961aece23602.pdfInstitute of Geophysics, University of TehranJournal of the Earth and Space Physics2538-371X38220120722Using analytic signal in determination of the magnetization to density ratio (MDR) of the geological bodiesUsing analytic signal in determination of the magnetization to density ratio (MDR) of the geological bodies1671822844210.22059/jesphys.2012.28442FAKamalAlamdarAbdolhamidAnsariAbolghasemKamkare-RouhaniJournal Article19700101Gravity and magnetic data are usually processed and interpreted separately, and fully integrated results are created essentially in the mind of the interpreter. Data interpretation in such a manner requires an interpreter experienced both in potential field theory and in the geology of the study area. To simplify the joint interpretation of data, the automatic production of auxiliary interpretation products, in the form of maps or profiles, is useful to help a less experienced interpreter or when investigating regions with poorly known geology. Fortunately, a suitable theoretical background for the joint interpretation of gravity and magnetic anomalies is well established and can serve promptly in generating such products. Because of its mathematical expression, this theory is commonly referred to as the Poisson relation or, in more recent publications, the Poisson theorem.
In summary, the Poisson theorem (term adopted here) establishes a linear relationship between the gravity and magnetic potentials and, by extension, between the corresponding anomalies measured in practice or derived from them by applying suitable data processing. For the joint interpretation of potential field data, the Poisson theorem has been used mainly to determine the magnetization–density ratio (MDR) (Garland, 1951; Chandler et al., 1981; Chandler and Malek, 1991) and, less often, the magnetization direction of single dense and magnetic structures (Ross and Lavin, 1966; Cordell and Taylor, 1971).
In this study we propose to combine a 3-D analytic signal method and Poisson theorem to calculate the MDR value. The amplitude of the simple analytic signal is defined as the square root of the squared sum of the vertical and two horizontal derivatives of the magnetic field (Roest et al. 1992). The outlines of the geological boundaries can be determined by tracing the maximum amplitudes of the analytic signal. The analytic signal exhibits maximum amplitudes over magnetization contrasts. Hence, the advantage is that in addition to the geological boundaries indicated by the maximum amplitudes of analytic signals, we can determine the MDR without considering the datum levels.
The final equation for the estimation of the MDR is:
MDR = G |MAS0| / |GAS1|
where |MAS0| is the amplitude of the simple zeroth-order analytic signal of the magnetic anomaly, |GAS1| is the amplitude of the first-order analytic signal of the gravity anomaly, and G is the gravitational constant.
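As an illustrative sketch (our assumed code, not from the paper), the amplitude of the analytic signal along a profile can be computed numerically, obtaining the vertical derivative of the potential field with the wavenumber-domain |k| operator; the maximum of the amplitude then marks the horizontal position of the source:

```python
import numpy as np

def vertical_derivative(field, dx):
    """Vertical derivative of a potential-field profile via the |k|
    operator in the wavenumber domain."""
    k = 2.0 * np.pi * np.fft.fftfreq(len(field), d=dx)
    return np.real(np.fft.ifft(np.abs(k) * np.fft.fft(field)))

def analytic_signal_amplitude(field, dx):
    """|AS| = sqrt((df/dx)^2 + (df/dz)^2) for profile (2-D source) data."""
    dfdx = np.gradient(field, dx)
    dfdz = vertical_derivative(field, dx)
    return np.hypot(dfdx, dfdz)

# Gravity-like anomaly of a buried horizontal line source at x = 0, depth 50 m:
x = np.arange(-500.0, 501.0, 5.0)
g = 50.0 / (x**2 + 50.0**2)
amp = analytic_signal_amplitude(g, 5.0)
```

The peak of `amp` falls over x = 0, the source position, illustrating why the analytic-signal maxima outline the geological boundaries independently of datum levels.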
On the basis of gravity and magnetic anomaly data, we have proposed a method that applies analytic signals to the Poisson theorem to calculate the MDRs of geological structures. The advantage of this method is that we can estimate not only the MDR distribution of the subsurface sources but also the geological boundaries. Synthetic models and real data have shown that the proposed method is feasible. We also applied the method to real gravity and magnetic data from the Gol-e-Gohar No. 3 anomaly. The maximum of the estimated MDR values is located in the southern part of the study area, in agreement with the location of the subsurface ore body. Furthermore, the method indicates that there are two major rock types in the study area, namely metamorphic and igneous.https://jesphys.ut.ac.ir/article_28442_9ebe6a3ef220a07ffd4586b700fe194b.pdfInstitute of Geophysics, University of TehranJournal of the Earth and Space Physics2538-371X38220120722Prediction of daily carbon monoxide concentration using hybrid FS-ANFIS model based on atmospheric stability analysis; case study: city of TehranPrediction of daily carbon monoxide concentration using hybrid FS-ANFIS model based on atmospheric stability analysis; case study: city of Tehran1832012844310.22059/jesphys.2012.28443FAKhosrouAshrafiGholamaliHoshyaripourBabakNadjar AraabiHomaKeshavarzi ShiraziJournal Article19700101In big cities, air pollution has nowadays become a major environmental issue. In the city of Tehran, 90% of air pollutants are generated by traffic, among which carbon monoxide (CO) is the most important because it constitutes more than 75% by weight of the total air pollutants. This study aims to predict the daily CO concentration in the urban area of Tehran using a hybrid forward selection-ANFIS (adaptive neuro-fuzzy inference system) model based on atmospheric stability analysis.
Atmospheric stability is the most important parameter affecting dilution of air pollutants. It plays a central role in the investigation of parameters that affect ambient pollutant concentrations. Therefore, it can be considered as an input parameter for developing air pollution prediction models. Although different methods are used for stability determination with varying degrees of complexity, most of them incorporate considerations of both mechanical and buoyant turbulence. In this study two aspects for atmospheric stability analysis are considered and thus, two models are developed.
ANFIS1: wind velocity and solar radiation are considered as the indicators of mechanical and buoyant turbulence, respectively. For predicting the CO concentration at a certain time step, CO(t), the candidate inputs are: CO(t-1), u10(t), u10(t-1), rad(t), rad(t-1).
ANFIS2: frictional wind velocity and temperature gradient are used to represent mechanical and buoyant turbulence, respectively. For predicting CO(t) there are 9 candidate inputs: CO(t-1), u10(t), u10(t-1), u24(t), u24(t-1), temp(t), temp(t-1), dtemp(t), dtemp(t-1).
Input selection is a crucial step in ANFIS implementation, since ANFIS itself is not designed to eliminate superfluous inputs. With a high number of input variables, irrelevant, redundant and noisy variables may be included in the data set while meaningful variables are hidden; moreover, a high number of input variables may prevent ANFIS from finding an optimal model. Reducing the input variables is therefore recommended, even though some information is thereby omitted. In this research, input selection is carried out using a forward selection (FS) procedure. When the number of candidate covariates (N) is small, one can choose a prediction model by computing a reasonable criterion (e.g., RMSE, SSE, FPE or cross-validation error) for all possible subsets of the predictors; however, as N increases, the computational burden of this approach grows very quickly. This is one of the main reasons why step-by-step algorithms such as forward selection are popular. In this approach, which is based on a linear regression model, the first step is to order the explanatory variables according to their correlation with the dependent variable, from the most to the least correlated. The variable best correlated with the dependent variable is selected as the first input. The remaining variables are then added one by one, and the variable that most significantly increases the coefficient of determination (R2) is selected as the second input. This step is repeated N-1 times to evaluate the effect of each variable on the model output. Finally, among the N subsets obtained, the subset with the optimum R2 is selected as the model input subset, where the optimum R2 corresponds to the set of variables beyond which adding a new variable does not significantly increase R2.
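A compact sketch of this greedy procedure (hypothetical code; the variable names, synthetic data and stopping tolerance are our assumptions, not the authors'):

```python
import numpy as np

def forward_selection(X, y, names, tol=1e-3):
    """Greedy forward selection: at each step add the candidate that most
    increases R^2 of a linear fit; stop when the gain drops below tol."""
    def r2(cols):
        A = np.column_stack([X[:, cols], np.ones(len(y))])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ coef
        return 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
    selected, remaining, best = [], list(range(X.shape[1])), 0.0
    while remaining:
        score, j = max((r2(selected + [j]), j) for j in remaining)
        if score - best < tol:       # no meaningful R^2 increase -> stop
            break
        best = score
        selected.append(j)
        remaining.remove(j)
    return [names[j] for j in selected], best

# Synthetic example: y depends on "u10" and "rad" but not on "noise".
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + 0.01 * rng.normal(size=200)
inputs, score = forward_selection(X, y, ["u10", "rad", "noise"])
```

The pure-noise candidate is rejected because it cannot raise R2 above the tolerance, which is the behaviour the FS step relies on.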
FS is applied to the input sets of this study, which reduces the inputs of the models to 5 and 4 for ANFIS1 and ANFIS2, respectively. In order to identify the effect of FS on the modeling results, the complete input sets are also considered; thus four models are defined: ANFIS1, ANFIS2, FS-ANFIS1 and FS-ANFIS2. The selected inputs are used in a neuro-fuzzy modeling approach. Neuro-fuzzy modeling refers to applying learning techniques developed in the neural network literature to fuzzy modeling or fuzzy inference systems (FIS). A specific approach in neuro-fuzzy development is ANFIS (adaptive neuro-fuzzy inference system), which has shown significant results in modeling nonlinear functions. ANFIS uses a feed-forward network to optimize the parameters of a given FIS to perform well on a given task. The learning algorithm for ANFIS is a hybrid algorithm combining gradient descent and least squares. The FIS used here is the first-order Sugeno fuzzy model with its equivalent ANFIS architecture.
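For illustration, a minimal first-order Sugeno inference with two rules (toy membership and consequent parameters chosen by us; in ANFIS these would be tuned by the hybrid gradient-descent/least-squares rule):

```python
import math

def gauss(x, c, s):
    """Gaussian membership function with centre c and width s."""
    return math.exp(-0.5 * ((x - c) / s) ** 2)

def sugeno1(x):
    """Two-rule first-order Sugeno FIS:
    Rule 1: if x is LOW  (Gaussian at 0) then y1 = 0.5*x + 1
    Rule 2: if x is HIGH (Gaussian at 4) then y2 = 2.0*x - 1
    Output = firing-strength-weighted average of the rule outputs."""
    w1, w2 = gauss(x, 0.0, 1.0), gauss(x, 4.0, 1.0)
    y1, y2 = 0.5 * x + 1.0, 2.0 * x - 1.0
    return (w1 * y1 + w2 * y2) / (w1 + w2)
```

Near x = 0 the output follows rule 1, near x = 4 it follows rule 2, and midway it blends the two linear consequents; ANFIS learns exactly these membership and consequent parameters from data.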
The results show that forward selection reduces not only the computational burden but also the output error. The FS-ANFIS models produce more accurate results, with R2 of 0.52 and 0.41 for FS-ANFIS1 and FS-ANFIS2, respectively. Moreover, although both models can satisfactorily predict trends in the CO concentration level, FS-ANFIS2, which is based on the temperature and wind speed gradients, is the superior model.https://jesphys.ut.ac.ir/article_28443_caee95edea25f5216e98b43976e87426.pdfInstitute of Geophysics, University of TehranJournal of the Earth and Space Physics2538-371X38220120722Possible thermal seismic precursors along Western boundary of Lut plate (IRAN)–Kerman areaPossible thermal seismic precursors along Western boundary of Lut plate (IRAN)–Kerman area2032182844410.22059/jesphys.2012.28444FAHosseinJalal KamaliAbbas AliAli-Akbari BidokhtiHosseinAmiriJournal Article19700101A number of works have so far been carried out on heat flux anomalies of the Earth's surface, based on satellite images and on groundwater, as seismic precursors. In this paper the soil temperature data at 1 m depth (deep enough to filter out temperature variations due to high-frequency meteorological forcing) are considered, to see whether there are connections between soil temperature anomalies and the earthquakes in Bam and Zarand (Kerman province). Bam and Zarand are situated near active faults; hence the activity of these faults may lead to thermal anomalies recorded at the soil temperature monitoring stations (Iranian Meteorological Organization). Near the surface, the soil temperature changes very rapidly with the meteorological forcing, such as daily variations of surface temperature due to changes in solar heating, but at greater depths, say more than 0.5 m, the changes are nearly negligible.
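The damping-depth argument can be checked with the classical heat-conduction solution, in which the daily temperature wave decays as exp(−z/D) with damping depth D = sqrt(2κ/ω); a sketch assuming a typical soil thermal diffusivity (κ ≈ 5×10⁻⁷ m²/s, our assumption rather than a value from the paper):

```python
import math

def daily_amplitude_ratio(z_m, kappa=5e-7):
    """Ratio of the daily temperature-wave amplitude at depth z to the
    surface amplitude: exp(-z / D), with damping depth D = sqrt(2*kappa/omega).
    kappa is an assumed typical soil thermal diffusivity (m^2/s)."""
    omega = 2.0 * math.pi / 86400.0           # angular frequency of daily cycle
    damping_depth = math.sqrt(2.0 * kappa / omega)
    return math.exp(-z_m / damping_depth)
```

Under this assumption, roughly 40% of the surface amplitude survives at 10 cm, while at 1 m the daily wave is reduced by more than three orders of magnitude, which is why the 1 m record can isolate non-meteorological temperature signals.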
Temperature data of different soil depth, namely, 10, 20, 30, 50, and 100 cm at 03, 09, and 15 universal times for years 1996-2005 are obtained from the Metrological Organization for Bam and 2003-2005 for Zarand stations. By analyzing the penetration of daily heat waves into the ground, the damping depths indicate that temperature deeper than 10 cm is usually unaffected by surface temperature variations. Hence we used temperature records of 100 cm and try to see if there are any anomalous changes prior to the major earthquakes in these two regions. The anomalies are deviation of temperature records from the mean trends of temperatures at these stations. The time lags between the time of troughs of anomalous signals and the time of minimum temperatures of the month earthquake occurred, were also estimated indicating that they are about 4-7 days. From the vertical temperature graients in the deeper soil, thermal diffusivity of the soil at and around of the time of the events were also calculated, indicating some changes. Also the water levels of some wells at these two stations indicated some changes, going down or up rather suddenly. All these changes show that there may be some variations prior and after the earthquakes. From these the actual temperature changes more vividly. Mean daily soil temperatures at 1m depth at these stations were analyzed, showing decrease (about 1.5 degrees) before the strong earthquakes and larger increase (about 2.5 degrees) after them. These changes may be due to seismic effects near the active zones.So far a number of works on the heat flux anomalies of the earth surface based on the satellite images and underground water as seismic precursors have been carried out. 
In this paper the soil temperature data at 1m depth (deep enough to filter temperature variation due to high frequency meteorological forcings) has been considered to see if there are connections between earthquakes in Bam and Zarand (Kerman province) and seismic activities. Bam and Zarand are situated near the active faults; hence the activities of these faults may lead to thermal anomalies recorded at the soil temperature monitoring stations (Iranian Meteorological Organization). Usually near the surface soil temperature changes very rapidly by the meteorological forcing as daily variations of surface temperature due to changes of solar heating , but at deeper depth, say more than 0.5 m the changes are nearly negligible. Temperature data of different soil depth, namely, 10, 20, 30, 50, and 100 cm at 03, 09, and 15 universal times for years 1996-2005 are obtained from the Metrological Organization for Bam and 2003-2005 for Zarand stations. By analyzing the penetration of daily heat waves into the ground, the damping depths indicate that temperature deeper than 10 cm is usually unaffected by surface temperature variations. Hence we used temperature records of 100 cm and try to see if there are any anomalous changes prior to the major earthquakes in these two regions. The anomalies are deviation of temperature records from the mean trends of temperatures at these stations. The time lags between the time of troughs of anomalous signals and the time of minimum temperatures of the month earthquake occurred, were also estimated indicating that they are about 4-7 days. From the vertical temperature graients in the deeper soil, thermal diffusivity of the soil at and around of the time of the events were also calculated, indicating some changes. Also the water levels of some wells at these two stations indicated some changes, going down or up rather suddenly. All these changes show that there may be some variations prior and after the earthquakes. 
https://jesphys.ut.ac.ir/article_28444_14d6166b7e98c8885fbbfcc201a38114.pdf

Journal of the Earth and Space Physics (Institute of Geophysics, University of Tehran), ISSN 2538-371X, Vol. 38, No. 2, 2012. "Introducing and examining of a second order closure boundary layer model", pp. 219-231, DOI: 10.22059/jesphys.2012.28445. Majid Mazraeh Farahani, Ali Jalali, Mohammad Ali Saghafi.

One of the most popular and well-known boundary-layer models, implemented in many operational systems, is the Mellor-Yamada model of the atmospheric boundary layer. One reason for its popularity is its ability to simulate and expose different phenomena in the boundary layer. The model is built on a second-order closure hypothesis. Despite the positive properties that make it admirable, it shows some deficiencies and insufficient accuracy in approximating the turbulent terms of the relevant equations. Its most important weakness is its inability to differentiate between the vertical and horizontal components of the turbulent kinetic energy. This makes the model unable to distinguish large eddies, which grow through the depth of the boundary layer, from smaller ones that have similar scales in the vertical and horizontal directions. In addition, the model assumes that the production of turbulent kinetic energy is balanced by its dissipation, which leads to errors in the simulation of small eddies in the boundary layer.
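The distinction between vertical and horizontal TKE components can be illustrated with a small sketch on synthetic velocity fluctuations (the variances below are hypothetical, standing in for LES output):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
# Hypothetical velocity fluctuations (m/s): horizontal motions stronger
# than vertical ones, as in an anisotropic boundary layer.
u = rng.normal(0.0, 1.0, n)   # u' fluctuations
v = rng.normal(0.0, 1.0, n)   # v' fluctuations
w = rng.normal(0.0, 0.4, n)   # w' fluctuations, damped by stratification

# Component variances and total turbulent kinetic energy
uu, vv, ww = (u**2).mean(), (v**2).mean(), (w**2).mean()
tke = 0.5 * (uu + vv + ww)

# Fraction of TKE in the vertical component: 1/3 for isotropic turbulence
aniso = ww / (uu + vv + ww)
print(f"TKE = {tke:.3f} m^2/s^2, vertical fraction = {aniso:.3f}")
```

A closure that carries only the scalar TKE sees the same energy whether the turbulence is isotropic or strongly flattened; a closure with separate component equations can use the small vertical fraction to damp vertical mixing accordingly.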
The shortcomings of the above model have motivated scientists to introduce new models free of those deficiencies. Canuto and his team recently introduced a model more consistent with the phenomena occurring in the boundary layer. That model is discussed extensively here and, with a view to implementation in numerical weather prediction models, is examined against available Large Eddy Simulation data. The turbulent fluxes are calculated and the results discussed in detail. Other important boundary-layer variables that play a critical role in setting up the fluxes and currents of the turbulent boundary layer, such as the boundary-layer depth and the Richardson number, are also computed and analyzed.
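As a minimal sketch of one of these diagnostics, the gradient Richardson number can be computed from vertical profiles of potential temperature and wind (the linear profiles below are idealized assumptions, not model output):

```python
import numpy as np

def gradient_richardson(theta, u, v, z, g=9.81):
    """Gradient Richardson number on a vertical profile:
    Ri = (g/theta) * dtheta/dz / ((du/dz)^2 + (dv/dz)^2),
    with theta the potential temperature (K) and z the height (m)."""
    dtheta = np.gradient(theta, z)
    du = np.gradient(u, z)
    dv = np.gradient(v, z)
    shear2 = du**2 + dv**2
    return (g / theta) * dtheta / np.maximum(shear2, 1e-12)  # guard zero shear

# Idealized stable layer: linear theta increase and linear wind shear
z = np.linspace(10.0, 500.0, 50)
theta = 290.0 + 0.005 * z     # 5 K/km increase of potential temperature
u = 0.02 * z                  # constant shear of 0.02 s^-1
v = np.zeros_like(z)

ri = gradient_richardson(theta, u, v, z)
print(f"Ri range: {ri.min():.2f} .. {ri.max():.2f}")
```

For this profile Ri is about 0.42 everywhere, above the critical value of roughly 0.25, so a closure would strongly suppress turbulent mixing in this layer.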
The results show that the proposed model is able to distinguish between large and small eddies by differentiating between the vertical and horizontal components of the variables, especially of the turbulent kinetic energy. The resulting boundary-layer height agrees very well with observations. Simulations from various scientific centers are presented and compared with the new model's results; the figures show that the new model approximates the turbulent fluxes better than previous ones.
https://jesphys.ut.ac.ir/article_28445_56552c41244da274d69f93b380264cff.pdf

Journal of the Earth and Space Physics (Institute of Geophysics, University of Tehran), ISSN 2538-371X, Vol. 38, No. 2, 2012. "Detailed crustal discontinuities, seismotectonic and seismicity parameters of the east-middle Alborz, Iran, flower structure of subsurface fault geometry", pp. 1-15, DOI: 10.22059/jesphys.2012.28446. Majid Nemati, Morteza Talebian, Ahmad Sadidkhouy, Noorbakhsh Mirzaei, Mohammad Reza Gheitanchi.

This paper focuses on the eastern part of the Shahroud fault system in the middle-east Alborz, an important element of the seismotectonic map of the area. We used data from two dense temporary local seismological networks installed around the fault system for several months during 2007 and 2008, together with micro-earthquake data recorded by the permanent seismological network of the Institute of Geophysics, University of Tehran. The seismicity from both networks overlaps with the surface outcrops and with the depth extent of the Shahroud fault system, mainly the Astaneh fault. Processing the data provided a P-wave velocity model for the east Alborz, from which crustal discontinuities and a seismogenic-zone thickness of 24 km were derived.
Other results include an initial estimate of the Moho depth at 34 km, and near-vertical and north-dipping trends of seismicity corresponding to the Astaneh and North Semnan faults, respectively. The fault dips and the seismogenic-zone thickness do not support the flower-structure hypothesis for the east Alborz, contrary to some authors' interpretation. A few focal mechanisms indicate left-lateral motion and confirm the high dip angles of the fault planes. The crustal movement directions derived from the P and T axes correspond well with the GPS-measured directions in the area. The b-value, which can be regarded as an inverse measure of the short-term background seismicity intensity, was determined to be about 0.9.
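A b-value like the one quoted above can be estimated from a catalogue with the standard Aki (1965) maximum-likelihood formula; the sketch below uses a synthetic catalogue with a known b of 0.9 rather than the paper's data:

```python
import math
import random

def b_value_aki(magnitudes, mc, dm=0.1):
    """Maximum-likelihood b-value (Aki, 1965) with Utsu's binning correction:
    b = log10(e) / (mean(M) - (Mc - dM/2)), using only events with M >= Mc."""
    m = [x for x in magnitudes if x >= mc]
    mean_m = sum(m) / len(m)
    return math.log10(math.e) / (mean_m - (mc - dm / 2.0))

# Synthetic catalogue: Gutenberg-Richter implies magnitudes above the
# completeness threshold Mc are exponentially distributed with rate b*ln(10)
random.seed(1)
b_true = 0.9
beta = b_true * math.log(10.0)
mags = [2.0 + random.expovariate(beta) for _ in range(5000)]

b_est = b_value_aki(mags, mc=2.0, dm=0.0)  # dm=0: magnitudes here are unbinned
print(f"estimated b = {b_est:.2f}")
```

With 5000 events the estimate recovers b close to 0.9; for real catalogues the binning width dm and the completeness magnitude Mc must be chosen carefully, since both bias the estimate.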
https://jesphys.ut.ac.ir/article_28446_88b9d4e61be0d26c0e9a73b5603af9aa.pdf

Journal of the Earth and Space Physics (Institute of Geophysics, University of Tehran), ISSN 2538-371X, Vol. 38, No. 2, 2012. "Seismic study of upper mantle beneath the NW Iran using P receiver function", pp. 17-28, DOI: 10.22059/jesphys.2012.28447. Fataneh Taghizadeh-Farahmand, Forough Sodoudi, Narges Afsari.

The receiver function (RF) method is now a well-known tool for studying crustal and upper-mantle structure when a suitable data set is available. We computed P receiver functions to investigate the upper-mantle discontinuities beneath northwest Iran. We selected teleseismic events (Mb ≥ 5.5, epicentral distances between 30° and 95°) recorded from 1995 to 2008, with high signal-to-noise ratio, at 8 three-component short-period stations of the Tabriz Telemetry Seismic Network. The P-to-S phases converted at the 410 and 660 km discontinuities are delayed by about 2 and 1 s, respectively, with respect to the IASP91 global reference model, indicating that the upper mantle above 410 km is 3-4% slower, and hotter, than the standard Earth model. Because the 410 and 660 km discontinuities do not show the same delay, the transition zone could also be thinner. This may mean that the upper mantle in the region is still influenced by several geodynamical processes involving rifting, uplift, and magmatism.
https://jesphys.ut.ac.ir/article_28447_b978fbffb8c37539cbdcc47e226f5463.pdf
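For vertical incidence, the delay of a Pds conversion relative to direct P is approximately the integrated difference of S and P slownesses above the discontinuity, T ≈ Σ h_i (1/Vs_i − 1/Vp_i). The sketch below uses rough illustrative layer velocities (not actual IASP91 values) to show that a few percent reduction in upper-mantle velocities delays P410s by the observed order of 1-2 s:

```python
def ps_delay(layers):
    """Vertical-incidence P-to-S conversion delay (s) for a conversion at the
    bottom of a layer stack: T = sum of h * (1/Vs - 1/Vp) over the layers.
    Each layer is (thickness_km, Vp_km_s, Vs_km_s)."""
    return sum(h * (1.0 / vs - 1.0 / vp) for h, vp, vs in layers)

# Rough average layering down to 410 km (illustrative, not IASP91 values)
ref = [
    (35.0, 6.5, 3.75),    # crust
    (375.0, 8.6, 4.70),   # upper mantle, 35-410 km
]
t_ref = ps_delay(ref)

# Same model with the sub-crustal mantle 3.5% slower in both Vp and Vs,
# mimicking the slow upper mantle inferred beneath NW Iran
slow = [ref[0], (375.0, 8.6 * 0.965, 4.70 * 0.965)]
t_slow = ps_delay(slow)

print(f"P410s delay: reference {t_ref:.1f} s, slow mantle {t_slow:.1f} s, "
      f"difference {t_slow - t_ref:.1f} s")
```

The extra delay comes out near 1.3 s for this toy model, the same order as the anomalies reported above; a real analysis would trace rays through the full IASP91 profile and correct for epicentral distance.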