Institute of Geophysics, University of Tehran
Journal of the Earth and Space Physics, ISSN 2538-371X, Vol. 36, No. 4, 2011

Estimating shear-waves velocity structure by combining array methods and inversion of ellipticity curves at a site in south of Tehran
Elham Shabani, Norbakhsh Mirzaei, Ebrahim Haghshenas, Morteza Eskandari-Ghadi

Tehran, the capital of Iran, is under the threat of large-magnitude earthquakes (above 7) located on known active faults. Previous studies of the effect of local surface geology on earthquake ground motion, using 1D calculations of the SH transfer function (Jafari et al., 2001; JICA & CEST, 2000) and experimental methods based on earthquake and ambient-noise recordings (Haghshenas, 2005), yielded very different and unexpected results. Assuming a layer with Vs = 700 m/s as seismic bedrock, the 1D SH transfer functions indeed show only weak amplification at frequencies above 2 Hz, whereas the site-to-reference spectral ratios exhibit significant amplification (up to 8) over a broad frequency band from 0.3 to 8 Hz. Such a discrepancy might be explained by very thick, stiff sedimentary layers overlying very rigid bedrock.
Different methods, including microtremor array analysis (FK and SPAC), H/V ratios calculated by TFA techniques, joint inversion of dispersion and ellipticity curves, and finally SH transfer functions, were used to constrain the shear-wave velocity and the bedrock depth at a site in the south of Tehran exhibiting high ground-motion amplification at low frequencies. Results show that the array data alone (arrays with a limited aperture of 100 meters) can only provide a Vs profile for the superficial layers. Combining array methods with single-station measurements gives deeper and better-constrained shear-wave velocity models. Knowing the range of fundamental resonance frequencies from earthquake data allows us to filter the Vs models obtained from joint inversion of the dispersion and H/V ellipticity curves, by computing SH transfer functions for the various inverted Vs profiles and comparing the resulting resonance frequencies to the observed one.
In this paper, array processing with the MSPAC (Bettig et al., 2001) and FK techniques was performed using the Sesarray software package (Wathelet et al., 2008). Results from the MSPAC analysis are not shown here, since they did not add significantly to the dispersion curve derived from the FK analysis. Inversion was performed using the Conditional Neighborhood Algorithm (Wathelet, 2008). Although the inverted shear-wave velocity profiles fit the borehole Vs measurements well (Figure 3a, black line), they do not constrain Vs at depths greater than 100 meters. This is explained by the limited array aperture, which restricts phase-velocity estimates to frequencies between 6 and 10 Hz.
Applying this procedure gives an estimate of the shear-wave velocity and of the bedrock depth in the south of Tehran, which may lie between 700 and 1200 meters. This procedure should be applied to other sites in Tehran in order to retrieve the Vs profile and the spatial variation of sediment-to-bedrock depth throughout the city. This will then allow a better understanding of the observed site amplification.
https://jesphys.ut.ac.ir/article_22399_d966cea330fc6cb4b7ecefdf17fff95c.pdf

The intermediate-term earthquake prediction based on seismic gaps in Central-East Iran seismotectonic province
Mahin Arbabi, Norbakhsh Mirzaei

Seismic gaps, as a method of earthquake prediction, were initially used mainly for long-term prediction. Nowadays, seismic gaps are one of the most important precursory phenomena for intermediate-term earthquake prediction. They are parts of tectonic regions that are quiescent at the moment but might produce damaging earthquakes in the future.
Based on the study of strong earthquakes in mainland China, it has been suggested that intraplate gaps in the activity of moderate and small earthquakes may be divided into "background gaps" and "preparation gaps". A background gap is surrounded by larger earthquakes, covers a larger area, and has a longer duration before the main shock. A preparation gap is surrounded by small earthquakes, covers a smaller area, and has a shorter duration before the main shock (Lu and Song, 1989). Background gaps are of critical importance and are a clue to relatively large earthquakes (Ms > 5). Preparation gaps build up in a region inside a background gap, or in its surroundings, in a short time interval (a few years) before the main earthquake. The preparation gap is usually surrounded by small precursory earthquakes, even though one or a few relatively large earthquakes (still smaller than the main earthquake) may occur in regions on the edge of the gap. Such smaller-magnitude earthquake activity has been considered a premonitory phenomenon useful for intermediate-term, and even short-term, earthquake prediction. Three criteria are proposed for the identification of these gaps (Lu and Song, 1989): (1) with the formation of a preparation gap, the seismic strain release should accelerate both in the gap and in its vicinity; (2) the ratio of earthquake frequency outside the gap to that within it should reach a maximum value during the formation of the gap; and (3) some moderate earthquakes often occur in the forthcoming seismic source area before formation of the background gap. The first two are the main criteria for identifying the gap, and the third is a subsidiary criterion for determining the location of the forthcoming earthquake.
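Criterion (2) above can be sketched numerically. The snippet below builds a purely synthetic catalog (all event times, distances, and the 50 km gap radius are hypothetical) and computes the outside/inside frequency ratio before and during gap formation:

```python
import random

random.seed(0)

# Hypothetical catalog: (year, distance from the gap centre in km)
catalog = [(random.uniform(1960, 2000), random.uniform(0, 200))
           for _ in range(600)]
# emulate quiescence inside a 50 km gap after 1985 by removing most
# inside-gap events in that period
catalog = [(t, d) for (t, d) in catalog
           if not (t > 1985 and d < 50 and random.random() < 0.9)]

def outside_inside_ratio(catalog, t0, t1, radius=50.0):
    """Criterion (2): number of events outside the gap divided by the
    number inside it, counted within the time window [t0, t1)."""
    n_in = sum(1 for t, d in catalog if t0 <= t < t1 and d < radius)
    n_out = sum(1 for t, d in catalog if t0 <= t < t1 and d >= radius)
    return n_out / max(n_in, 1)           # guard against an empty gap

r_before = outside_inside_ratio(catalog, 1960, 1975)
r_during = outside_inside_ratio(catalog, 1985, 2000)
print(f"outside/inside ratio: {r_before:.1f} before, {r_during:.1f} during gap formation")
```

The ratio rises sharply once the gap becomes quiescent, which is the signature the criterion looks for.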
In this study, based on the history of earthquakes in the Central-East Iran seismotectonic province, we have identified a background gap and three preparation gaps. One of these gaps is related to the destructive earthquake of Ms = 6.8 that occurred on 26 December 2003 in the Bam region of southern Central-East Iran. This earthquake occurred at the edge of the recognized preparation gap in the Bam region. Another gap is related to a large earthquake of Ms = 6.0 that occurred in February in the Sefidabeh region of south-eastern Iran; this earthquake also occurred at the edge of the recognized preparation gap in that region. The strain-release curve, the ratio of earthquake frequency outside the gap to that within it, and the cumulative number-time curve all correlate well with the earthquakes that occurred. In addition, the recognition of a preparation gap in the Dasht-e-Bayaz region, eastern Central-East Iran, implies accumulating seismic strain, and a large earthquake may occur in that region in the future. The well-known Dasht-e-Bayaz and Abiz earthquake faults are located in this preparation gap.
https://jesphys.ut.ac.ir/article_22400_327050766aaf597d86753f4d3532d5b0.pdf

A New Approach for Evaluation of Global Geopotential Models, Case Study: Iran
Mohammad Ali Sharifi, Mehdi Nikkhoo, Majid Abbaszadeh

The Earth's gravity-field dedicated missions provide homogeneous and uniformly accurate information on the long wavelengths of the Earth's gravity field. Many different global geopotential models have been introduced by Earth and space research centers in recent years.
The geopotential models derived from the GRACE satellite measurements are compared with the other models at different frequency levels. As expected, the results show that the GRACE-derived models are more accurate at low frequencies. Although their accuracy decreases at medium frequencies, they still outperform the other alternatives.
Moreover, the accuracy of geopotential models is usually evaluated by comparing the geoidal heights derived from the models with those of local GPS-leveling stations. Because of the different signal content of the two observation types, this may lead to incorrect conclusions. For compatibility, we propose filtering the high-frequency signals out of the GPS-leveling observations. We have employed spatial filters with uniform and Gaussian kernels to remove the high-frequency components of the terrestrial (GPS-leveling) data.
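The Gaussian-kernel spatial filtering described above can be sketched as follows. Everything here is illustrative: the 100 km filter half-width, the planar station geometry, and the synthetic long-wavelength signal are assumptions, not the paper's actual data:

```python
import math, random

def gaussian_smooth(xy, values, half_width_km=100.0):
    """Smooth scattered geoid heights with a Gaussian spatial kernel;
    half_width_km is an assumed, illustrative filter width."""
    smoothed = []
    for xi, yi in xy:
        wsum = vsum = 0.0
        for (xj, yj), v in zip(xy, values):
            w = math.exp(-((xj - xi) ** 2 + (yj - yi) ** 2)
                         / (2.0 * half_width_km ** 2))
            wsum += w
            vsum += w * v
        smoothed.append(vsum / wsum)      # weighted mean of neighbours
    return smoothed

random.seed(1)
# synthetic stations on a 1500 x 1500 km region (planar sketch)
xy = [(random.uniform(0, 1500), random.uniform(0, 1500)) for _ in range(300)]
# long-wavelength geoid signal plus short-wavelength "noise" (metres)
signal = [20.0 * math.sin(x / 2000.0) for x, _ in xy]
noisy = [s + random.gauss(0.0, 1.0) for s in signal]
smoothed = gaussian_smooth(xy, noisy)

def rms(errs):
    return math.sqrt(sum(e * e for e in errs) / len(errs))

print(f"rms misfit: {rms([n - s for n, s in zip(noisy, signal)]):.2f} m before, "
      f"{rms([m - s for m, s in zip(smoothed, signal)]):.2f} m after filtering")
```

The filter suppresses the short-wavelength component while leaving the long-wavelength signal, shared with the satellite-only models, nearly intact.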
Herein, we introduce a new approach for the evaluation of geopotential models. In this method, geoidal height differences over baselines of different lengths are used to evaluate the models. The models behave differently for baselines of different lengths and orientations. Sub-meter accuracy can be obtained with the different models for baselines up to 50 km long; for longer baselines, the geoidal height accuracy behaves differently. Furthermore, the lowest accuracy is observed for baselines in the south-north direction. This might correspond to the accumulated error of the leveling network of Iran, which spans the whole country from the Persian Gulf northward.
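The advantage of baseline differences over point-wise comparison can be shown with a toy example. The 0.8 m datum offset and the 0.05 m random error below are hypothetical numbers chosen only to make the effect visible:

```python
import math, random
from itertools import combinations

random.seed(2)
n = 40
# hypothetical geoid heights (m): "truth" plus a model carrying a constant
# datum offset of 0.8 m and a small random error
truth = [random.uniform(-20.0, 10.0) for _ in range(n)]
model = [t + 0.8 + random.gauss(0.0, 0.05) for t in truth]

# point-wise evaluation is dominated by the common offset ...
pointwise = math.sqrt(sum((m - t) ** 2 for m, t in zip(model, truth)) / n)

# ... whereas geoidal height differences over baselines cancel it
diffs = [(model[i] - model[j]) - (truth[i] - truth[j])
         for i, j in combinations(range(n), 2)]
baseline = math.sqrt(sum(d * d for d in diffs) / len(diffs))
print(f"point-wise RMS: {pointwise:.2f} m, baseline-difference RMS: {baseline:.2f} m")
```

Differencing removes any constant bias common to all stations, so the baseline statistic isolates the model's relative (shape) error.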
It is also recommended to perform data screening and outlier detection on the terrestrial data. Among the 490 GPS-leveling stations used in this study, 40 stations were removed because of their significant differences (more than 2 meters) with the global models.
In contrast to the classical point-wise method, our analysis over the GPS-leveling network of Iran shows that the most recently released model, EIGEN-CG04, is the most accurate one.
https://jesphys.ut.ac.ir/article_22401_53a6da2c66f0322b98a26c1b140226ff.pdf

2D inversion of Radiomagnetotelluric data for mapping waste disposal sites, an example from The Netherlands
Babak Assarzadegan, Behroz Oskooi, Mehrdad Bastani

The Radiomagnetotelluric (RMT) method is one of the most widely used electromagnetic (EM) methods; it employs artificial time-varying electric and magnetic fields at the surface of the earth for imaging conductivity. The radiomagnetotelluric method proposed by Goldstein and Strangway (1975) is based on measuring one horizontal electric component and the perpendicular magnetic component. Since then, many authors, for example Sandberg and Hohmann (1982), Bartel and Jacobson (1987), and Hughes and Carlson (1987), have studied this method in detail. The electromagnetic signals are emitted by powerful transmitters. The RMT method uses a very broad spectrum, defined here as the range of 10-250 kHz. Because of this high frequency band, the exploration depth is shallow and the method is used in near-surface engineering studies. In Iran, the application of geophysical methods to the study of waste sites has become increasingly important because of industrial development.
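The shallow exploration depth of the 10-250 kHz band follows from the plane-wave skin depth, which a short sketch can illustrate. The resistivities used are illustrative values for conductive (polluted) and more resistive ground, not the measured site values:

```python
import math

def skin_depth_m(resistivity_ohm_m, freq_hz):
    """Plane-wave skin depth: delta ~ 503 * sqrt(rho / f) metres."""
    return 503.0 * math.sqrt(resistivity_ohm_m / freq_hz)

# illustrative resistivities: 10 ohm-m (conductive) and 100 ohm-m (resistive)
for rho in (10.0, 100.0):
    for f in (10e3, 250e3):   # the 10-250 kHz RMT band
        print(f"rho={rho:6.0f} ohm-m, f={f/1e3:5.0f} kHz -> "
              f"skin depth ~ {skin_depth_m(rho, f):5.1f} m")
```

In conductive ground the skin depth drops to a few metres at the high end of the band, which is consistent with the RMT responses becoming insensitive below a few tens of metres over a conductive plume.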
In autumn 1998, an RMT survey was carried out to delineate the pollution zone of a waste site in Collendoorn, in the Netherlands. The measurements in the Netherlands were selected to demonstrate the RMT application of the EnviroMT system. Collendoorn is a small town in the north of the Overijssel province. Close to the town, in the middle of a flat area covered by marine Pliocene sediments, lies a former waste disposal site that was used as a public dumping ground from 1949. Waste was dumped in (wet) pits that had been dug for sand extraction. In 1960 the site was declared an official waste disposal site, and from then on the waste was dumped in dry areas of the site. Waste disposal was discontinued in 1988, and the area has since been used for recreational purposes. Pollution has been detected near the waste disposal site, and leakage of polluted water has moved to regions outside the site. Samples taken from boreholes in the area show that the polluted water contains iron and chloride ions, making the pollution plume electrically highly conductive.
A very long geophysical profile acquired by the Netherlands Institute of Applied Geoscience (TNO) shows that the Tegelen formation, which mainly consists of clay with a thickness of 2-7 m, lies at a depth of approximately 35 m below the surface in the vicinity of Collendoorn. The formations above the Tegelen are alternating sand and clay beds. Models of the RMT data along the profiles clearly express the conductivity of a layer at 25-30 meters depth; given the borehole data, this layer is interpreted as the polluted layer.
The main geophysical objective was to detect and map the vertical and lateral extent of the pollution plume. Based on the information provided by TNO, four RMT survey lines situated in the eastern part of the dumpsite were planned. Lines 1, 2, and 3 ran west-east and line 4 ran south-north.
Each line contains stations at 10 m spacing. We already know that the Collendoorn RMT data have a one-dimensional character, meaning that a 1D interpretation may work quite satisfactorily. The EnviroMT database software provides two 1D inversion routines for online data interpretation. The Least Singular Values Inversion (LSVI) program developed by Pedersen (1999) was employed for the interpretation of the Collendoorn data. The results of the inversion are presented as resistivity-depth sections. Each section is a compilation of independent models at the stations along a survey line. In order to show the similarities and differences between the two directions of the induced currents, namely the XY and YX directions, the results are presented in separate sections along each survey line. Resistivity-depth sections of the determinant data are also shown for comparison with the other two.
The first test field campaign, carried out at the Collendoorn dumpsite in the Netherlands, revealed that the EnviroMT system operates satisfactorily. In spite of some minor hardware problems caused by heavy rainfall, the hardware functionality of the system is stable. The hardware-software and software-software interfaces work, and the measured RMT data are correctly processed and properly stored. The data are reliable in that the estimated resistivities correlate with the true values measured directly in boreholes close to the survey lines, indicating that the system is properly calibrated. The RMT data contain sufficient information to resolve four layers in the upper 25 m, with a resistive-conductive-resistive-conductive sequence. The resistivity-depth sections from 1D inversion of the RMT data at Collendoorn depicted the vertical boundaries and lateral extent of the pollution plume and indicated that the plume extends further north in the eastern part of the dumpsite. The resistivity of the pollution plume is so low that the RMT responses become insensitive to conductivity variations below 35 meters.
https://jesphys.ut.ac.ir/article_22402_a9e97c5a897ecabe9127e1d9890d571b.pdf

Optimization of bin size using the objective function of a mathematical model
Hakim Esmaeili Oghaz, Mohammad Ali Riahi, Saeed Hashemi Tabatabaei

Bin size is one of the fundamental parameters in 3D land seismic survey design and plays an important role in the determination and calculation of the other design parameters; its optimization is therefore vital. In this study a new method for optimizing the bin size has been used. The optimization algorithm is linear and based on a mathematical model.
Because of the relationship between bin size and the geological model variables, the mathematical objective function for bin-size optimization was applied to a synthetic geological model. The model consists of five reflecting horizons with different characteristics and widely varying dip angles (between 0 and 60 degrees). The bin size was evaluated both with the conventional method and with the objective function. Using the mathematical model, an optimum bin size of 25 meters is obtained.
In conventional design methods, selecting an appropriate bin size is difficult for the designer and there is a risk of confusion. With the mathematical objective function, the designer can choose the bin size easily.
Introduction
A 3-D seismic survey should be designed around the main zone of interest (the primary target). This zone determines project economics by affecting parameter selection for the 3-D survey. Fold, bin size, and offset range all need to be related to the main target. The direction of major geological features, such as faults or channels, may influence the direction of the receiver and source lines.
In conventional designs, the designer uses simple trigonometric formulas to estimate a suitable CMP bin size, as well as the maximum source-receiver offset, from the dip of the layers.
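One such conventional trigonometric rule is the anti-aliasing limit for a dipping reflector, b <= V / (4 f_max sin(theta)). The sketch below evaluates it for illustrative values (3000 m/s interval velocity and 60 Hz maximum frequency are assumptions, not the paper's model parameters):

```python
import math

def max_bin_size_m(v_int_m_s, f_max_hz, dip_deg):
    """Anti-alias bin-size limit for a dipping reflector:
    b <= V / (4 * f_max * sin(dip))."""
    return v_int_m_s / (4.0 * f_max_hz * math.sin(math.radians(dip_deg)))

# illustrative values: 3000 m/s interval velocity, 60 Hz maximum frequency
for dip in (10, 30, 60):
    print(f"dip {dip:2d} deg -> max bin ~ {max_bin_size_m(3000.0, 60.0, dip):5.1f} m")
```

Steeper dips force smaller bins, which is why a model with dips up to 60 degrees makes bin-size selection difficult by hand.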
A linear programming problem is a special case of mathematical programming. From an analytical perspective, a mathematical program tries to identify an extreme (i.e., minimum or maximum) point of a function f(x) that further satisfies a set of constraints, e.g., g(x) <= b.
Linear programming is the specialization of mathematical programming to the case where both the objective function f and the problem constraints are linear.
Methodology
Create the objective function
A linear program is a set of linear mathematical relationships that defines the feasible solutions.
In this paper, the objective function for the bin size is defined in terms of the n geological layers, subject to a set of linear constraints.
Evaluation of fitness function
The fitness function f(x) is used to evaluate the optimum values obtained from the mathematical model. After each iteration of the algorithm, the value of the fitness function improves on that of the initial parameters. The new parameters then serve as the input parameters for the next stage. This optimization process continues until the maximum of the fitness function is finally obtained, at which point the algorithm result can be accepted.
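The iterate-and-keep-the-better-model loop described above can be sketched as a minimal stochastic hill climb. The fitness function here is purely hypothetical (a bump peaking at 25 m), standing in for the paper's actual objective:

```python
import random

def fitness(bin_size_m):
    """Hypothetical fitness: peaks at a 25 m bin (purely illustrative)."""
    return 1.0 / (1.0 + ((bin_size_m - 25.0) / 10.0) ** 2)

random.seed(0)
bin_size = 40.0                      # initial guess
best = fitness(bin_size)
for _ in range(200):                 # iterate, keeping only improvements
    candidate = bin_size + random.uniform(-2.0, 2.0)
    f = fitness(candidate)
    if f > best:                     # better model becomes the new input
        bin_size, best = candidate, f
print(f"optimal bin size ~ {bin_size:.1f} m, fitness {best:.3f}")
```

Each accepted step feeds the improved parameters back as the next stage's input, exactly the loop structure the text describes.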
Results
Using the model objective function, the constraints and the variable parameters, the bin size was calculated for the maximum dip angle of each layer, with the following results:
- first layer: optimal bin size 25 m, f(x) = 0.955
- second layer: 30 m, f(x) = 0.950
- third layer: 22 m, f(x) = 0.918
- fourth layer: 25 m, f(x) = 0.900
- fifth layer: 27 m, f(x) = 0.914
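For comparison, the conventional trigonometric check mentioned in the introduction can be sketched with the standard alias-limited bin size b <= V_int / (4 f_max sin(dip)); the interval velocities, maximum frequencies and dip angles below are illustrative assumptions, not the paper's model:

```python
import math

def max_bin_size(v_int, f_max, dip_deg):
    """Largest alias-free bin (m) for a dipping reflector:
    b <= v_int / (4 * f_max * sin(dip))."""
    dip = math.radians(dip_deg)
    if dip == 0:
        return math.inf  # a flat layer imposes no dip-alias limit
    return v_int / (4.0 * f_max * math.sin(dip))

# Hypothetical (interval velocity m/s, max frequency Hz, dip deg) per layer.
layers = [(2000, 40, 10), (2500, 35, 25), (3000, 30, 60)]
per_layer = [max_bin_size(v, f, d) for v, f, d in layers]
design_bin = min(per_layer)  # the steepest, highest-frequency layer governs
```

The smallest per-layer limit governs the whole survey, which is why a 60-degree dip dominates the design even when shallower layers would tolerate a much coarser bin.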
Conclusions
1- Using a mathematical model for every dip angle can yield good results. In addition, the method is simple and based on mathematical logic, so its results are reliable.
2- The advantages of this method are illustrated on a synthetic geological model with high dip angles between the third and fifth layers.
3- With this method, a suitable bin size and the other design parameters related to the bin size can be optimized.
4- Accuracy in calculating the optimal bin size, together with simplicity and speed of calculation, is achieved.

Bin size is one of the fundamental and key parameters in 3D land seismic survey design and plays an important role in the determination and calculation of the other design parameters; its optimization is therefore vital. In this study a new method for optimization of the bin size has been used. The optimization algorithm is linear and based on a mathematical model. Because of the relationship between the bin size and the variables of the geological model, the mathematical objective function for optimizing the bin size was applied to a synthetic geological model of five reflector horizons with different characteristics and widely varying dip angles (between 0 and 60 degrees). The bin size was evaluated with both the conventional method and the objective function, and with the mathematical model an optimum bin size of 25 m is obtained.
https://jesphys.ut.ac.ir/article_22403_f5abb13dd11f9854939ff309ca7d2d8c.pdf

Journal of the Earth and Space Physics (Institute of Geophysics, University of Tehran), ISSN 2538-371X, 36(4), 2011-01-21. Journal Article 22404.
3D Modeling of the GRACE Satellites Positions Using an Integration of the Hermite Polynomial Approximation and the Legendre Interpolation
Mohammad Ali Sharifi (ORCID 0000-0003-0745-4147), Zohreh Erfani Jazy

The twin GRACE satellites (Gravity Recovery And Climate Experiment), launched in March 2002, are making detailed measurements of the Earth's gravity field and will yield discoveries about gravity and the Earth's natural systems.
Different sensors and instruments have been placed on the GRACE satellites to fulfill the primary scientific objective of the mission: mapping the Earth's gravity field and its temporal variations. The K-band inter-satellite ranging system provides the key observations of the twin satellites, continuously recording changes in the inter-satellite distance. The three-dimensional (3D) positions of the two satellites, however, are recorded by the Global Positioning System (GPS) at a lower sampling rate. Densifying the position vector to a sampling rate compatible with that of the K-band ranging system is the main purpose of this article.
Interpolation methods are the simplest way to calculate the position of the satellites between a few measured positions. The Lagrange interpolation method is the scheme most frequently used for orbit interpolation. However, its accuracy is not convincing for satellite gravimetry applications. On the other hand, the Hermite polynomial approximation can combine a function and its derivatives for interpolation, and it has shown high performance wherever a function and its derivatives have been observed.
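A minimal sketch of Lagrange interpolation, applied per coordinate to densify sparsely sampled positions; the epochs and values below are synthetic, not GRACE data:

```python
def lagrange(ts, ys, t):
    """Evaluate the Lagrange polynomial through the points (ts[i], ys[i]) at t."""
    total = 0.0
    for i, yi in enumerate(ys):
        weight = 1.0
        for j, tj in enumerate(ts):
            if j != i:
                weight *= (t - tj) / (ts[i] - tj)
        total += yi * weight
    return total

# Synthetic 1-D example: one position component sampled every 10 s,
# densified to the midpoints of the sampling intervals.
epochs = [0.0, 10.0, 20.0, 30.0]
coords = [t**2 for t in epochs]  # a quadratic "orbit" component
dense = [lagrange(epochs, coords, t) for t in [5.0, 15.0, 25.0]]
```

For 3D positions the same routine is applied to each coordinate separately; the article's point is that this alone is not accurate enough for gravimetry, which is what motivates adjusting the interpolated coordinates with the K-band-constrained Hermite approximation.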
In the GRACE mission, only 3D positions are observed by the onboard GPS receivers. Moreover, the K-band measurement is a nonlinear function of the relative position and velocity of the two satellites. Consequently, the Hermite polynomial approximation cannot be employed in its original form because of the nonlinearity of the derivatives. Herein, we propose integrating the Lagrange interpolation and the Hermite polynomials for coordinate estimation. The Lagrange interpolation provides approximate coordinates between the sampling points, and the Hermite polynomial approximation is then used for simultaneous adjustment of all the GPS-derived positions, the K-band measurements and the approximate positions derived from the Lagrange interpolation. Numerical analysis shows that the proposed method outperforms both the Lagrange interpolation and the Hermite polynomial approximation in terms of accuracy.
https://jesphys.ut.ac.ir/article_22404_fc05236dd9d3ce1f49c790b5164974ef.pdf

Journal of the Earth and Space Physics (Institute of Geophysics, University of Tehran), ISSN 2538-371X, 36(4), 2011-01-21. Journal Article 22405.
Direct hydrocarbon identification by quality factor determination using energy density calculation in time-frequency domain
Amin Roshandel Kahoo, Hamid Reza Siahkoohi

In exploration seismology, the quality factor is widely used as a seismic attribute to identify anomalies related to attenuation, especially those caused by hydrocarbons. Previous studies have indicated that seismic energy loss, known as attenuation, is greater for the high-frequency components of seismic data than for the low-frequency components. Here the continuous wavelet transform is used to study the attenuation of seismic data and to calculate the energy density at different scales.
The results show that the energy loss at low scales is greater than at high scales. The method is also used to identify anomalies related to energy attenuation due to the presence of hydrocarbons. The results indicated that the modified complex Morlet wavelet needs fewer computations than the regular complex Morlet wavelet. We investigated the efficiency of the method on both synthetic and real seismic data and compared the results with those obtained from inversion of the seismic data to acoustic impedance using the Hampson-Russell software; the results showed an acceptable correlation. We also found that the regular complex Morlet wavelet is more sensitive to noise than the modified complex Morlet wavelet.
Continuous Wavelet Transform: The time-domain continuous wavelet transform (CWT) of a signal $s(t)$ can be defined as:

$$W_s(a,\tau)=\int_{-\infty}^{\infty}s(t)\,\psi_{a,\tau}^{*}(t)\,dt\qquad(1)$$

where $^{*}$ denotes the complex conjugate, $a$ is the scale, $\tau$ is the time shift and $\psi(t)$ is the mother wavelet. The shifted and scaled version of the mother wavelet is computed as:

$$\psi_{a,\tau}(t)=\frac{1}{\sqrt{a}}\,\psi\!\left(\frac{t-\tau}{a}\right)\qquad(2)$$

We can define the frequency-domain CWT as:

$$W_s(a,\tau)=\frac{\sqrt{a}}{2\pi}\int_{-\infty}^{\infty}S(\omega)\,\Psi^{*}(a\omega)\,e^{i\omega\tau}\,d\omega\qquad(3)$$

where $\omega$ is the angular frequency, and $S(\omega)$ and $\Psi(\omega)$ are the Fourier transforms of $s(t)$ and of the mother wavelet, respectively (Poularikas, 2000). Since the Morlet wavelet is similar to the seismic source wavelet, we used the complex Morlet wavelet and a modified version of it as the mother wavelet in our study (Li et al., 2006).
Energy Attenuation Density Equation: Consider a plane wave propagating in an anelastic medium. Assuming that the quality factor $Q$ is constant, its propagation equation is (Aki and Richards, 1980):

$$U(x,\omega)=U_{0}\exp\!\left(-\frac{\omega x}{2vQ}\right)\exp\!\left[i\omega\!\left(t-\frac{x}{v}\right)\right]\qquad(4)$$

where $\omega$ is the angular frequency, $x$ is the propagation distance and $v$ is the phase velocity.

The energy density at any angular frequency is, by definition:

$$E(\omega)=\left|U(x,\omega)\right|^{2}=U_{0}^{2}\exp\!\left(-\frac{\omega x}{vQ}\right)\qquad(5)$$

By introducing Eq. (4) into Eq. (5) and calculating the frequency-domain CWT, assuming that $x=vt$ (with $t$ the travel time) and that the energy of the scaled wavelet is concentrated near its center frequency $\omega_{0}/a$, the wavelet-domain energy density of the signal can be obtained as:

$$E(a,t)\propto U_{0}^{2}\exp\!\left(-\frac{\omega_{0}t}{aQ}\right)\qquad(6)$$

Equation (6) shows that the energy of a signal in the wavelet domain is a function of the quality factor $Q$ and the scale factor $a$ as well as the travel time $t$. The larger $Q$ is, the more slowly the energy attenuates; the smaller $Q$ is, the faster the energy attenuates. The smaller the scale, the less energy the signal retains, because high scales correspond to low frequencies and low scales correspond to high frequencies.
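Equation (6) can be turned into a quick numerical check. The sketch below assumes the constant-Q form E(a, t) = E0 exp(-omega0 t / (a Q)) with an illustrative Morlet center frequency omega0 = 6 rad/s, and also inverts it to recover Q from energies at two travel times; all numbers are synthetic:

```python
import math

OMEGA0 = 6.0  # assumed Morlet center frequency (rad/s), illustrative only

def energy(t, a, Q, E0=1.0):
    """Wavelet-domain energy density, Eq. (6): E0 * exp(-omega0 * t / (a * Q))."""
    return E0 * math.exp(-OMEGA0 * t / (a * Q))

def estimate_q(t1, t2, e1, e2, a):
    """Invert Eq. (6) between two travel times:
    Q = -omega0 * (t2 - t1) / (a * ln(e2 / e1))."""
    return -OMEGA0 * (t2 - t1) / (a * math.log(e2 / e1))

# Lower Q (stronger attenuation) loses energy faster at the same scale:
assert energy(1.0, 1.0, 20.0) < energy(1.0, 1.0, 100.0)
# Smaller scale (higher frequency) attenuates faster, as Eq. (6) predicts:
assert energy(1.0, 0.5, 50.0) < energy(1.0, 1.0, 50.0)

# Recover a synthetic Q from two noise-free energy samples.
q_true = 50.0
q_est = estimate_q(1.0, 2.0,
                   energy(1.0, 1.0, q_true),
                   energy(2.0, 1.0, q_true), 1.0)
```

In practice the energies would come from the CWT of real traces and be contaminated by noise, which is where the choice between the regular and the modified complex Morlet wavelet matters.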
Discussion: This paper derives an energy attenuation formula for seismic waves in the wavelet-scale domain from wavelet theory and the seismic propagation equation in an anelastic medium. To investigate the efficiency of this method, we tested it on both synthetic and real seismic data; the results showed an acceptable correlation. We also found that the regular complex Morlet wavelet is more sensitive to noise than the modified complex Morlet wavelet, and that the modified complex Morlet wavelet needs fewer computations than the regular one.
https://jesphys.ut.ac.ir/article_22405_6208f226015a258329eea5ace1ea487b.pdf

Journal of the Earth and Space Physics (Institute of Geophysics, University of Tehran), ISSN 2538-371X, 36(4), 2011-01-21. Journal Article 22406.
Determination of Shoreline Position in Pozm Bay Using Landsat Satellite Data
S. Ali Azarmsa, Farhad Razmkhah

The Pozm fishery port is located in the southeast part of Pozm Bay in the Oman Sea. The area adjacent to this port, especially in the lee of its breakwater, has been heavily affected by sediment deposition since 1988. Deposition has been so intense that the port lost its functionality and went out of operation in only a few years. This deposition is not only caused by the construction of the breakwater and other engineering works in the area, but is also related to, and in equilibrium with, coastal processes in the whole bay. To better understand these processes and determine shoreline changes in Pozm Bay, available Landsat satellite images were analyzed over a 13-year period ending in 2001. Technical problems have made newer Landsat TM images unavailable since 2002.
In addition, information about the future shoreline situation is necessary to assist coastal management and regulatory programs.
In this paper, the shoreline position in Pozm Bay is predicted for the years 2005 and 2010, using images of the study area acquired by the Landsat TM sensor in 1988, 1998 and 2001, and through the application of four different prediction methods. The results are analyzed and compared to determine the best prediction method for the study area.
The selected images were recorded in cloud-free and calm conditions with a 30 m by 30 m resolution. The first step in this research is to plot the data so that the patterns of shoreline change can be viewed both spatially and temporally. Different methods are then used to analyze this information, resolve the pattern of shoreline change and predict the future shoreline position. Several methods exist for calculating an average rate of change within a time segment having more than two measurement points. One strategy is to compare the results of three or more different rate calculations, examine the reasons for any significant differences, and recommend the best estimate of the rate. The calculation methods used in this study are the following:
1) The least-squares method which uses the slope of the least-squares fit (Y on X) to the distance/time data points within the time segment.
2) The "end-point" rate method which defines the first to last net difference in distance divided by the net time for the segment.
3) The Rate Averaging method which calculates the arithmetic average of all "long-term" rates within a time segment.
4) The compound method which is the average of the results obtained from other methods.
Finally, the results obtained from different calculation methods are compared to determine the sensitivity of the results to the method of calculation.
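The four rate calculations can be sketched as follows. The survey years and shoreline positions are illustrative, and interpreting "rate averaging" as the mean of the rates between successive surveys is our assumption:

```python
def least_squares_rate(ts, ds):
    """Slope of the least-squares fit of distance (d) on time (t)."""
    n = len(ts)
    t_mean = sum(ts) / n
    d_mean = sum(ds) / n
    num = sum((t - t_mean) * (d - d_mean) for t, d in zip(ts, ds))
    den = sum((t - t_mean) ** 2 for t in ts)
    return num / den

def end_point_rate(ts, ds):
    """Net change from first to last survey divided by the elapsed time."""
    return (ds[-1] - ds[0]) / (ts[-1] - ts[0])

def rate_averaging(ts, ds):
    """Assumed here: mean of the rates between successive surveys."""
    rates = [(d2 - d1) / (t2 - t1)
             for (t1, d1), (t2, d2) in zip(zip(ts, ds), zip(ts[1:], ds[1:]))]
    return sum(rates) / len(rates)

def compound_rate(ts, ds):
    """Average of the three rates above."""
    rs = [f(ts, ds) for f in (least_squares_rate, end_point_rate, rate_averaging)]
    return sum(rs) / len(rs)

# Illustrative shoreline positions (m) at the three image dates.
years, shore = [1988, 1998, 2001], [0.0, 30.0, 42.0]
```

With only three unevenly spaced surveys, the three rates already disagree noticeably, which is exactly the sensitivity to data spacing discussed below.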
The results reveal that the spacing and accuracy of the data can strongly affect the rate calculation. The uneven clustering of data points caused significant differences between the output of the least-squares method and the others. Where the time segment is short and the number of points is small (e.g., three), using the least-squares rate is not recommended. The end-point calculation gives the net effective change from first to last over the total time record, regardless of the path in between; this may or may not be meaningful, and can be very misleading, depending on that path. The simple end-point rate can therefore be used as a check against the other rate calculations, and in situations where any other rate is meaningless. The results of the compound method are the most accurate and reliable.
https://jesphys.ut.ac.ir/article_22406_7f760a6623a17f36bf15b4570a1966d3.pdf

Journal of the Earth and Space Physics (Institute of Geophysics, University of Tehran), ISSN 2538-371X, 36(4), 2011-01-21. Journal Article 22407.
Evaluating different AOGCMs and downscaling procedures in climate change local impact assessment studies
Ali Reza Massah Bavani, Saeed Morid, Mohsen Mohammadzadeh

In recent decades, the growth of industries and factories, deforestation and other environmental degradation have been increasing greenhouse gases near the Earth's surface. This increase disturbs the Earth's climate and is called climate change. A further increase in greenhouse gases in the future could exacerbate climate change and have several negative consequences for different systems, including water resources, agriculture, the environment, health and industry.
To evaluate the destructive effects of climate change on different systems, it is first necessary to study how the area of interest is affected by climate change. Although the effect of climate change on different fields has been studied, most of these studies applied only a single downscaling method to the output of a single AOGCM (Atmosphere-Ocean General Circulation Model). In climate change studies, the choice of AOGCM, downscaling method and greenhouse gas emission scenario all affect the final results. This paper presents a framework to assess the effect of using different AOGCMs and downscaling methods on the regional climate. One of the major problems in using AOGCM output is its low resolution compared to the study area, so downscaling methods are required to make it usable. In this study, the performance of kriging and IDW (Inverse Distance Weighting) in downscaling the monthly average temperature and rainfall of 7 AOGCMs presented in the IPCC Third Assessment Report (CCSR/NIES, CGCM2, CSIRO-Mk2, ECHAM4/OPYC3, GFDL-R30, HadCM3 and NCAR-DOE-PCM) was evaluated using several computational cells around the desired position of the river basin. Performance was measured with the coefficient of determination (R2) and the root mean square error (RMSE) between observed and downscaled data, and the IDW method with 8 computational cells was finally selected. Accordingly, seasonal climate change scenarios of temperature and precipitation for the three periods 2010-2039, 2040-2069 and 2070-2099, derived from the 7 AOGCMs under the SRES (Special Report on Emissions Scenarios), were downscaled for the study area.
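The inverse-distance-weighting step can be sketched as below; the grid-cell coordinates, values and the power of 2 are illustrative assumptions (the study's own configuration used the 8 cells around the basin):

```python
import math

def idw(cells, values, target, power=2.0):
    """Inverse Distance Weighting: each cell weighted by 1 / distance**power."""
    weights = []
    for (x, y) in cells:
        d = math.hypot(x - target[0], y - target[1])
        if d == 0.0:
            return values[cells.index((x, y))]  # target coincides with a cell
        weights.append(1.0 / d ** power)
    total = sum(weights)
    return sum(w * v for w, v in zip(weights, values)) / total

# Four hypothetical AOGCM grid-cell centres with a monthly temperature each.
cells = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
temps = [14.0, 15.0, 16.0, 17.0]
basin = (0.5, 0.5)  # equidistant from all four cells
t_basin = idw(cells, temps, basin)
```

When the target is equidistant from all cells, IDW reduces to a plain average; closer cells otherwise dominate, which is the behaviour being compared against kriging in the study.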
Based on the findings of this study, the following conclusions can be drawn. (1) Kriging and IDW with different numbers of cells around the original cell did not show a significant difference; therefore, because of its simplicity, the IDW method with 8 cells was used to downscale the climate change scenarios of temperature and precipitation for the future periods. (2) In all seasons and periods the average future temperature increases relative to the baseline period, with the largest increase in 2070-2099; for rainfall, both decreases and increases are projected. In 2070-2099 the winter temperature of the study area would increase by 2 to 7 °C relative to the baseline period, while the rainfall change lies between -40 and +30 percent; similar results were derived for the other seasons. (3) The spread of climate change scenarios across different AOGCMs under the same emission scenario is larger than the spread for a single AOGCM under different emission scenarios. (4) Finally, we conclude that using data from only one AOGCM and one emission scenario can produce unrealistic results for projects dealing with the destructive effects of climate change.
https://jesphys.ut.ac.ir/article_22407_9e19366cb1318ff68d0031b7d3b43d5b.pdf

Journal of the Earth and Space Physics (Institute of Geophysics, University of Tehran), ISSN 2538-371X, 36(4), 2011-01-21. Journal Article 22408.
The Application of Archaeoseismology in Iran
Reza Sohbati, Morteza Fattahi

A prerequisite to seismic hazard assessment is knowledge of historical and prehistorical earthquakes.
To obtain such data, researchers have employed different methods, including historical seismology, paleoseismology, and, more recently, archaeoseismology. Archaeoseismology is a multidisciplinary approach which ideally tries to determine the age, epicenter, and magnitude of past earthquakes by investigating the damage left in ancient monuments. A successful archaeoseismological study, however, needs input from other disciplines such as archaeology, seismology, geology, geophysics, history, and civil engineering.
One of the difficulties of archaeoseismology is distinguishing seismic damage from nonseismic damage. Sometimes the effects of natural disasters such as floods, landslides, and rockfalls, or of human activities like wars and revolutions, can be very similar to damage caused by earthquakes. It is therefore important to develop methods that help distinguish earthquake-related destruction from that caused by other calamities.
In this review, we categorize the archaeoseismic evidence into the following groups:
- Displacements and collapses
- Coseismic geological effects (liquefaction, etc.) and their effects on structures
- Deformation of building remains still in primary position
- Human and animal skeletons under collapsed ruins
- Abandonment of sites
- Evidence of reconstruction of damage caused by earthquakes
We also introduce some methods to distinguish this evidence from nonseismic damage, such as:
- Application of the feasibility matrix
- Dating the probable effect
- Territorial archaeoseismology
- Microzoning of the archaeological site
Iran is an ancient country with an old civilization, possessing many historical monuments and prehistorical tells. These structures could act as seismoscopes for ancient earthquakes and might have recorded the effects of such events. Moreover, Iran lies within the Alpine-Himalayan seismic belt, and most parts of it have experienced large and fatal earthquakes during both the historical and instrumental periods. These two factors present a great potential for archaeoseismological studies in Iran; however, the country remains largely unexplored in this respect. In this paper, we also investigate the applicability of archaeoseismology in Iran by providing a few examples.
https://jesphys.ut.ac.ir/article_22408_c1c606d7a78f6a7e419c509193412dbd.pdf
Institute of Geophysics, University of Tehran
Journal of the Earth and Space Physics, ISSN 2538-371X, Vol. 36, No. 4, 2011
2D inversion of the Magnetotelluric data from Travale Geothermal Field in Italy (Article 22409)
Behroz Oskooi; Adele Manzella
Journal Article
A detailed study of the exploited geothermal field of Travale, Italy, was conducted using magnetotelluric (MT) data acquired in 2004. This paper identifies the main features of the conductivity structure of the area. For subsurface mapping purposes, the long-period natural-field MT method proved very useful. 2D inversion schemes were used for processing and modeling of the MT data, and all modes of the data were examined to obtain the best possible interpretation.
The resistivity model obtained from the MT data is consistent with the geological model of the Travale region down to five kilometers depth. The MT results reveal the presence of a deep geothermal reservoir in the area. The conductive zones recognized within the resistive basement at many sites can be interpreted as fluid flow in the faults and fractures of the metamorphic rocks.
https://jesphys.ut.ac.ir/article_22409_26dc67f28774c62d6727eafe74d9373e.pdf
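The 2D inversion scheme used in the Travale study is beyond the scope of a short example, but the physical quantity it fits, apparent resistivity derived from the MT impedance, can be illustrated with the standard 1D layered-earth impedance recursion. This is a minimal textbook sketch, not the authors' processing chain, and the layer values below are hypothetical.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # magnetic permeability of free space (H/m)

def mt_1d_forward(res, thick, period):
    """Apparent resistivity (ohm·m) and impedance phase (deg) of a
    1D layered earth at a given period (s), via the standard
    bottom-up impedance recursion.
    res   : layer resistivities, top to bottom (last = half-space)
    thick : thicknesses of all layers except the bottom half-space (m)
    """
    omega = 2.0 * np.pi / period
    # impedance of the terminating half-space
    z = np.sqrt(1j * omega * MU0 * res[-1])
    # recurse upward through the finite layers
    for rho, h in zip(res[-2::-1], thick[::-1]):
        k = np.sqrt(1j * omega * MU0 / rho)   # propagation constant
        z0 = 1j * omega * MU0 / k             # intrinsic layer impedance
        z = z0 * (z + z0 * np.tanh(k * h)) / (z0 + z * np.tanh(k * h))
    rho_a = abs(z) ** 2 / (omega * MU0)
    phase = np.degrees(np.angle(z))
    return rho_a, phase

# Sanity check: a uniform half-space returns its own resistivity, phase 45°
rho_a, phase = mt_1d_forward([100.0], [], 10.0)
print(round(rho_a, 1), round(phase, 1))  # → 100.0 45.0
```

A conductor buried in a resistive basement, as at Travale, pulls the apparent-resistivity curve down at the periods that sample it, which is what makes the long-period natural-field MT method useful for mapping deep fluid-bearing fracture zones.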