Journal of the Earth and Space Physics (Institute of Geophysics, University of Tehran), ISSN 2538-371X, Vol. 35, No. 1, 2009-04-21.
Evaluation of Iran Strong Motion Network (ISMN) recording and its effects on the improvement of earthquake location in Zanjan and its neighboring regions
Pages 1-16; article 79964; DOI 10.22059/jesphys.2009.79964; language FA.
A. R. Ghods, Associate Professor, Department of Sciences, Institute for Advanced Studies in Basic Sciences, Zanjan, Iran
R. Askari, Instructor, Department of Sciences, Institute for Advanced Studies in Basic Sciences, Zanjan, Iran
Journal Article; 2021-02-21.
By merging the data from the University of Tehran's Iran Seismic Telemetry Network (ISTN) with those from the Iran Strong Motion Network (ISMN), we investigate the improvement of earthquake location accuracy for events that occurred in Zanjan province and its neighboring regions (the region bounded by latitudes 35º-37.5º N and longitudes 46.5º-50.5º E, for the period 1996-2006). Owing to the sparse distribution of seismic stations in the study region, most events have poor
azimuthal coverage and location accuracy. Events in Zanjan province are largely recorded by the Tehran and Tabriz seismic sub-networks of ISTN, to the east and west of the province, respectively. ISMN has very good coverage within Zanjan and its neighboring provinces, and thus has promising potential to improve the location accuracy of events within Zanjan province.
In this study, we assess the improvement of location accuracy obtained by merging the ISMN data with the catalog of relocated events for Zanjan and its neighboring provinces (Askari and Ghods, 2007). The catalog consists of 304 events for the period 1996-2006, with local magnitudes larger than 3.1 and RMS of less than 0.7 s, and is complete for magnitudes larger than 3.5. ISMN is an offline network and does not use GPS for precise timing of its waveform records. We first associate ISMN waveforms with events based on their time tags, and later check whether the phase readings are consistent with those from ISTN. Because the ISMN waveforms lack precise timing, only the relative Sg-Pg phase arrival times could be used in the location procedure. We were able to associate 403 ISMN records with 76 events of the catalog.
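Using only the Sg-Pg interval sidesteps the absolute-timing problem because both phases traverse essentially the same path at different crustal velocities. A minimal sketch of how such a differential time constrains epicentral distance; the crustal velocities below are generic illustrative assumptions, not values from the study:

```python
# Distance from an Sg-Pg differential time, usable when absolute timing
# is unreliable: both phases share the path, so
#   t_Sg - t_Pg = d * (1/v_s - 1/v_p)  =>  d = dt / (1/v_s - 1/v_p)
# The default velocities are generic crustal values (assumed).

def distance_from_sg_pg(dt_s, vp=6.0, vs=3.46):
    """Epicentral distance (km) implied by an Sg-Pg time difference (s)."""
    return dt_s / (1.0 / vs - 1.0 / vp)

d = distance_from_sg_pg(8.0)  # an 8 s Sg-Pg interval, roughly 65 km here
```

In a location scheme, each such distance acts as a circle constraint around the station rather than an absolute arrival time.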
We find that the ISMN data cannot significantly improve earthquake location in the Zanjan region. Of the 76 events with ISMN data, the azimuthal coverage and epicentral accuracy of only 7 events could be improved significantly. This is primarily because most of the regional faults are more or less aligned with the direction along which the weak-motion seismic networks spread to the east and west of the province. We found that ISMN could detect all events with magnitude above 4 within the study region. According to our results, the ISMN stations have been maintained properly and we could not detect significant data loss; however, we did find several problems in the archiving of the ISMN data. We also found that the ISMN instruments do not have enough resolution for accurate recording of seismic amplitudes, which implies that picks of the first arrivals on ISMN waveforms may have errors in the range of 0.2-0.4 s.
https://jesphys.ut.ac.ir/article_79964_3a6ca4e7acf595a62c96b80ffe537f3e.pdf
Journal of the Earth and Space Physics (Institute of Geophysics, University of Tehran), ISSN 2538-371X, Vol. 35, No. 1, 2009-04-21.
Paleostress analysis around the Lar Dam (Central Alborz), to recognize the structures involved in water escape
Pages 17-30; article 79965; DOI 10.22059/jesphys.2009.79965; language FA.
S. Omidian, Graduate Student of Petrology, School of Geology, College of Science, University of Tehran, Tehran, Iran
M. Eliassi, Assistant Professor, School of Geology, College of Science, University of Tehran, Tehran, Iran
J. Hassanzadeh, Associate Professor, School of Geology, College of Science, University of Tehran, Tehran, Iran
M. Zareenejad, Head of GIS group, Geological Survey of Iran
Journal Article; 2021-02-21.
Lar Dam is located 85 km to the northeast of Tehran. It supplies a fraction of the agricultural water in Mazandaran and of the drinking water in Tehran.
The dam was built in 1980, and two water-escape pathways around it have been identified; one of them is under the right shoulder
of the dam, and the other is located north of the dam toward the Haraz road (Ab-e-Ask region). For this reason, new studies have been designed to shift the dam to the west within the same zone (Gozal Darreh). Our investigations, however, show that a structural factor (faulting) is responsible for the water escape, that this agent extends across the whole region, and that it affects the newly proposed sites as well.
Geologically, the studied area lies on Mesozoic carbonate formations (Jurassic and Cretaceous). These formations have a WNW trend parallel to the general trend of the Central Alborz. In the studied region, folds, thrusts and reverse faults share the same trend, a consequence of the continuous pressure of the Arabian plate on the Iranian plate.
Originally, the aim of our research was to obtain the directions of the effective stresses responsible for forming and expanding the structural factor behind the water escape. We therefore considered structures such as faults, joints and fractures that are useful for reconstructing the tectonic events (the differing stress directions through relative time) in this region. The inversion method is the basis of the tectonic software used here and is designed around some of these structures. The key point, which is also the purpose of using slickensides, is to exploit the data and work back, step by step, to the initial stress conditions.
To obtain the directions of the principal stress axes at the time the stress acted, using inversion analysis, a considerable amount of structural data was compiled. All slickenside data were categorized into 12 groups using the software of Yamaji (2005), which implements the Multiple Inverse Method (MIM). The software outputs four final parameters, which result from solving for the reduced stress tensor. One of them, Φ, the shape of the stress field, is a quantitative parameter, so the trend of changes in the stress-field shape can be reconstructed.
Field work yielding the structural data, and the computer analysis performed afterwards, show intense changes in the stress-field shape ratio all around the dam. The path of these changes indicates the existence of a previously unrecognized fault beneath the dam, which is probably a natural channel for water escape. These abrupt changes run from a prolate stress-field shape (Φ = 0.5-1) to an oblate stress-field shape (Φ = 0-0.5) along a linear WNW trend. This path is parallel to the linear trend of the pitch of the sinkholes located at the base of the dam. The first channel of water escape is beneath the right shoulder. Based on these detailed studies, the proposed position for constructing a new dam, at the western end of the Lar Dam (Gozal Darreh), still lies on the continuation of the new fault we discovered. In conclusion, water would escape through this fault in any case.
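For reference, the stress-ratio parameter Φ used above is conventionally defined from the ordered principal stresses as Φ = (σ2 − σ3)/(σ1 − σ3). A small sketch under that standard definition, with the prolate/oblate thresholds taken from the abstract (the numerical stresses are made-up examples):

```python
def stress_ratio(s1, s2, s3):
    """Phi = (s2 - s3) / (s1 - s3) for principal stresses s1 >= s2 >= s3."""
    return (s2 - s3) / (s1 - s3)

def field_shape(phi):
    # Thresholds follow the abstract: oblate for Phi in 0-0.5,
    # prolate for Phi in 0.5-1.
    return "oblate" if phi < 0.5 else "prolate"

phi = stress_ratio(100.0, 90.0, 20.0)  # example stresses -> Phi = 0.875
shape = field_shape(phi)               # classified as "prolate" here
```

Mapping Φ along the survey profile and watching for an abrupt jump across 0.5 is the kind of change the abstract describes.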
https://jesphys.ut.ac.ir/article_79965_7bb712c4006f3fa2b371c5b548082edd.pdf
Journal of the Earth and Space Physics (Institute of Geophysics, University of Tehran), ISSN 2538-371X, Vol. 35, No. 1, 2009-04-21.
Determination of Uranium anomalies in Barandagh region by using airborne radiometry data
Pages 31-44; article 79969; DOI 10.22059/jesphys.2009.79969; language FA.
A. R. Lackzaei, Graduate Student of Geophysics, Institute of Geophysics, University of Tehran, Iran
M. Nabi-Bidhendi, Associate Professor, Earth Physics Department, Institute of Geophysics, University of Tehran, Iran (ORCID 0000-0002-9555-8327)
A. Zia Zarifi, Academic member of Islamic Azad University, Lahijan Branch, Iran
F. Yegani, Employee of the Atomic Energy Organization of Iran
M. K. Hafizi, Associate Professor, Earth Physics Department, Institute of Geophysics, University of Tehran, Iran
Journal Article; 2021-02-21.
New statistical techniques are gaining favor and momentum for separating background concentration values from anomalous values when assessing the economics of extracting Uranium deposits. In mineral exploration and the feasibility of
exploration procedures, old conventional methods are being replaced by new ones rooted in natural distribution patterns. One such method is the use of fractal geometry to separate the various statistical populations involved in these studies, such as different background values, threshold limits and anomalous values.
In this paper, in the first step, the separation of anomalous values is performed by means of classical statistics. The frequency-distribution tables of Uranium, Thorium and Potassium are compiled, the frequency-distribution histograms are plotted, and the statistical parameters of these three elements are estimated; anomalous values are then separated on the basis of dispersion around the mean. In the second step, the separation of anomalous values is performed using the fractal method based on concentration-area curves.
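The two steps can be illustrated side by side. The sketch below runs on synthetic data (an assumption; the study's values come from the airborne survey): a classical mean-plus-two-standard-deviations threshold, then a concentration-area (C-A) curve whose log-log slope yields a fractal dimension.

```python
import numpy as np

# Two anomaly-separation approaches sketched on synthetic data
# (illustrative only; thresholds in the study come from its own data).

rng = np.random.default_rng(0)
conc = rng.lognormal(mean=1.0, sigma=0.5, size=1000)  # synthetic U concentrations

# 1) Classical statistics: threshold at mean + 2 * standard deviation.
threshold = conc.mean() + 2.0 * conc.std()
anomalous = conc[conc > threshold]

# 2) Concentration-area (C-A) method: for each concentration level c,
# A(c) = area (here: count of cells) with values >= c; slope breaks in
# log A vs log c separate statistical populations.
levels = np.quantile(conc, np.linspace(0.1, 0.99, 30))
areas = np.array([(conc >= c).sum() for c in levels])
log_c, log_a = np.log10(levels), np.log10(areas)
# A straight-line fit on a log-log segment gives (minus) the fractal
# dimension for that population; tangent intersections mark thresholds.
slope = np.polyfit(log_c, log_a, 1)[0]
```

In practice the C-A curve is fitted piecewise, and the intersections of the tangent lines, rather than a single global slope, define the background/anomaly boundaries.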
In this work, the classical statistical method is compared with fractal techniques for separating and grouping the various values. The data used in this study are airborne geophysical data of the area, based on the gamma-ray emission of radioactive nuclides present in the Earth's natural environment. In the first stage of the study, statistical parameters such as the mean, mode, median, dispersion about the mean, standard deviation, skewness and kurtosis of the data were calculated and plotted. In the second stage, using fractal-geometry techniques, concentration-area fractal curves were drawn after interpolation of the digitized X, Y and Z data. On the basis of the concentration-area model, the fractal dimensions were calculated, and the various statistical populations were separated using tangents drawn to the fractal curves. The trends of variation in the statistical populations representing the interpreted concentration values were derived with the above two procedures, and the advantages and disadvantages of the methods are described. Finally, based on both the classical statistical and the fractal methods, anomaly maps are plotted in which the anomalous values are separated from the background values for all three radioactive elements: Uranium, Thorium and Potassium.
https://jesphys.ut.ac.ir/article_79969_d5307b73a823b594214ae84b7a415bea.pdf
Journal of the Earth and Space Physics (Institute of Geophysics, University of Tehran), ISSN 2538-371X, Vol. 35, No. 1, 2009-04-21.
Short period fluctuations of seismicity around Tehran inferred from "a" and "b" values
Pages 45-57; article 79970; DOI 10.22059/jesphys.2009.79970; language FA.
M. Ashtari Jafari, Instructor, Earth Physics Department, Institute of Geophysics, University of Tehran, Iran
Journal Article; 2021-02-21.
The earthquake size distribution follows a power law whose slope is known as the b-value and whose constant is known as the a-value. Fluctuations of the b-value have been studied theoretically in laboratories and investigated in practice in several seismotectonic zones, e.g. volcanic areas, continental rifts and mines, which also present different stress regimes. The b-value expresses the relative proportions of large and small events, and has found many applications in seismic-hazard studies, spatio-temporal prediction and earthquake physics. The a-value, on the other hand, reflects the regional seismicity level, so studying these parameters can be of great help in an area like Tehran, with its high concentration of people and socio-economic activity. To begin this study, we extracted events from the Tehran Digital Seismic Network database. Processing continued with the removal of time-dependent events under examination of the Poissonian assumption, and then with computation of the magnitude of completeness using the goodness-of-fit method. The a-value and b-value changes were then mapped in time and space. The temporal changes of the b-value are not significant over the period of the data, which may be controlled by local effects, but a reduction with depth does exist. Both values show a change around 51.5º E. The b-value map also shows a reduction toward regions with a high density of thrust and strike-slip faults.
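The a- and b-values here are the parameters of the Gutenberg-Richter relation log10 N(M) = a − b·M. One common way to estimate b above the magnitude of completeness Mc is the Aki/Utsu maximum-likelihood formula; this is a standard estimator, not necessarily the one used in the paper. A sketch with made-up magnitudes:

```python
import math

def b_value_ml(mags, mc, dm=0.1):
    """Aki/Utsu maximum-likelihood b-value for magnitudes >= mc,
    with a correction dm/2 for magnitudes binned at width dm."""
    m = [x for x in mags if x >= mc]
    mean_m = sum(m) / len(m)
    return math.log10(math.e) / (mean_m - (mc - dm / 2.0))

def a_value(n_events, b, mc):
    """a-value such that log10 N(mc) = a - b * mc."""
    return math.log10(n_events) + b * mc

mags = [2.1, 2.4, 2.2, 3.0, 2.6, 2.8, 2.3, 3.4, 2.5, 2.7]  # toy catalog
b = b_value_ml(mags, mc=2.0)
a = a_value(len(mags), b, mc=2.0)
```

Mapping these two estimates over a sliding spatial or temporal window is the usual way the fluctuations described in the abstract are produced.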
https://jesphys.ut.ac.ir/article_79970_a3b7fdb160862ebc978758da767142e1.pdf
Journal of the Earth and Space Physics (Institute of Geophysics, University of Tehran), ISSN 2538-371X, Vol. 35, No. 1, 2009-04-21.
Modeling the co-seismic deformation field of a fault and determining the sensitivity of the geometrical and physical parameters of the model to this deformation field
Pages 59-73; article 79971; DOI 10.22059/jesphys.2009.79971; language FA.
S. Noori, Graduate Student, Faculty of Geodesy & Geomatics Engineering, K. N. Toosi University of Technology, Tehran, Iran
B. Voosoghi, Assistant Professor, Faculty of Geodesy & Geomatics Engineering, K. N. Toosi University of Technology, Tehran, Iran (ORCID 0000-0002-5667-0447)
A. M. Abolghasem, Assistant Professor, Faculty of Geodesy & Geomatics Engineering, K. N. Toosi University of Technology, Tehran, Iran
Journal Article; 2021-02-21.
The study of faulting and the resulting deformations of the Earth's surface is an essential research field in Iran, as the major part of this large country lies in very active seismic zones. Any study in this field can help mitigate the risk of earthquake hazards.
3D modeling of the displacement and surface deformation caused by earthquake faulting, based on a homogeneous, isotropic, elastic half-space model, is the main aim of this paper. The paper focuses on modeling the 3D co-seismic deformation field caused by stress accumulation and its release along seismogenic faults. The most commonly used analytic models of fault deformation are based on the dislocation solutions of Okada (1985, 1992). This dislocation model is used here to investigate surface deformations generated by strike-slip and dip-slip faulting. A sensitivity-analysis method is applied to determine the sensitivity of the model, and of its resulting displacement field, to changes in the model parameters.
During the last decades, powerful new models have been developed and deployed, with encouraging results, to improve our knowledge of fault-system behavior and the consequent earthquake hazards. Fault-displacement models based on elastic dislocation theory have been used to calculate displacements and strains due to co-seismic slip events. In elastic dislocation theory, faults are treated as displacement discontinuities, or dislocations, in an otherwise continuous elastic medium; that is, faults are represented as surfaces across which the elastic displacement field is discontinuous.
Elastic dislocation theory is conceptually valid for modeling co-seismic deformation. Our models use the elastic dislocation formulation of Okada, which expresses the displacement field U(x, y, z) at any given point as a function of the fault parameters (slip, dip, strike, length and width) and the elastic constants of the continuum, for rectangular fault panels with horizontal upper and lower edges. The Okada formulation is mathematically robust and tractable, which makes it suitable for rapid, iterative forward numerical modeling.
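The full Okada expressions are lengthy, but the character of such solutions can be seen in a classical 2D antiplane analogue: a vertical strike-slip fault slipping uniformly from the free surface to depth d in an elastic half-space. This textbook screw-dislocation result is a simplification for illustration only, not the Okada formulation used in the paper:

```python
import math

# Simplified 2D antiplane (screw-dislocation) analogue of a co-seismic
# strike-slip model: a vertical fault slipping uniformly by `slip_m`
# from the free surface down to depth d_km. Classical textbook result,
# NOT the full 3D Okada (1985) solution.

def fault_parallel_displacement(x_km, slip_m, d_km):
    """Surface fault-parallel displacement at distance x from the trace
    (x != 0). Antisymmetric across the fault; tends to +/- slip/2 at
    the trace and decays with distance."""
    return math.copysign(slip_m / math.pi * math.atan(d_km / abs(x_km)), x_km)

u_near = fault_parallel_displacement(0.1, 2.0, 10.0)   # close to +slip/2
u_far = fault_parallel_displacement(50.0, 2.0, 10.0)   # much smaller
```

The 3D Okada panels generalize this picture to finite fault length and width, arbitrary dip and rake, and all three displacement components.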
In the first section of the paper, the relationship between surface deformation and dislocation theory is summarized using the representation formula. Dislocation theory can be described as the part of the theory of elasticity dealing with surfaces across which the displacement field is discontinuous. As is common in mathematical physics, some simplifying assumptions are necessary: the curvature of the Earth, its gravity, temperature, magnetism and inhomogeneity are neglected, and a semi-infinite medium that is homogeneous and isotropic is considered. For this modeling, the fault parameters that must be considered are the dislocation amount and the length, width, depth and dip angle of the fault plane.
The model can calculate displacements at any depth and, in particular, at the free surface. Here, the displacement field is first calculated for a simulated fault, and a sensitivity analysis is then carried out on the model parameters. The dislocation model provides surface deformation fields generated by strike-slip and dip-slip faulting, and vector maps of the horizontal and vertical displacement fields can be produced.
In the next step, a sensitivity analysis is performed to determine the sensitivity of the model, and of its deformation behavior, to each fault parameter, and the results for the strike-slip and dip-slip cases are compared. The analysis shows that the model is most sensitive to the dislocation parameter and least sensitive to the Lamé coefficients.
The numerical results show that as the amount of dislocation increases, the range and area of the surface deformation grow. The horizontal displacements are more sensitive to changes in the dislocation amount than the vertical displacements. The results of the analysis are summarized in the following table.
Result of sensitivity analysis:

No.  Parameter
1    Dislocation (U)
2    Depth of fault (c)
3    Dip angle (δ)
4    Width of fault (w)
5    Length of fault (L)
6    Lamé coefficients (λ, µ)
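A ranking of this kind can be produced generically by one-at-a-time, normalized finite-difference sensitivities of the surface displacement with respect to each parameter. The sketch below uses a deliberately simple surrogate displacement function, an assumption for illustration, not the Okada solution or the paper's model:

```python
# One-at-a-time normalized sensitivity of a model output with respect
# to each parameter, via central finite differences. The displacement
# function is a toy surrogate, NOT the Okada (1985) formulation.

def surface_displacement(params):
    U, c, w = params["slip"], params["depth"], params["width"]
    return U * w / (c + w)  # stand-in for one point of the surface field

def normalized_sensitivity(f, params, name, rel_step=1e-4):
    """df/dp scaled by p/f: dimensionless, comparable across parameters."""
    p0, f0 = params[name], f(params)
    h = rel_step * p0
    hi = dict(params, **{name: p0 + h})
    lo = dict(params, **{name: p0 - h})
    df = (f(hi) - f(lo)) / (2.0 * h)
    return df * p0 / f0

base = {"slip": 2.0, "depth": 12.0, "width": 8.0}
ranking = sorted(
    base,
    key=lambda n: abs(normalized_sensitivity(surface_displacement, base, n)),
    reverse=True,
)  # slip dominates for this surrogate, mirroring the table's structure
```

Replacing the surrogate with a full Okada evaluation (and adding dip, length and the Lamé coefficients to `base`) reproduces the kind of ranking tabulated above.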
The model can be applied to simulate the co-seismic deformation field of a fault, in order to prepare a hazard map for the investigated fault in case of an earthquake due to fault motion, and for use in further planning. This knowledge translates into tangible societal benefits by providing the basis for more effective hazard assessment and mitigation efforts.
3D modeling of displacement and surface deformation caused by earthquake faulting based on a homogeneous, isotropic, elastic half space model is the main aim of this paper. The paper focuses on the modeling of a 3D co-seismic deformation field caused by stress accumulation and its release along seismogenic faults based on a homogeneous elastic half-space model. The most commonly used analytic models of fault deformations have been based on the dislocation solutions of Okada (1985, 1992). This dislocation model is used to investigate surface deformations which are generated by strike-slip and dip-slip faulting. A method of sensitivity analysis is applied to determine sensitivity of the model and its resultant displacement field with respect to the change of parameters of the model.
During the last decades, powerful new models have been developed and deployed with encouraging results for improving knowledge of fault system behavior and its consequent earthquake hazards. Fault displacement models based on elastic dislocation theory have been used to calculate displacements and strains due to co-seismic slip events. In the elastic dislocation theory, faults are considered as displacement discontinuities or dislocations in an otherwise continuous elastic medium. In this approach, faults are represented as surfaces across which there is a discontinuity in the elastic displacement field.
The elastic dislocation theory is conceptually valid for modeling co-seismic deformations. The elastic dislocation formulation of Okada is used in our models, which expresses the displacement field U(x, y, z) at any given point as a function of fault parameters (slip, dip, strike, length, and width) and the elastic constants within the continuum, for rectangular fault panels with horizontal upper and lower edges. The Okada formulation is mathematically robust and tractable, and these attributes make it suitable for rapid, iterative, forward numerical modeling.
In the first section of the paper, the relationship between surface deformation and dislocation theory will be summarized using representation formula. The dislocation theory can be described as that part of the theory of elasticity dealing with surfaces across which the displacement field is discontinuous, the suggestion seems reasonable. As commonly done in mathematical physics, it is necessary for simplicity to make some assumptions. Here the curvature of the earth, its gravity, temperature, magnetism and non-homogeneity are neglected and a semi-infinite medium which is homogeneous and isotropic is considered. For this modeling, the fault parameters that must be considered, are dislocation amount, length, width, depth and dip angle for fault plane.
This model can calculate displacements at every depth and the free surface specially. Here, the first displacement field is calculated for a simulated fault and sensitivity analysis is carried out for model parameters. The dislocation model provides us with surface deformation fields generated by strike-slip and dip-slip faulting and the vector maps of horizontal and vertical displacement fields can be represented.
In the next step, the sensitivity analysis is done to determine the sensitivity of the model and its deformation behavior with respect to any fault parameters. Then the results of the analysis for both cases of strike and dip-slip faults are compared. The analysis shows that the model has maximum sensitivity to dislocation parameter and minimum sensitivity to lame coefficients.
The numerical results of the analysis show that when the amount of dislocation increases the range and area of the surface deformation are greater. The horizontal displacements are more sensitive to the change of the dislocation amount in comparison with the vertical displacements. The results of the analysis result are summarized in the following table.
Result of the sensitivity analysis (parameters ordered from highest to lowest model sensitivity):

No.  Parameter
1    Dislocation (U)
2    Depth of fault (c)
3    Dip angle (δ)
4    Width of fault (w)
5    Length of fault (L)
6    Lamé coefficients (λ, µ)
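A sensitivity ranking like the one in the table above can be produced, in principle, by perturbing each parameter by a small fraction and measuring the normalized change in a chosen model output. The surrogate model and parameter values below are toy placeholders, not the actual dislocation model:

```python
import math

def sensitivity(model, params, key, frac=0.01):
    """Normalized finite-difference sensitivity of model(params) to a
    small relative perturbation of params[key]."""
    base = model(params)
    bumped = dict(params, **{key: params[key] * (1.0 + frac)})
    return abs(model(bumped) - base) / (abs(base) * frac)

# Toy surrogate for peak surface displacement (illustrative only):
# proportional to slip U, decaying with fault depth c over width w.
surrogate = lambda p: p["U"] * math.exp(-p["c"] / p["w"])
p0 = {"U": 1.0, "c": 5.0, "w": 10.0}
ranking = sorted(p0, key=lambda k: -sensitivity(surrogate, p0, k))
```

For this surrogate the slip U dominates, mirroring the table's finding that dislocation is the most influential parameter.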
The model can be applied to simulate the co-seismic deformation field of the investigated fault, to prepare a hazard map for use in the event of an earthquake caused by its motion, and for further planning. This knowledge translates into tangible societal benefits by providing the basis for more effective hazard assessment and mitigation efforts.

https://jesphys.ut.ac.ir/article_79971_b73a9d0f917d7272981046a65c79147c.pdf

Institute of Geophysics, University of Tehran
Journal of the Earth and Space Physics, ISSN 2538-371X, Vol. 35, No. 1, 21 April 2009
Focal mechanism analysis using synthetic seismograms
Pages 75-88, DOI: 10.22059/jesphys.2009.79972
M. R. Hatami, Assistant Professor, Earth Physics Department, Institute of Geophysics, University of Tehran, Iran
Z. H. Shomali, Assistant Professor, Earth Physics Department, Institute of Geophysics, University of Tehran, Iran
Gh. Javan-Doloei, Assistant Professor, International Institute of Earthquake Engineering and Seismology (IIEES), Tehran, Iran
Journal Article, 2021-02-21

According to the representation theorem, the elastic displacement due to a point source is given by the following equation:
(1)    u_n(x, t) = M_pq [ G_np,q(x, t; ξ, 0) * s(t) ]

In this equation, u_n is the n-th component of the displacement, s(t) is the source time function, which indicates how the energy is released during the earthquake process, G_np,q is the spatial derivative of the Green's function that describes the propagation path effects between the source located at ξ and the station at x, and M_pq are the moment tensor components. In this equation all components of the moment tensor are assumed to have the same time dependence. Computation of the Green's function is the most important step in producing synthetic seismograms.
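Equation (1) expresses the seismogram as a moment-tensor-weighted sum of Green's-function derivatives convolved with the source time function. A minimal numerical sketch follows; the random "Green's functions" and all names here are hypothetical placeholders, not a real wavenumber-integration code:

```python
import numpy as np

def synth_seismogram(M, greens, stf):
    """u_n(t) = sum over p,q of M[p,q] * (G_np,q * s)(t), schematically
    following eq. (1). M: (3,3) moment tensor; greens: dict mapping
    (p, q) to the G_np,q time series; stf: source time function samples."""
    nt = len(stf) + len(next(iter(greens.values()))) - 1
    u = np.zeros(nt)
    for (p, q), g in greens.items():
        u += M[p, q] * np.convolve(g, stf)   # convolution with s(t)
    return u

# Toy example: random stand-in Green's functions, triangular source pulse.
rng = np.random.default_rng(0)
greens = {(p, q): rng.standard_normal(64) for p in range(3) for q in range(3)}
stf = np.convolve(np.ones(8), np.ones(8)) / 64.0   # unit-area triangle
M = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])                    # vertical strike-slip DC
u = synth_seismogram(M, greens, stf)
```

Because the relation is linear in M, doubling the moment tensor doubles the seismogram; this linearity is what makes the moment-tensor inversion described below a linear problem.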
1. Simulation of an Earthquake and its Linear Inversion for a Completely Shear Source (Pure Double-Couple)
1-1 Earthquake Simulation: For the simulation, a source with specified rake, dip, and strike was assumed. The synthetic seismograms were then calculated using the wavenumber integration method for a source depth of 12 km.
1-2 Linear Inversion of the Seismograms for Determination of the Earthquake Source Parameters under Pure Double-Couple Conditions: The moment tensor is an overall description of the earthquake source; i.e., volume changes and shear sources in different orientations are all contained in the moment tensor. Consequently, an earthquake due to a pure double-couple can be considered a special case of the moment tensor.
For large earthquakes (Mw ≥ 6.0), the source time function should also be included in the inversion. In such cases, the unknown model parameters are the six components of the moment tensor and the components of the source time function.
In this section, the inversion was carried out under pure double-couple conditions; in other words, it was assumed that the sum of the diagonal components of the moment tensor was zero and that its eigenvalues were proportional to 1, 0, and −1.
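The pure double-couple constraints (zero trace; eigenvalues proportional to 1, 0, −1) can be verified numerically by building a double-couple tensor from orthogonal unit fault-normal and slip vectors, M = M0 (n dᵀ + d nᵀ), a standard construction sketched here with hypothetical vectors:

```python
import numpy as np

def double_couple(n, d, M0=1.0):
    """Pure double-couple moment tensor M = M0 * (n d^T + d n^T),
    with n the unit fault normal and d the unit slip vector, n . d = 0."""
    n, d = np.asarray(n, float), np.asarray(d, float)
    return M0 * (np.outer(n, d) + np.outer(d, n))

# Hypothetical geometry: horizontal fault plane (normal along z),
# slip along x. Any orthogonal unit pair gives the same eigenvalues.
M = double_couple([0.0, 0.0, 1.0], [1.0, 0.0, 0.0])
evals = np.sort(np.linalg.eigvalsh(M))   # eigenvalues of a pure DC: -M0, 0, +M0
trace = np.trace(M)                      # deviatoric source: zero trace
```

Swapping n and d leaves M unchanged, which is the numerical face of the fault-plane/auxiliary-plane ambiguity discussed below.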
Different parameters may affect the solution of the linear inversion method, e.g. the frequency band, the number of components involved, and the way the stations are distributed. The effects of the most important parameters are discussed below.
A. The Effect of Frequency Band on the Inversion Solution: The linear inversion method was applied for different frequencies. In most cases, the earthquake parameters obtained were in complete agreement with the simulated earthquake, indicating that in the tested frequency bands all frequencies were below the corner frequency of the earthquake source. The solution obtained was very similar to the original mechanism; the resulting fault plane was perpendicular to the original plane, reflecting the inherent ambiguity between the fault plane and the auxiliary plane.
B. The Effect of the Number of Components Involved: The linear inversion method was applied for three different cases: first, only the vertical components were used; second, only the radial components; third, only the tangential components. In the first and second cases, the source parameters were recovered precisely, but in the third case the solution was completely wrong even though the depth was determined precisely.
C. The Effect of the Station Distribution Pattern: The effect of the station distribution was examined for configurations of seismographic stations in different quadrants. It was concluded that a solution can be obtained even when the data come from a single quadrant, but the more quadrants involved, the smaller the error.
2. Simulation of an Earthquake and its Linear Inversion for a Non-Pure Double-Couple (without volume variation)
2-1 Earthquake Simulation: In this case, a five-layer crustal model was used to simulate synthetic seismograms at eight stations distributed over the four quadrants around the source.
2-2 Linear Inversion of the Seismograms for Determination of the Earthquake Source Parameters under Non-Pure Double-Couple Conditions: In contrast to the linear inversion in Section 1-2, here the inversion was carried out under conditions closer to reality. In nature, elastic waves traverse layers of the Earth about which we do not have enough information; therefore, in earthquake mechanism determination, models are used that are much simpler than the real Earth.
As in the previous case, each of the parameters affecting the solution of the linear inversion method will be examined briefly.
A. The Effect of Frequency Band on the Inversion Solution: The linear inversion method was applied for different frequencies. In every case, a parabolic source time function with a corner frequency of 0.2 Hz was used. In all cases, the earthquake parameters were recovered and, as in the previous case, the depth was determined within 2-3 km of the original depth.
B. The Effect of the Number of Components Involved: Again the linear inversion method was applied for the three cases. The results indicated that, given the mechanism, the epicentral distances, and the depth of the earthquake, the vertical components have the largest amplitudes and therefore stabilize the inversion solution.
C. The Effect of the Station Distribution Pattern: Again, the effect of the station distribution was examined for configurations of seismographic stations in different quadrants; a solution could be obtained even when the data came from a single quadrant, but the more quadrants involved, the smaller the error.
Conclusion: Two different types of sources (pure and non-pure double-couple) were used to produce synthetic seismograms based on the wavenumber integration method for a given velocity model. In both cases, the source model was recovered precisely, depending on the conditions. Generally, it can be concluded that an imprecise velocity model manifests itself in the CLVD component as well as in the depth. In addition, if the stations are distributed over at least two quadrants, more precise solutions are obtained.
https://jesphys.ut.ac.ir/article_79972_ec8643187d580cddd7c59f97ad0fed7f.pdf

Institute of Geophysics, University of Tehran
Journal of the Earth and Space Physics, ISSN 2538-371X, Vol. 35, No. 1, 21 April 2009
Numerical simulation and experimental investigation of the thermal diffusivity of frozen soil under different moisture contents conditions
Pages 89-99, DOI: 10.22059/jesphys.2009.79973
Y. Khoshkhoo, Graduate Student of Agrometeorology, College of Water and Soil Engineering, Faculty of Agriculture and Natural Resources, University of Tehran, Iran
A. Khalili, Professor, College of Water and Soil Engineering, Faculty of Agriculture and Natural Resources, University of Tehran, Iran
H. Rahimi, Professor, College of Water and Soil Engineering, Faculty of Agriculture and Natural Resources, University of Tehran, Iran
P. Irannejad, Assistant Professor, Space Physics Department, Institute of Geophysics, University of Tehran, Iran
Journal Article, 2021-02-21

Soil thermal diffusivity is considered the most important thermal characteristic of the
soil; it determines the rate at which temperature changes propagate through the soil profile. Several methods are available to determine soil thermal diffusivity from observed temperature variations. Most of these methods are based on solutions of the one-dimensional heat conduction equation with constant diffusivity and thus apply to uniform soils only.
In the absence of local heat sources or sinks, the equation describing conductive heat transfer in a one-dimensional isotropic medium is:
(1)    ∂T/∂t = α ∂²T/∂z²

where T is the temperature, t is time, z is the soil depth, and α (m² s⁻¹) is the thermal diffusivity of the soil, equal to λ/C, with λ the thermal conductivity and C the volumetric heat capacity. Several methods have been developed for estimating the soil diffusivity using equation (1). Horton et al. (1983) tested six methods and concluded that the harmonic equation and the numerical method provide the most accurate results among them. Finite differences are considered the most applicable method for the numerical solution of the heat conduction equation in soils. For the approximation of partial derivatives using finite differences, different algorithms may be used; in the present research, the Crank-Nicolson method, which has a high degree of accuracy, was employed. Using this method, equation (1) can be discretised as:

(2)    −r T_{i−1}^{j+1} + (1 + 2r) T_i^{j+1} − r T_{i+1}^{j+1} = r T_{i−1}^{j} + (1 − 2r) T_i^{j} + r T_{i+1}^{j}

with r = α Δt / (2 Δz²),

where i and j indicate the depth node and the time step, respectively.
In the present work, for the numerical solution of the equation, time intervals of 1 s (Δt = 1 s) and spatial intervals of 1 cm (Δz = 1 cm) have been employed. With the application of the Crank-Nicolson method, a set of simultaneous equations is produced for each time interval. This set of equations can be solved using different methods; in the present research the Tri-Diagonal Matrix Algorithm (TDMA) has been employed. When the initial and boundary conditions are known and soil temperatures at different depths have been measured, the soil thermal diffusivity can be determined using a trial-and-error technique. The approach is based on solving equation (2) iteratively, changing α and determining the α-value for which the calculated temperatures best match the observations. In the present work the criterion used for choosing α is minimizing the Root Mean Square Error (RMSE) of the calculated (C_i) against the observed (M_i) temperatures:

RMSE = sqrt( (1/n) Σ (C_i − M_i)² )
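The trial-and-error procedure described above can be sketched as follows. This is a hedged illustration, not the authors' code: the boundary handling, data layout, and all numbers are assumptions, and a dense solver stands in for the TDMA (Thomas) algorithm for brevity.

```python
import numpy as np

def crank_nicolson_step(T, alpha, dt, dz, top, bottom):
    """Advance dT/dt = alpha * d2T/dz2 one time step with the
    Crank-Nicolson scheme (eq. 2); Dirichlet boundaries set to the
    measured top/bottom temperatures."""
    r = alpha * dt / (2.0 * dz ** 2)
    n = len(T)
    A = np.zeros((n, n))
    b = np.zeros(n)
    A[0, 0] = A[-1, -1] = 1.0           # boundary rows
    b[0], b[-1] = top, bottom
    for i in range(1, n - 1):           # interior rows of eq. (2)
        A[i, i - 1] = A[i, i + 1] = -r
        A[i, i] = 1.0 + 2.0 * r
        b[i] = r * T[i - 1] + (1.0 - 2.0 * r) * T[i] + r * T[i + 1]
    return np.linalg.solve(A, b)

def fit_alpha(T0, tops, bottoms, measured, dt, dz, alphas):
    """Trial-and-error alpha: pick the candidate minimizing the RMSE
    between simulated and measured interior temperatures."""
    best = None
    for a in alphas:
        T, preds = np.array(T0, float), []
        for top, bot in zip(tops, bottoms):
            T = crank_nicolson_step(T, a, dt, dz, top, bot)
            preds.append(T[1:-1].copy())
        rmse = np.sqrt(np.mean((np.array(preds) - measured) ** 2))
        if best is None or rmse < best[1]:
            best = (a, rmse)
    return best

# Synthetic check: build a "measured" series with a known diffusivity,
# then recover it by trial and error (all numbers hypothetical).
n, dz, dt, a_true = 7, 0.01, 1.0, 5e-7
T0 = np.linspace(0.0, -5.0, n)
tops = [0.5 * np.sin(k / 10.0) for k in range(50)]
bottoms = [-5.0] * 50
T, meas = np.array(T0), []
for top, bot in zip(tops, bottoms):
    T = crank_nicolson_step(T, a_true, dt, dz, top, bot)
    meas.append(T[1:-1].copy())
a_best, rmse = fit_alpha(T0, tops, bottoms, np.array(meas), dt, dz,
                         [1e-7, 5e-7, 1e-6])
```

In practice a banded solver (e.g. the Thomas algorithm the paper uses) replaces the dense solve, since the Crank-Nicolson matrix is tridiagonal.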
The above procedure was conducted to evaluate the thermal diffusivity (α) of a silty soil with mass moisture contents of 5, 10, 15 and 20 per cent. A chamber with dimensions of 500×500×800 mm was made and its walls and bottom were carefully insulated using layers of plasto-foam sheets with a thickness of 100 mm, to minimize the exchange of heat with the surrounding environment. The chamber was filled with the soil of the given texture and moisture. To eliminate evaporation from the top surface of the soil, it was covered by a plastic sheet. Temperature was measured using seven thermometers installed at depths of 50, 110, 170, 250, 350, and 500 mm, as well as at the top surface of the soil. The sensors were connected to a computer, where soil temperatures were recorded at 1 min intervals. Frost conditions in the soil were simulated using a cooling system located at the top of the soil that was able to produce temperatures as low as −20 °C.
Thermal diffusivity was estimated for two different thermal conditions in the soil profile: one with temperatures lower than −2 °C throughout the soil at all times, and the other with temperatures lower than zero degrees Celsius at some depths and higher than zero at others.
According to the results, the model used in this study led to a low RMSE (between 0.41 and 0.71 °C) and reasonable predictions of soil temperature for the first case (i.e. temperatures lower than −2 °C at all times and depths). The results showed that the value of α increased with increasing moisture content up to a critical point and then decreased; the maximum value of α occurred at 15 per cent moisture content.
The model failed to estimate the soil temperature profile within an acceptable range of error in the second case, with RMSE values between the simulated and measured temperatures of 1.58 to 2.76 °C. This failure was attributed to the fact that the assumptions made in solving the heat conduction equation, namely the homogeneity of the soil and the absence of heat sources and sinks within it, were not fulfilled in the second case.
https://jesphys.ut.ac.ir/article_79973_a383249774120011cdcc467cb4139aa2.pdf

Institute of Geophysics, University of Tehran
Journal of the Earth and Space Physics, ISSN 2538-371X, Vol. 35, No. 1, 21 April 2009
The role of convection parameterization in the simulation of the winter temperature and precipitation fields over Iran using Regional Climate Model (RegCM3)
Pages 101-120, DOI: 10.22059/jesphys.2009.79974
P. Irannejad, Assistant Professor, Space Physics Department, Institute of Geophysics, University of Tehran, Iran
F. Ahmadi-Givi, Assistant Professor, Space Physics Department, Institute of Geophysics, University of Tehran, Iran
R. Pazouki, Graduate Student of Meteorology, Institute of Geophysics, University of Tehran, Iran
Journal Article, 2021-02-21

Convection affects the climate through its role in the redistribution of energy and moisture in the atmosphere, and subsequently in producing clouds and precipitation. Even with recent improvements in computational power, numerical models of the atmosphere still have to run at spatial resolutions too coarse to capture local-scale processes such as convection. For this reason, and because of the importance of convection for the surface climate, parameterization schemes have been developed to empirically upscale convection to the scale of the model grid areas.
This study aims at evaluating the impact of different convection parameterizations on the simulation of precipitation and air temperature by Version 3 of the Regional Climate Model (RegCM3; Dickinson et al., 1989; Giorgi, 1989). The convection schemes currently coupled with RegCM3 are those of Kuo-Anthes (Anthes, 1977), Betts (1986) and Grell (1993), the last of which may be closed using either the Arakawa-Schubert (1974) or the Fritsch and Chappell (1980) closure scheme. The simulations are conducted for the four-month period of December 1998 to March 1999 (inclusive) with 45 × 45 km grid spacing over a domain having 60 and 70 grid points along latitude and longitude, respectively, centered over Iran at 34°N and 48°E. The initial and boundary conditions are derived from the NCEP/NCAR reanalysis. RegCM3 was run four times, keeping all components of the model and the initial and boundary conditions the same, each time coupling one of the convection schemes (Kuo-Anthes, Betts, Grell/Arakawa-Schubert, and Grell/Fritsch-Chappell) with the model. To minimize the impact of possibly incorrect initial conditions, we treated the first month as the model's spin-up period and analyzed the results for the three months of January to March 1999.
The simulated monthly mean precipitation and air temperature, as well as the spatial distributions of the model outputs using the different schemes, are intercompared and compared with observations from the Climatic Research Unit (CRU) at the University of East Anglia, United Kingdom. The results show that the grid-scale monthly mean and seasonal (winter) mean near-surface air temperatures simulated by RegCM3 coupled with the different convection schemes agree very well with the corresponding observed values. The slope of the regression line of the simulated mean winter temperature against observations is very close to one, varying between 0.973 (Fritsch-Chappell) and 0.997 (Kuo-Anthes), with the coefficient of determination (R²) in the range 0.941 to 0.944, respectively. The agreement between the simulated monthly mean temperatures and observations for the three months is somewhat lower than that of the seasonal mean, with the slope of the regression line varying between 0.908 and 0.939 and the coefficient of determination between 0.935 and 0.938. It is concluded that RegCM3 is highly effective in simulating air temperature, irrespective of the convection scheme used; the differences between the mean temperatures simulated with the different schemes are very small.
On the other hand, the effectiveness of RegCM3 in simulating monthly and seasonal precipitation is very low, and the differences between the precipitation simulated with the four convection schemes are negligible. Although the geographical distribution pattern of precipitation is well simulated by the model, regressing the simulated monthly and seasonal precipitation against observations shows that RegCM3 generally underestimates precipitation during the winter months. The slope of the regression lines differs significantly from unity, varying between about 0.570 and about 0.715, and the highest coefficient of determination found for the four schemes, for the three months and for the season, is smaller than 0.280.
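The slope and coefficient-of-determination comparison used above can be reproduced with an ordinary least-squares fit; the numbers below are synthetic and purely illustrative.

```python
import numpy as np

def slope_and_r2(obs, sim):
    """OLS slope of the simulated values regressed on the observed
    values, plus the coefficient of determination R^2 of the fit."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    slope, intercept = np.polyfit(obs, sim, 1)
    pred = slope * obs + intercept
    ss_res = np.sum((sim - pred) ** 2)
    ss_tot = np.sum((sim - sim.mean()) ** 2)
    return slope, 1.0 - ss_res / ss_tot

# Synthetic example: a near-perfect temperature simulation, slope ~ 0.97
obs = np.array([0.0, 5.0, 10.0, 15.0, 20.0])
sim = 0.97 * obs - 0.3
slope, r2 = slope_and_r2(obs, sim)
```

A slope near one with high R² (as for temperature) indicates near-unbiased agreement, while a slope well below one (as for precipitation) indicates systematic underestimation.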
Given the insignificant differences among the model simulations using any of the four convection schemes, the simplest form of convection parameterization with the lowest computational cost, i.e. the Kuo-Anthes scheme, proved to be the most appropriate available scheme for medium-range weather prediction in Iran.
Given the insignificant differences among the model simulations using any of the four convection schemes, the simplest form of convection parameterization with the lowest computational cost, i.e. the Kuo-Anthes scheme, proved to be the most appropriate available scheme for medium-range weather prediction in Iran.https://jesphys.ut.ac.ir/article_79974_8b32f41af27b68ef1fc85122b636468c.pdfInstitute of Geophysics, University of TehranJournal of the Earth and Space Physics2538-371X35120090421The 500 hPa atmospheric centers of action and circulation patterns over the Middle East and their relationship with precipitation in IranThe 500 hPa atmospheric centers of action and circulation patterns over the Middle East and their relationship with precipitation in Iran1211417997510.22059/jesphys.2009.79975FAT. RazieiAssistant Professor, Soil Conservation and Watershed Management Research Institute, Tehran, IranA. MofidiAssistant Professor, Geography Department Tabarestan Institute of Higher Education, Chalous, IranA. ZarinAssistant Professor, Geography Department Tabarestan Institute of Higher Education, Chalous, IranJournal Article20210221It is well-known that regional weather and climate around the globe are strongly influenced by large-scale atmospheric circulation patterns. The centers of action at different levels of the atmosphere play an essential role in controlling the climate of different climatic regions around the globe. Over the years, several efforts have been made to identify the main centers of action and the large-scale atmospheric circulation patterns leading to precipitation events, and to study how their variability affects the frequency and intensity of precipitation. Hence, using weather types or circulation patterns, one is able to investigate and explain the physical causes of the variation in frequency and intensity of precipitation over a region.
A review of synoptic studies of Iran suggests that, although many subjective circulation classifications have been implemented using observational data, mainly on a monthly basis, few objective attempts have been made using reanalysis data, especially daily data. Hence, this paper aims to identify the main centers of action and circulation patterns related to winter precipitation variability over Iran.
To recognize winter atmospheric circulation patterns over the Middle East, the mean daily 500 hPa geopotential heights for December, January, February and March were retrieved from the National Centers for Environmental Prediction/National Center for Atmospheric Research (NCEP/NCAR) reanalysis archive, covering the period from January 1965 to 2000 and accounting for 4355 days over 36 years. Subsequently, daily precipitation rates for selected days were also retrieved from the NCEP/NCAR reanalysis archive in order to assess the influence of the identified circulation patterns on precipitation in Iran.
To classify the 4355 days and extract the main circulation patterns, S-mode PCA was applied to the data matrix, and the 9 leading PCs were retained based on the scree test. The retained PCs were then rotated using the varimax criterion. By plotting the rotated PC loadings, the centers of action at the 500 hPa level that control the winter climate of the Middle East were identified. To analyze the synoptic characteristics of each identified center of action, the 10 days with the highest PC score (positive phase) were selected for each PC. The composite maps of the selected days and the corresponding vorticity maps for the 500 and 1000 hPa levels were taken to represent the winter circulation patterns. Finally, by compositing the precipitation rates associated with each set of 10 selected days, the relationship between the identified synoptic circulation patterns over the Middle East and winter precipitation in Iran was investigated.
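The classification pipeline just described (S-mode PCA, scree-based truncation to 9 PCs, varimax rotation, selection of the 10 highest-scoring days per PC) can be sketched in numpy. This is an illustrative skeleton only: the data matrix below is random, standing in for the 4355-day by grid-point matrix of 500 hPa geopotential heights; the grid size of 200 is a hypothetical choice, not from the paper.

```python
import numpy as np

def varimax(L, gamma=1.0, max_iter=100, tol=1e-8):
    """Varimax rotation of a loadings matrix L (variables x factors)."""
    p, k = L.shape
    R = np.eye(k)
    var_old = 0.0
    for _ in range(max_iter):
        Lr = L @ R
        # SVD of the gradient of the varimax criterion gives the update
        u, s, vt = np.linalg.svd(
            L.T @ (Lr ** 3 - (gamma / p) * Lr * (Lr ** 2).sum(axis=0))
        )
        R = u @ vt
        if s.sum() - var_old < tol:
            break
        var_old = s.sum()
    return L @ R

# stand-in data matrix: 4355 winter days x 200 grid points (random; the real
# input would be the daily 500 hPa geopotential height fields)
rng = np.random.default_rng(1)
X = rng.normal(size=(4355, 200))
Z = (X - X.mean(axis=0)) / X.std(axis=0)              # S-mode: standardize each grid point
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
k = 9                                                  # PCs retained (scree test in the paper)
loadings = Vt[:k].T * (s[:k] / np.sqrt(len(X) - 1))    # spatial loadings: grid points x 9
rotated = varimax(loadings)                            # rotated spatial patterns
scores = Z @ rotated                                   # daily score of each rotated PC
top10 = np.argsort(scores[:, 0])[-10:]                 # 10 days with the highest PC-1 score
```

Because varimax is an orthogonal rotation, the total squared loading is preserved; only its distribution across PCs changes, which is what sharpens the spatial centers of action.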
The results indicate that the spatial pattern of winter precipitation in Iran, with the exception of the southern coastal areas of the Caspian Sea, is largely governed by the 500 hPa circulation patterns. The results also show that widespread water deficits and dry periods in Iran are related to the northward displacement of the Arabian high-pressure system over the western part of the Middle East in the mid-troposphere. Moreover, the results indicate that the deepening of the westerly wave and the increase of positive vorticity in the area between Iran and the southern part of the Red Sea, accompanied by the development and/or reinforcement of a high-pressure system over the area between eastern Saudi Arabia and the central part of the Red Sea, are responsible for widespread precipitation over vast areas of western and southwestern Iran. Investigation of the relationship between the synoptic circulation patterns and regional-scale winter precipitation in Iran shows that precipitation in the Caspian region is mostly related to the position and strength of lower-atmosphere high-pressure systems rather than to the mid-tropospheric circulation patterns. This is evident if we consider that in 4 of the 9 identified circulation patterns the southern coastal areas of the Caspian Sea receive remarkable precipitation, owing to the predominance of a high-pressure system and an increase in negative vorticity in the mid-troposphere over the western part of the Caspian Sea, as well as the development and/or prolongation of anticyclonic circulation and northerly flows over the Caspian Sea.
https://jesphys.ut.ac.ir/article_79975_efea08a2d7e4c031b476735a08459474.pdfInstitute of Geophysics, University of TehranJournal of the Earth and Space Physics2538-371X35120090421Orbit integration in non-inertial framesOrbit integration in non-inertial frames187997610.22059/jesphys.2009.79976FAM. EshaghPhD.
student, Royal Institute of Technology, SE 10044, Stockholm, SwedenJournal Article20210221A precise orbit of a low Earth orbiting satellite helps us to compute the long-wavelength part of the Earth's gravity field. There are different methods and frames for orbit integration, depending on the problem and the satellite mission.
In this paper, the dynamic equations of satellite motion are presented in different navigation frames. A simple numerical study of a satellite orbit in local frames is also included. In these frames, the geodetic coordinates of the satellite are integrated directly. The numerical studies confirm that the north-east-down frame is not stable for orbit integration either. However, the paper shows how this problem can be solved by choosing a wander frame, and demonstrates its capability.
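As background to the frame question, the baseline case is direct integration of the two-body equations of motion in an inertial frame. The sketch below is not the paper's local-frame or wander-frame formulation; it is a minimal inertial-frame RK4 integrator for a hypothetical circular low Earth orbit, with conservation of orbital energy as the stability check.

```python
import numpy as np

GM = 3.986004418e14  # Earth's gravitational parameter, m^3 s^-2

def accel(r):
    """Point-mass gravitational acceleration in the inertial frame."""
    return -GM * r / np.linalg.norm(r) ** 3

def rk4_step(r, v, dt):
    """One classical fourth-order Runge-Kutta step for the two-body problem."""
    k1r, k1v = v, accel(r)
    k2r, k2v = v + 0.5 * dt * k1v, accel(r + 0.5 * dt * k1r)
    k3r, k3v = v + 0.5 * dt * k2v, accel(r + 0.5 * dt * k2r)
    k4r, k4v = v + dt * k3v, accel(r + dt * k3r)
    r = r + dt / 6.0 * (k1r + 2 * k2r + 2 * k3r + k4r)
    v = v + dt / 6.0 * (k1v + 2 * k2v + 2 * k3v + k4v)
    return r, v

# illustrative circular orbit at ~500 km altitude
r = np.array([6878.0e3, 0.0, 0.0])
v = np.array([0.0, np.sqrt(GM / 6878.0e3), 0.0])
E0 = 0.5 * (v @ v) - GM / np.linalg.norm(r)
for _ in range(6000):                  # roughly one orbital period at dt = 1 s
    r, v = rk4_step(r, v, 1.0)
E1 = 0.5 * (v @ v) - GM / np.linalg.norm(r)
drift = abs((E1 - E0) / E0)            # relative energy drift; tiny for a stable scheme
```

Integrating instead in a rotating or local (e.g. north-east-down) frame adds frame-rate and Coriolis-type terms to `accel`, and it is the behavior of those terms that makes some local frames numerically unstable, motivating the wander-frame choice in the paper.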
https://jesphys.ut.ac.ir/article_79976_bd5390a0ee10da09cc8f4de70a31de52.pdfInstitute of Geophysics, University of TehranJournal of the Earth and Space Physics2538-371X35120090421Detection of subsurface Qanats by Artificial Neural Network via Microgravity dataDetection of subsurface Qanats by Artificial Neural Network via Microgravity data9157997710.22059/jesphys.2009.79977FAA. R. HajianInstructor, Physics Department, Islamic Azad University, Najaf Abad Branch, Isfahan, IranV. E. ArdestaniAssociate Professor, Earth Physics Department, Institute of Geophysics, University of Tehran and Center of Excellence in Survey Engineering and Disaster Management, Tehran, IranC. LucasProfessor of control, Electrical Engineering Department, University of Tehran, IranS. M. SaghaiannejadAssociate Professor, Electrical Engineering, Electrical Engineering Department, Isfahan Technical University, Isfahan, IranJournal Article20210221A fully automatic algorithm is designed to detect subsurface Qanats (subterranean aqueducts) via Artificial Neural Networks. We first obtained the residual gravity anomaly from microgravity data and then applied it to a Multi-Layer Perceptron (MLP) trained on sphere and cylinder models.
As a field example, the depth of a subsurface Qanat buried under the north entrance of the Geophysics Institute is determined through the MLP (trained with noisy data).
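The sphere model mentioned above as MLP training data has a closed-form anomaly, and its depth can also be read off directly from the anomaly's half-width, which gives a useful sanity check on any network output. The sketch below is not the authors' MLP; it is the sphere forward model plus the classical half-width depth rule, with a hypothetical cavity (all numbers illustrative, not the field case).

```python
import numpy as np

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def sphere_gz(x, depth, mass):
    """Vertical gravity anomaly (m/s^2) of a buried sphere along a surface profile x (m)."""
    return G * mass * depth / (x ** 2 + depth ** 2) ** 1.5

# hypothetical air-filled cavity: radius 1 m, density deficit ~2000 kg/m^3
true_depth = 12.0                                    # m (illustrative)
mass = -(4.0 / 3.0) * np.pi * 1.0 ** 3 * 2000.0      # kg (negative: mass deficit)
x = np.linspace(-200.0, 200.0, 4001)                 # profile with 0.1 m spacing
g = sphere_gz(x, true_depth, mass)

# classical half-width rule for a sphere: depth ~ 1.305 * x_half,
# where x_half is the distance at which |g| falls to half its peak value
g_abs = np.abs(g)
x_half = np.max(np.abs(x[g_abs >= 0.5 * g_abs.max()]))
est_depth = 1.305 * x_half                           # recovers ~12 m
```

An MLP trained on many such (profile, depth) pairs for spheres and cylinders learns this inverse mapping implicitly, which is what lets it generalize to noisy field residual anomalies.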
https://jesphys.ut.ac.ir/article_79977_7c67200d9342048b353733d50fb5b051.pdfInstitute of Geophysics, University of TehranJournal of the Earth and Space Physics2538-371X35120090421Gravity field implied density modeling of topography, for precise determination of the geoidGravity field implied density modeling of topography, for precise determination of the geoid17317997810.22059/jesphys.2009.79978FAM. Najafi-AlamdariAssociate Professor, Faculty of Geodesy and Geomatics Engineering, K.N.Toosi University of Technology, Tehran, IranM. SedighiSenior technical staff, National Cartographic Center (NCC), Tehran, IranS. H. TabatabaieLand seismic manager, National Iran Oil Company (NIOC), Exploration management Dept., Tehran, IranJournal Article20210221Precise determination of the geoid using the Stokes-Helmert approach requires a density distribution model within the topography. The model is used for precise evaluation of the topographical indirect effects on gravity and potential applied in transforming between the real and the Helmert spaces. The mass density within the topography varies between 1000 and 3100 kg.m<sup>-3</sup>. Assigning the global average value of kg.m<sup>-3</sup> at a point, instead of its real point value, may cause errors of decimeter magnitude in the geoid determination. A regional-local gravity anomaly separation technique using the Bouguer gravity anomaly (BA) along with the free-air gravity anomaly (FA) in the region of Iran is used to estimate the local topographical effect on gravity from the observed anomalies, after eliminating the long-wavelength features of non-density origin, including isostatic features, from the observed BA. A Global Geopotential Model (GGM) is also used to eliminate the deep-seated, density-origin long-wavelength features from the observed anomalies.
Then, power spectral analysis, apparent density mapping, and forward modeling techniques are used to convert the local topographical effect on gravity into a corresponding 3-D density model (GRADEN) for the region. The model shows a thorough correlation with the superficial geological density (GEODEN) model at the surface level, provided that a reliable digitization of the model is available. Relative to a constant density, the GRADEN model contributes up to a meter to the geoid in mountainous areas, and 7 cm in the RMS sense, in the region.
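The sensitivity of the topographic gravity effect to the density range quoted in the abstract can be illustrated with the simplest possible model, the infinite Bouguer slab (2πGρh). This is only a back-of-the-envelope check of why a constant-density assumption is problematic, not the paper's 3-D GRADEN modeling; the 2670 kg.m<sup>-3</sup> reference value is the conventional crustal density used in Bouguer reductions.

```python
import numpy as np

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def bouguer_slab_mgal(density, thickness):
    """Gravity effect of an infinite slab, 2*pi*G*rho*h, in mGal (1 mGal = 1e-5 m/s^2)."""
    return 2.0 * np.pi * G * density * thickness / 1e-5

# effect of a 1 km topographic slab at the conventional reduction density
g_ref = bouguer_slab_mgal(2670.0, 1000.0)    # about 112 mGal
# spread over the density range 1000-3100 kg/m^3 quoted in the abstract
g_low = bouguer_slab_mgal(1000.0, 1000.0)
g_high = bouguer_slab_mgal(3100.0, 1000.0)
```

The spread between `g_low` and `g_high` for the same topography is tens of mGal, which is why propagating a laterally varying density model rather than a single constant changes the computed geoid at the decimeter level reported above.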
https://jesphys.ut.ac.ir/article_79978_94ff38d75ace1e1a7b3744fe977930a8.pdf